Ethnic Voters Targeted with Fake Images


Vidya Sethuraman
India Post News Service

Artificial intelligence is supercharging threats to the election system, according to experts, officials and observers, as it becomes easier to create and spread synthetic content that can stoke disinformation and confuse voters.

What once required a studio budget and a production team can now be put together with a few clicks, according to news reports. The result is that voters must navigate an election landscape every day in which it is difficult to gauge the authenticity of pictures, posts and videos – like the fake AI images of Black voters supporting former President Trump that recently circulated widely. At an EMS briefing on July 12, experts described the rise of AI-empowered racialized messaging and reported on efforts to pass new legislative controls before the 2024 elections.

Jonathan Mehta Stein, executive director of California Common Cause, a nonprofit watchdog organization, said AI has many positive uses, such as predicting where crime may occur and studying wind energy deployment and urban development trends. At the same time, AI can also cause serious problems across the country and society. In May 2023, for example, an AI-generated image circulating on social platforms appeared to show an attack on the Pentagon, briefly sending the U.S. stock market lower.

He believes that as the presidential election approaches, a flood of AI-generated pictures and videos will be mixed in with election information, making it difficult to distinguish true from false. AI can easily be used, for example, to create images and videos supporting a candidate, or to clone voices for recordings and advertisements attacking candidates from the opposing camp. Local elections face these risks as well.

Jinxia Niu, program manager for Chinese Digital Engagement at Chinese for Affirmative Action, said that over the past 12 months her team has collected as many as 600 pieces of AI-generated false information from various media and social platforms.

After this AI-generated disinformation was exposed, many English-language outlets issued retractions, but ethnic media rarely followed up with corrections of their own. She said the biggest challenge in curbing the spread of AI disinformation is that even when the fake pictures and videos are identified, it is difficult to trace their source. A second challenge is that minority communities, especially people who do not speak English well, have little understanding of AI-enabled fraud. Elderly people have almost no defense against AI disinformation, which is one reason AI-driven scams have repeatedly succeeded among Chinese and other Asian seniors.

Brandon Silverman, co-founder and former CEO of CrowdTangle (now owned by Meta), believes the impact of AI does not discriminate between ethnic groups. Some content misleads the public yet does not violate any law or rule. He believes the worldwide development of AI is unstoppable and may soon transform every industry and every aspect of life; the key is how to manage it and keep the public from being misled by AI-generated information.
