Latino Voters in Focus: AI-Generated Misinformation Threatens Fair Elections

  • AI models increasingly generate misinformation for Spanish-speaking voters.
  • Test analyses show a higher error rate in AI-generated Spanish content.

Eulerpool News

In the final days leading up to the presidential election, Latino voters in the United States are facing a flood of targeted Spanish-language advertising. At the same time, a new dimension of political messaging in the age of artificial intelligence is causing concern: chatbots generating unfounded claims about voting rights in Spanish.

According to an analysis by two nonprofit news organizations, election misinformation occurs more frequently in Spanish than in English. This complicates access to accurate information for one of the nation's fastest-growing and most influential voter groups. Voting rights organizations fear that AI models could further deepen information disparities for Spanish-speaking voters, who are being courted by both Democrats and Republicans.

To boost their presence, Vice President Kamala Harris will hold a rally this Thursday in Las Vegas with singer Jennifer Lopez and the Mexican band Maná. Former President Donald Trump, meanwhile, recently held an event in a heavily Hispanic region of Pennsylvania, despite backlash over offensive comments about Puerto Rico made at an earlier New York rally.

Proof News and Factchequeado, together with the Science, Technology and Social Values Lab at the Institute for Advanced Study, tested how popular AI models responded to election-related questions ahead of Election Day on November 5. They found that more than half of the Spanish-language responses were incorrect, compared with 43% of the English-language responses.

Meta's Llama 3 model, which powers the AI assistant in WhatsApp and Facebook Messenger, performed worst in the tests: nearly two-thirds of its Spanish responses were incorrect, compared with about half of its English responses. In one case, Meta's AI misinterpreted the meaning of a "federal only" voter in Arizona, falsely claiming the term referred to residents of U.S. territories such as Puerto Rico who cannot vote in presidential elections. In another, Anthropic's Claude advised users to contact election authorities in their "country or region," naming Mexico and Venezuela. Google's Gemini also stumbled, giving nonsensical answers about the definition of the Electoral College.