Grok Spreads Misinformation About Kamala Harris, Secretaries of State Urge Action from Musk

Grok’s False Claims and the Response

Grok, an AI-powered chatbot on X (formerly known as Twitter), has recently come under fire for spreading false information about Vice President Kamala Harris’s eligibility for the 2024 U.S. presidential ballot. An open letter, penned by five secretaries of state and addressed to Elon Musk—CEO of Tesla, SpaceX, and X—claims that Grok incorrectly suggested that Harris was ineligible to appear on the ballot in several states.

The letter, led by Minnesota Secretary of State Steve Simon and co-signed by his counterparts from Pennsylvania, Washington, Michigan, and New Mexico, calls on Musk to “immediately implement changes to X’s AI search assistant, Grok, to ensure voters have accurate information in this critical election year.”

The Spread of Misinformation

The controversy began on July 21, shortly after President Joe Biden announced that he would suspend his presidential bid. Grok started responding to questions about Harris’s eligibility with the misleading claim that ballot deadlines had passed in nine states, including Michigan, Minnesota, and Pennsylvania. In reality, no such deadlines had passed.

The false information quickly spread, reaching millions of users on X before the error was corrected on July 31. “While Grok is only available to X Premium and Premium+ subscribers and includes a disclaimer asking users to verify information, the false information about ballot deadlines has been captured and shared repeatedly in multiple posts,” the secretaries of state highlighted in their letter.

Musk’s Role and the Need for Accountability

Elon Musk has faced significant criticism for how X handles the moderation of political content, and Grok’s recent errors have only intensified the scrutiny. X reportedly employs fewer moderation staff than comparable platforms, following Musk’s reduction of the company’s trust and safety team by an estimated 80%.

Earlier this year, X promised to establish a new trust and safety center in Austin, Texas, but reportedly hired far fewer moderators than initially planned. This reduction in oversight has raised concerns about the platform’s ability to manage misinformation, particularly in the politically charged lead-up to the 2024 election.

Musk’s own actions have not helped his case. Last Friday, he reshared a video that used AI to clone Harris’s voice, making her appear to admit to being a “diversity hire” and to claim she “doesn’t know the first thing about running the country.” Musk’s post, which appeared to violate his own platform’s guidelines, only added fuel to the fire.

The Broader Implications

The spread of misinformation on platforms like X highlights the urgent need for robust content moderation, especially as AI tools become more prevalent. As the 2024 election approaches, the actions taken—or not taken—by social media companies like X could have significant consequences for public discourse and voter trust.

The secretaries of state’s letter underscores the importance of ensuring that voters receive accurate information, calling on Musk to take immediate corrective action. Whether Musk will heed their call remains to be seen, but the pressure is mounting for X to address the flaws in its AI-powered systems before they can do further harm.

AI was used to generate part or all of this content.