Trump Posts AI-Generated Taylor Swift Endorsement: Legal and Ethical Implications

AI and Political Manipulation

Over the weekend, former President Donald Trump posted a series of AI-generated images on social media, including a false endorsement from pop star Taylor Swift, as part of his campaign for the upcoming presidential election. The posts highlight growing concern over the use of generative AI in political campaigns and the difficulty of regulating AI-generated disinformation.

The AI-Generated Content

One of the images Trump shared appears to show Vice President Kamala Harris addressing a crowd in Chicago, where the Democratic National Convention is being held. The image, which features a communist hammer and sickle in the background, is likely AI-generated. Another post features a series of images, including what seems to be an AI-created image of Taylor Swift dressed as Uncle Sam, accompanied by the caption, “Taylor wants you to vote for Donald Trump.” Trump’s caption for the post was simple: “I accept!”

Legal and Ethical Concerns

According to Robert Weissman, co-president of Public Citizen, Trump’s posts may not violate the growing list of state laws designed to combat election deepfakes. These laws, enacted in about 20 states, generally target AI-generated images or videos that convincingly depict someone doing or saying something they never did. “It actually has to be plausible,” Weissman explains, suggesting that Trump’s exaggerated images might not meet this threshold.

On the federal level, the regulation of AI-generated content in elections is limited. “There are no federal restrictions on the use of deepfakes,” Weissman notes, with the exception of the Federal Communications Commission’s ban on AI-generated voices in robocalls. Public Citizen has been pushing the Federal Election Commission to impose stricter rules on the use of AI by candidates, but these would likely not cover the more exaggerated or satirical content, such as the Swift image.

Potential Legal Action by Taylor Swift

While Trump’s posts may skirt election laws, Taylor Swift might have a legal case concerning the use of her likeness without permission. Weissman suggests that Swift could potentially pursue a claim under California’s Right of Publicity law, which protects individuals from unauthorized commercial use of their image or likeness. Universal Music Group, which represents Swift, has yet to comment on the situation, and the Trump campaign has not responded to requests for comment.

The Role of Social Media Platforms

The responsibility to regulate misleading AI-generated content may fall to private platforms rather than the government. For instance, X (formerly Twitter) has a synthetic and manipulated media policy that prohibits posts that could deceive or confuse users and lead to harm. However, enforcement of this policy has been inconsistent. Even Elon Musk, the owner of X, has shared content that could violate this policy, such as a deepfake video of Kamala Harris that was not clearly labeled as parody. Meanwhile, Trump’s platform, Truth Social, has minimal guidelines in place to regulate such content.

The Broader Impact on Democracy

Weissman warns that the spread of AI-generated disinformation poses a significant threat to democratic processes. “It’s convenient for Trump, who was going around calling everything fake before AI, and wants us to call true things fake — like Harris’ crowds — to spread AI garbage to undermine the very idea of authenticity, and even reality in some ways,” he says. The ability to distort reality through AI could erode public trust and make it increasingly difficult for people to believe what they see and hear, which is fundamental to a functioning democracy.