In a world where the lines between fact and fiction are increasingly blurred, the rise of deepfake technology presents a new set of challenges for lawmakers. Deepfakes, powered by generative artificial intelligence (AI), allow the creation of seemingly authentic textual, visual, and audio content, leading to the potential manipulation of reality and a loss of confidence in objective evidence.
Recently, a bill named S-3926 was introduced in the Legislature by Democratic Sen. Brian Stack and Republican Sen. Douglas Steinhardt, with Republican Sen. Kristin Corrado as co-sponsor. The bill seeks to address the problems deepfakes pose by extending the crime of identity theft to include fraudulent impersonation carried out through AI or “deepfake” technology. Although it enjoys bipartisan support and has been referred to the Senate Judiciary Committee, it raises significant First Amendment concerns.
One of the primary challenges lies in crafting a legally sustainable definition of the forbidden speech that can be excluded from First Amendment protection. The bill’s operative text attempts to define “False personation records,” but in doing so it runs into inherent difficulties. Categorizing deepfakes as “reasonable” or “unreasonable” requires subjective judgments that invite bias into decision-making and risk infringing on the principles of free expression.
The bill’s use of the “reasonable person” standard as a measure of truth raises further questions, because cultural and social backgrounds shape how individuals perceive content. What one viewer sees as parody another may take as a genuine depiction, depending on that viewer’s background and biases. Aggregating and mediating these diverse viewpoints into a single “reasonable person” standard is an impractical and inherently subjective task.
Moreover, evaluating whether “societal harm” is “substantially likely” to result from deepfake alterations in public policy debates or elections is itself riddled with subjectivity. Deepfakes are designed to influence political discourse and opinion, so the line between societal harm and legitimate political debate becomes a matter of personal opinion rather than objective fact.
A recent episode involving presidential candidate and Florida Gov. Ron DeSantis illustrates the difficulty of applying such a standard. A campaign video that mixed genuine and artificially generated images sparked debate over its creators’ intent. While the video conveyed a political opinion protected by the First Amendment, the inclusion of deepfakes could potentially fall within the scope of the bill, exposing its makers to felony charges based on subjective interpretations.
While existing laws can already address intentional fraud or defamation, attempting to criminalize AI-enabled deception in political discourse raises significant constitutional concerns. It is essential to strike a balance between protecting the public from malicious deepfakes and upholding First Amendment rights and the free exchange of ideas.
The implications of S-3926 and similar legislation should be carefully considered to ensure that any regulations do not inadvertently curtail freedom of expression or stifle political discourse. Deepfakes may pose real challenges, but the solution must be found within the boundaries of constitutional principles and the right to free speech.