As the Paris Summer Olympics kick off, one of the most contentious topics is the rollout of advanced AI surveillance technology. The system uses machine learning to analyze video footage in real time, aiming to detect and predict threats. The city's video cameras will monitor millions of visitors, flagging weapons, unusual movements, and other potential dangers. Security personnel will then decide whether to notify authorities, including local and national police.
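Neither France nor its vendors have published the system's design, but the general shape of such a pipeline is well understood: software scores each frame for anomalies, and a human operator decides what to escalate. The sketch below is a minimal illustration in Python, assuming OpenCV background subtraction as a stand-in for the proprietary analytics; the `notify_operator` hook is hypothetical and stands in for whatever escalation workflow the security teams actually use.

```python
# Illustrative sketch only; the deployed French system is proprietary.
import cv2

ALERT_THRESHOLD = 0.15  # hypothetical cutoff: fraction of the frame in motion

def notify_operator(frame, score):
    # Stand-in for the human escalation step described above.
    print(f"Possible crowd anomaly (motion score {score:.2f}); operator review needed.")

def monitor(stream_url):
    cap = cv2.VideoCapture(stream_url)                # e.g. an RTSP camera feed
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                # foreground = moving pixels
        motion = (mask > 0).mean()                    # share of the frame in motion
        if motion > ALERT_THRESHOLD:
            notify_operator(frame, motion)            # software flags; a human decides
    cap.release()
```

Even in this toy version, the design choice at the heart of the French law is visible: the software never acts on its own, it only surfaces frames for a person to judge.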
The Controversial AI Surveillance Technology
French lawmakers describe this AI tool as a crucial measure to protect the multi-week event from violence. However, privacy advocates, both in Europe and the US, are raising alarms. “The things that these tools are supposed to achieve are something like those pre-cognitive efforts from that Tom Cruise movie some years ago,” said Ari Ezra Waldman, a law professor at the University of California, Irvine, referencing the 2002 sci-fi film Minority Report.
Privacy advocates argue that the technology threatens civil liberties through potential bias, false positives, and the collection of biometric data. They worry these issues will persist if the technology is adopted at major US events like the 2026 FIFA World Cup, the 2028 Summer Olympics in Los Angeles, and the 2034 Winter Olympics in Salt Lake City.
Balancing Security and Privacy
Waldman acknowledges the heightened security concerns at large-scale events but believes there are better, less invasive tools available. He points out that stadiums already manage crowds effectively with metal detectors, pat-downs, and bag checks. The push for AI surveillance, according to him, is driven by the belief that new technology is inherently better.
The French law permitting this AI surveillance is temporary, set to expire next spring after the 2024 Paralympics. Still, privacy advocates worry that high-profile use of AI-enhanced security could increase demand for mass surveillance beyond the event venues. “They always use these as Trojan horses to try and implement more widespread use of the technologies,” said Ella Jakubowska, head of policy at European Digital Rights.
Concerns Over Privacy and Efficacy
Although the French legislation states that the systems won’t process biometric information, privacy groups are skeptical. European Digital Rights and nearly three dozen other organizations argue that the systems, even without facial recognition, still uniquely identify individuals, triggering protections under the EU’s General Data Protection Regulation.
Laura Lazaro Cabrera from the Center for Democracy & Technology explains that detecting the targeted incidents necessarily involves biometric data, since the systems analyze individuals' physiological features and behaviors. Yet surveillance tools are often deployed to address emotionally charged problems despite incomplete evidence that they work. Leila Nashashibi from Fight for the Future cites a study showing that the predictive-policing software Geolitica has a success rate as low as 0.1% for some crimes, and another study found that automated facial recognition tools disproportionately lead to the arrests of Black people because of biased training data.
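The privacy groups' argument is concrete: any system that classifies behavior must first measure bodies. As a rough illustration, the snippet below uses the open-source MediaPipe library (an assumption for the sake of the example; the deployed products are proprietary) to extract the body keypoints such analysis rests on, the kind of physiological data that, advocates argue, can single a person out even with no face matching involved.

```python
# Sketch of why "no facial recognition" is not the same as "no biometric data".
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()  # single-person body-keypoint model

def body_signature(frame):
    """Extract 33 body keypoints, enough to characterize posture and gait,
    which privacy groups argue can uniquely identify someone even when
    the face is ignored."""
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    return [(lm.x, lm.y, lm.visibility) for lm in results.pose_landmarks.landmark]
```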
Christoph Lütge, director of the Institute for Ethics in AI at the Technical University of Munich, notes that the data used to train these algorithms must reflect diverse cultural contexts to be effective: jaywalking, for example, is not an issue in Germany but might be elsewhere. Catherine Crump, director of UC Berkeley's Samuelson Law, Technology & Public Policy Clinic, advocates third-party efficacy assessments of these technologies to ensure transparency and reliability.
US Context and Future Implications
In the US, there is no direct equivalent to the EU's comprehensive privacy laws; proposals exist in Congress, but AI regulation remains a patchwork. The Biden administration's October 2023 AI executive order tasks federal agencies with assessing AI technology for health, safety, and security risks. "Whereas in some other countries, the default is that you can't use technology like video analytics unless you get permission, here the default is that you can use it unless it's specifically prohibited," said Crump.
Scylla, a security-system supplier, markets AI video analytics to sports venues, including Major League Baseball's Chicago Cubs. Its technology detects anomalies such as slips, falls, and weapons. Kris Greiner, Scylla's VP of sales, emphasized that the system does not log, store, or collect personal information unless the client activates facial recognition, and he said the AI does not target specific ethnicities, genders, or races.
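Scylla has not published its models, so the sketch below is only a plausible shape for this kind of weapon detection, using the off-the-shelf Ultralytics YOLOv8 detector as an assumed stand-in; "knife" is the only weapon-like class in its stock COCO training set, and a real product would presumably use a custom-trained model.

```python
# Plausible sketch, not Scylla's actual pipeline, which is proprietary.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # stock COCO model; a vendor would train its own classes

def scan_frame(frame):
    alerts = []
    for box in model(frame)[0].boxes:
        label = model.names[int(box.cls)]
        if label == "knife":                     # closest weapon-like COCO class
            alerts.append(box.xyxy[0].tolist())  # box coordinates only, no identity data
    return alerts
```

Note that a detector of this shape returns only object labels and coordinates, which is consistent with Greiner's claim that no personal information is logged unless facial recognition is switched on.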
California’s Robust Privacy Protections
California, which will host the 2028 Summer Olympics and several 2026 World Cup matches, leads the US in consumer privacy and technology regulation. Nicole Ozer from the ACLU of Northern California highlights the state's constitutional right to privacy, and California's privacy laws, including the California Consumer Privacy Act, grant residents significant control over their personal information.
This year, California legislators proposed numerous AI-related bills. Recently, a legislative panel approved a measure to ban discrimination by AI tools. Some cities, like San Francisco, have even banned city agencies’ use of facial recognition.
Lasting Impact of Surveillance
Andrew Guthrie Ferguson, a law professor at American University, warns that surveillance technologies introduced for temporary events can have long-lasting impacts. “We build all these things and then they stay. We don’t tear them down because it seems sort of wasteful,” he said. This could erode privacy rights long after the events conclude.
As AI surveillance becomes more prevalent, the debate over balancing security with privacy intensifies. The outcomes of these high-profile implementations will likely shape the future of AI technology and its role in public safety and privacy.