The tech giants Google, Meta, Microsoft, and OpenAI are facing high-stakes lawsuits over their practices of “scraping” data from public databases to train their large language models. This controversial practice has pushed artificial intelligence (AI) data gathering into a legal gray area. Privacy and intellectual property experts anticipate that these lawsuits will pave the way for defining boundaries and regulations for AI usage.
Recently, individuals including artists, authors, and entertainers have taken legal action against AI companies, alleging violations of privacy and intellectual property rights. Notably, comedian Sarah Silverman sued OpenAI after discovering that the company’s ChatGPT chatbot produced a detailed synopsis of her memoir without her consent.
Unlike past cases involving larger companies protecting user data from smaller ones, these new lawsuits put users and their rights directly in the spotlight. Consequently, courts face novel challenges in determining the outcomes.
High-profile cases such as Silverman’s lawsuit against OpenAI have shone a public light on the issue of copyrighted works being consumed by large language models during scraping. Experts are divided over whether AI tools like ChatGPT infringe on copyright by “reading” pirated content or by drawing on large volumes of customer reviews to generate content.
Lance Eliot, an AI expert, suggests that AI companies may have better odds defending text-to-text AI tools against copyright infringement claims than audio and visual forms of AI. This is because copyright law protects the specific expression of a work rather than its underlying ideas, a distinction that becomes murkier with multimodal AI that turns text prompts into images or audio.
Privacy advocates like Wayne Chang express concern over the potential privacy impact of AI and large language models. He warns that AI-driven personalized spam and data manipulation could prove even more intrusive than the Cambridge Analytica scandal. This could fuel heightened litigation over privacy rights, as seen in Illinois, which has become a hotspot for lawsuits under the Biometric Information Privacy Act.
As these AI lawsuits unfold, companies must pay close attention to their potential legal liabilities, even if they are not currently using AI technology. Introducing AI into workflows without proper awareness of evolving legal landscapes may lead to inadvertent violations of new statutes.
Moving forward, these landmark cases will be instrumental in shaping the legal landscape surrounding AI usage, emphasizing the need for balance between innovation and respecting privacy and intellectual property rights.