OpenAI, the company behind the popular generative AI platform ChatGPT, is now facing a class-action lawsuit in California that accuses it of unlawfully using private information to train its AI model.
The lawsuit, filed on June 28, alleges that OpenAI trained ChatGPT using the private data of millions of individuals without obtaining their explicit consent. The plaintiffs claim that the company collected data from various sources, including blog posts, social media comments, and even online recipes. According to the court filing, this data collection without consent puts the plaintiffs and the affected classes at an unacceptable level of risk, violating responsible data protection practices.
The class-action lawsuit is being handled by the Clarkson Law Firm, with five OpenAI-related entities named as defendants. Notably, Microsoft Corporation, an early investor in OpenAI, has also been implicated as a defendant, and the plaintiffs are demanding a jury trial.
The plaintiffs argue that OpenAI’s misuse of data violates several laws, including the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, the California Invasion of Privacy Act, and Illinois’ Biometric Information Privacy Act. The lawsuit further claims that OpenAI’s actions amount to negligence, invasion of privacy, unjust enrichment, failure to warn, and conversion. Beyond collecting data without consent, the plaintiffs assert that OpenAI knowingly designed ChatGPT in a way that is inappropriate for children and deceptively tracked children’s activities without their consent.
This lawsuit comes at a time when AI regulations are being intensely debated worldwide. Governments are striving to establish new regulations to ensure the responsible and safe use of AI technology. The European Union (EU) is finalizing its AI Act, while other regions are conducting public consultations to determine appropriate regulatory approaches.
Amid this regulatory debate, consumer groups are calling on governments to accelerate the implementation of AI rules, citing the significant risks the technology poses in domains such as healthcare, finance, mass media, and Web3. Some advocates are even urging a temporary moratorium on AI development until regulators establish robust safeguards.
As the legal battle unfolds, it highlights the growing importance of safeguarding personal data, obtaining proper consent, and striking the right balance between innovation and privacy protection in the rapidly evolving field of AI.