Sen. Elizabeth Warren (D-MA) and Rep. Lori Trahan (D-MA) are demanding answers about how OpenAI handles whistleblowers and safety reviews after former employees alleged that internal criticism is frequently suppressed.
“Given the disparity between your public comments and reports of OpenAI’s actions, we request information about OpenAI’s whistleblower and conflict of interest protections in order to determine whether federal intervention is required,” Warren and Trahan wrote in a letter shared exclusively with The Verge.
The lawmakers pointed to several instances in which OpenAI’s safety procedures have been called into question. For example, they noted that in 2022, an unreleased version of GPT-4 was tested in a new version of Microsoft’s Bing search engine in India before it had been approved by OpenAI’s safety board. They also recalled OpenAI CEO Sam Altman’s brief removal from the company in 2023 over the board’s concerns, which included “overcommercializing advances before understanding the consequences.”
Warren and Trahan’s letter to Altman arrives as the company faces a laundry list of safety concerns, many of which contradict its public statements. For example, an unnamed source told The Washington Post that OpenAI rushed through safety tests, the Superalignment team (which was partly responsible for safety) was disbanded, and a safety executive quit, saying that “safety culture and processes have taken a backseat to shiny products.” Lindsey Held, an OpenAI spokesperson, denied the claims in The Washington Post’s report, saying the company “didn’t cut corners on our safety process, though we recognize the launch was stressful for our teams.”
Other politicians have also inquired into the company’s safety practices, including a group of senators led by Brian Schatz (D-HI) in July. Warren and Trahan asked for further details on OpenAI’s responses to that group, including its creation of a new “Integrity Line” for employees to raise concerns.
Meanwhile, OpenAI appears to be going on the offensive. In July, the company announced a partnership with Los Alamos National Laboratory to explore how advanced AI models can safely assist in bioscientific research. Altman also revealed via X this week that OpenAI is working with the US Artificial Intelligence Safety Institute and emphasized that 20 percent of the company’s computing resources will be dedicated to safety (a promise originally made to the now-defunct Superalignment team). In the same post, Altman said that OpenAI had removed nondisparagement clauses for employees, along with provisions allowing the cancellation of vested equity, a key issue in Warren and Trahan’s letter.
The letter reflects a policy priority for the lawmakers, who have previously introduced bills to strengthen whistleblower protections, such as the FTC Whistleblower Act and the SEC Whistleblower Reform Act. It may also serve as a signal to law enforcement agencies, which have reportedly scrutinized OpenAI over potential antitrust violations and harmful data practices.
Warren and Trahan asked Altman to explain how OpenAI’s new AI safety hotline for employees is being used and how the company follows up on reports. They also requested “a detailed accounting” of every instance in which an OpenAI product has “bypassed safety protocols,” as well as the circumstances under which a product would be allowed to skip a safety review. The lawmakers are also seeking details on OpenAI’s conflict of interest policy. They asked Altman whether he has been required to divest from any outside holdings and “what specific protections are in place to protect OpenAI from your financial conflicts of interest.” They asked him to respond by August 22nd.
Warren and Trahan also cite Altman’s own outspokenness about the risks of artificial intelligence. Last year, Altman warned the Senate that AI’s capabilities could be “significantly destabilizing for public safety and national security” and stressed that it is impossible to foresee every potential abuse or failure of the technology. Those warnings appear to have resonated with lawmakers; in OpenAI’s home state of California, state Sen. Scott Wiener is pushing legislation to regulate large language models, including provisions that would hold companies legally liable if their AI is used in harmful ways.
The original article is available at: https://www.theverge.com/2024/8/8/24216094/openai-sam-altman-warren-trahan-whistleblowers-safety-reviews.