OpenAI Takes Cautious Stance on Releasing ChatGPT Detection Tool

A Deliberate Approach to AI Text Detection

OpenAI has developed a tool designed to detect writing generated by its ChatGPT model, which could help identify students who use the AI to complete assignments for them. However, as reported by The Wall Street Journal, the company is still weighing whether to release it.

In a statement to TechCrunch, an OpenAI spokesperson confirmed ongoing research into a text watermarking method mentioned in the Journal’s article. The spokesperson emphasized that OpenAI is taking a “deliberate approach” due to “the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”

“The text watermarking method we’re developing is technically promising,” the spokesperson noted, “but has important risks we’re weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers.”

A Shift in Strategy

This cautious approach marks a departure from previous efforts to detect AI-generated text, many of which have fallen short. OpenAI itself discontinued its earlier AI text detector last year due to its “low rate of accuracy.”

The proposed text watermarking method would apply only to writing generated by ChatGPT, subtly adjusting the model’s word choices to embed an invisible watermark that a separate tool can detect. It would not cover text produced by other companies’ models.
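
To make the idea concrete, the toy Python sketch below illustrates one publicly described style of text watermarking (a “green-list” bias over word choices, keyed to the preceding word) together with a matching statistical detector. This is a simplified illustration of the general technique, not OpenAI’s actual method; the vocabulary, bias strength, and helper names are invented for the example.

```python
import hashlib
import math
import random

# Tiny stand-in vocabulary; a real system operates over a model's full token set.
VOCAB = ["the", "a", "model", "text", "writing", "tool", "word", "choice",
         "subtle", "pattern", "detects", "embeds", "hidden", "signal",
         "student", "essay", "draft", "idea", "sentence", "phrase"]

def green_list(prev_word: str, fraction: float = 0.5) -> set:
    """Deterministically pick a 'green' half of the vocabulary, seeded by a hash
    of the previous word. Generator and detector can both recompute it."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate_watermarked(length: int = 200, bias: float = 0.9) -> list:
    """Toy 'generator': at each step, prefer words from the current green list."""
    rng = random.Random(0)
    words = ["the"]
    for _ in range(length):
        greens = green_list(words[-1])
        pool = list(greens) if rng.random() < bias else VOCAB
        words.append(rng.choice(pool))
    return words

def detection_z_score(words: list, fraction: float = 0.5) -> float:
    """Count how often each word falls in the green list implied by its
    predecessor; a large positive z-score suggests the watermark is present."""
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev, fraction))
    n = len(words) - 1
    expected, var = n * fraction, n * fraction * (1 - fraction)
    return (hits - expected) / math.sqrt(var)

watermarked = generate_watermarked()
unmarked = [random.Random(1).choice(VOCAB) for _ in range(200)]
print("watermarked z:", round(detection_z_score(watermarked), 1))  # large positive
print("unmarked z:   ", round(detection_z_score(unmarked), 1))     # near zero
```

Because the watermark lives in the statistics of word selection rather than in any visible marker, a human reader sees ordinary text while the paired detector sees a strong bias.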

Research and Challenges

Following the Journal’s story, OpenAI updated a May blog post detailing its research on detecting AI-generated content. The update acknowledged that while text watermarking has proven “highly accurate and even effective against localized tampering, such as paraphrasing,” it is “less robust against globalized tampering; like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character.”
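
As a rough illustration of that failure mode, continuing the hypothetical green-list sketch above: once a global rewording step replaces even half of the words, the predecessor-keyed bias disappears and the toy detector falls back to chance. Again, this demonstrates the general weakness, not the behavior of OpenAI’s specific system.

```python
# Continues the sketch above (reuses VOCAB, detection_z_score, and `watermarked`).
# A crude stand-in for "global rewording": replace every other word with a random
# vocabulary word. The predecessor-keyed bias breaks and the z-score collapses.
rng = random.Random(42)
reworded = [w if i % 2 else rng.choice(VOCAB) for i, w in enumerate(watermarked)]
print("reworded z:", round(detection_z_score(reworded), 1))  # near zero
```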

As a result, OpenAI acknowledged that the method would be trivial for bad actors to circumvent. The company also echoed concerns that text watermarking could “stigmatize use of AI as a useful writing tool for non-native English speakers.”

OpenAI’s cautious approach highlights the complexities of balancing innovation with the ethical and practical challenges that come with deploying new AI technologies.
