OpenAI Releases Tool To Detect AI-Written Content

1st Feb 2023

OpenAI, the AI research firm behind ChatGPT, has released a new tool to distinguish between AI-generated and human-generated text.

Even though it’s impossible to detect AI-written text with 100% accuracy, OpenAI believes its new tool can help to mitigate false claims that humans wrote AI-generated content.

In an announcement, OpenAI says its new AI Text Classifier can limit the ability to run automated misinformation campaigns, use AI tools for academic fraud, and impersonate humans with chatbots.

When tested on a set of English texts, the tool correctly identified AI-written text 26% of the time (true positives), while incorrectly labeling human-written text as AI-written 9% of the time (false positives).
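To see what those two rates mean in practice, here is a minimal sketch of the precision they imply. The base rate is an assumption for illustration (the article does not state how much submitted text is AI-written):

```python
# Figures from the article: true positive rate 0.26 (AI text correctly
# flagged) and false positive rate 0.09 (human text wrongly flagged).
tpr = 0.26
fpr = 0.09
base_rate = 0.5  # ASSUMPTION: half of submitted texts are AI-written

# Precision: of the texts flagged as AI-written, how many actually are?
flagged_ai = tpr * base_rate
flagged_human = fpr * (1 - base_rate)
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"precision = {precision:.2f}")  # → precision = 0.74
```

So even with these modest detection rates, a flag is right roughly three times out of four under that assumed mix, though it still misses most AI-written text.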

OpenAI says its tool works better the longer the text is, which could be why it requires a minimum of 1,000 characters to run a test.

Other limitations of the new OpenAI Text Classifier include the following:

  • Can mislabel both AI-generated and human-written text.
  • AI-generated text can evade the classifier with minor edits.
  • Can get things wrong on text written by children and on non-English text, because it was trained primarily on English content written by adults.

With that in mind, let’s look at how it performs.

Using OpenAI’s AI Text Classifier

The AI Text Classifier from OpenAI is simple to use.

Log in, paste the text you want to test, and hit the submit button.

The tool will rate the likelihood that AI generated the text you submitted. Results fall into one of five categories:

  • Very unlikely
  • Unlikely
  • Unclear if it is
  • Possibly
  • Likely
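
These labels presumably correspond to bands of an underlying likelihood score. A minimal sketch of how such a mapping might work (the cutoff values below are assumptions for illustration, not OpenAI's published thresholds):

```python
# Hypothetical sketch: map an AI-likelihood score in [0, 1] to one of
# the five display labels. The band boundaries are assumptions.
def label_for_score(p: float) -> str:
    if p < 0.10:
        return "Very unlikely"
    if p < 0.45:
        return "Unlikely"
    if p < 0.90:
        return "Unclear if it is"
    if p < 0.98:
        return "Possibly"
    return "Likely"

print(label_for_score(0.95))  # → Possibly
```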

I tested it by asking ChatGPT to write an essay about SEO, then submitting the text verbatim to the AI Text Classifier.

It rated the ChatGPT-generated essay as possibly AI-generated, a correct but hedged verdict.

