ChatGPT maker OpenAI comes up with a way to check if text was written by a human

Sam Altman, CEO of OpenAI, walks from lunch during the Allen & Company Sun Valley Conference on July 6, 2022, in Sun Valley, Idaho.

Kevin Dietsch | Getty Images News | Getty Images

Artificial intelligence research startup OpenAI on Tuesday introduced a tool that’s designed to figure out if text is human-generated or written by a computer.

The release comes two months after OpenAI captured the public’s attention with ChatGPT, a chatbot that responds to a user’s prompt with text that can read as though a person wrote it. Following the wave of attention, Microsoft last week announced a multibillion-dollar investment in OpenAI and said it would incorporate the startup’s AI models into its products for consumers and businesses.

Schools were quick to limit ChatGPT’s use over concerns the software could hurt learning. Sam Altman, OpenAI’s CEO, has said education adapted in the past when technologies such as calculators emerged, but he also said there could be ways for the company to help teachers spot text written by AI.

OpenAI’s new tool can make mistakes and is a work in progress, company employees Jan Hendrik Kirchner, Lama Ahmad, Scott Aaronson and Jan Leike wrote in a blog post, noting that OpenAI would like feedback on the classifier from parents and teachers.

“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives),” the OpenAI employees wrote.

This isn’t the first effort to figure out whether text came from a machine. Princeton University student Edward Tian earlier this month announced a tool called GPTZero, noting on the tool’s website that it was made for educators. OpenAI itself issued a detector in 2019 alongside a large language model, or LLM, that’s less sophisticated than the one at the core of ChatGPT. The new version is better prepared to handle text from recent AI systems, the employees wrote.

The new tool is unreliable on inputs of fewer than 1,000 characters, and OpenAI doesn’t recommend using it on languages other than English. In addition, AI-generated text can be edited slightly to keep the classifier from correctly identifying it, the employees wrote.

Even back in 2019, OpenAI made clear that identifying synthetic text is no easy task. It intends to keep pursuing the challenge.

“Our work on the detection of AI-generated text will continue, and we hope to share improved methods in the future,” Hendrik Kirchner, Ahmad, Aaronson and Leike wrote.
