OpenAI releases tool to detect AI-written text

OpenAI has launched an AI text classifier that attempts to detect whether input content was generated using artificial intelligence tools like ChatGPT.

“The AI Text Classifier is a fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources, such as ChatGPT,” explains a new OpenAI blog post.

OpenAI released the tool today after numerous universities and K-12 school districts banned the company’s popular ChatGPT AI chatbot due to its ability to complete students’ homework, such as writing book reports and essays, and even finishing programming assignments.

According to Business Insider, ChatGPT is banned in NYC, Seattle, Los Angeles, and Baltimore K-12 public school districts, with universities in France and India also banning the platform from school computers.

BleepingComputer tested OpenAI’s new AI text classifier and, for the most part, found it to be fairly inconclusive.

When testing OpenAI’s AI text classifier against most of our own content, it correctly determined that a human wrote our articles.

OpenAI’s AI text classifier response for BleepingComputer content

However, when analyzing content generated by ChatGPT and You.com’s AI chatbot, it had plenty of difficulty detecting whether the text was AI-generated.

As educators will likely use the new AI Text Classifier to check whether students cheated on their homework assignments, OpenAI warns that it should not be used as the “sole piece of evidence” for determining academic dishonesty.

“Our classifier is not fully reliable,” warns OpenAI.

“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”

“Our classifier’s reliability typically improves as the length of the input text increases.”
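
To make those figures concrete, here is a minimal Python sketch showing how true-positive and false-positive rates like OpenAI's 26% and 9% are calculated from a detector's verdicts. The sample counts and flags below are hypothetical, chosen only so the arithmetic reproduces those percentages; they are not OpenAI's evaluation data.

    # Minimal sketch: computing true-positive and false-positive rates
    # for an AI-text detector. All data below is made up for illustration.

    def detection_rates(samples):
        """samples: list of (actually_ai, flagged_as_ai) boolean pairs."""
        ai_texts = [s for s in samples if s[0]]
        human_texts = [s for s in samples if not s[0]]

        # True positive rate: AI-written texts the classifier flags as AI-written.
        tpr = sum(1 for _, flagged in ai_texts if flagged) / len(ai_texts)
        # False positive rate: human-written texts wrongly flagged as AI-written.
        fpr = sum(1 for _, flagged in human_texts if flagged) / len(human_texts)
        return tpr, fpr

    # Hypothetical evaluation set: 50 AI-written texts (13 flagged)
    # and 100 human-written texts (9 flagged).
    samples = [(True, i < 13) for i in range(50)] + [(False, i < 9) for i in range(100)]
    tpr, fpr = detection_rates(samples)
    print(f"True positives: {tpr:.0%}, false positives: {fpr:.0%}")
    # -> True positives: 26%, false positives: 9%

In other words, the classifier misses roughly three out of four AI-written passages while still flagging about one in eleven human-written passages as AI-generated, which is why OpenAI cautions against treating its verdict as proof on its own.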

The classifier’s accuracy will likely improve over time as it is trained on additional data. For now, though, it is not a reliable tool for detecting AI-generated content.
