Breaking News

US firm releases fake-news writing software, scientists unveil detector


OpenAI, the artificial intelligence firm that Elon Musk co-founded and later departed, just released a stronger version of its “conversational” text-writing AI system, according to Futurism.

Futurism also reported that a team of scientists from the MIT-IBM Watson AI Lab and Harvard University built an algorithm called GLTR that determines how likely it is that any particular passage of text was written by a tool like the OpenAI software.

When OpenAI first released the algorithm, dubbed GPT-2, back in February, the company declared that it was too dangerous to release to the public, instead opting to share a watered-down version. Now, OpenAI announced that it’s sharing a new version that’s six times as robust as the original — while keeping an eye out to make sure people don’t misuse it.

OpenAI expressed concerns that its AI could be used to flood the internet with fake news and propaganda. The first model had notable flaws and telltale signs that its output was machine-written. The new software is at least more coherent, though AI expert Janelle Shane, who tested it on Twitter, pointed out that it is still prone to writing nonsense.

According to OpenAI’s announcement, there’s a yet-more-powerful version of GPT-2 sitting behind locked doors. The company says that it plans to release the model within a few months, but that it may not if it determines that people are using the new and improved GPT-2 maliciously.


When OpenAI unveiled GPT-2, it showed how the system could be used to write fictitious-yet-convincing news articles by sharing one that the algorithm had written about scientists who discovered unicorns.

GLTR uses the very same model to read the final output and predict whether it was written by a human or by GPT-2. Just as GPT-2 writes sentences by predicting which word ought to come next, GLTR checks whether a sentence uses the words that the fake-news-writing bot would have selected.

“We make the assumption that computer-generated text fools humans by sticking to the most likely words at each position,” the scientists behind GLTR wrote in their blog post.


“In contrast, natural writing actually more frequently selects unpredictable words that make sense to the domain. That means that we can detect whether a text actually looks too predictable to be from a human writer!”

The IBM, MIT, and Harvard scientists behind the project built a website that lets people test GLTR for themselves. The tool highlights each word in a colour based on how likely an algorithm like GPT-2 would have been to choose it: green means the word is among the model’s top predictions, while shades of yellow, red, and especially purple mark increasingly surprising word choices, suggesting that a human probably wrote them.
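The rank-based highlighting described above can be sketched in a few lines. This is a toy illustration, not GLTR's actual code: `toy_predict` and its five-word vocabulary are hypothetical stand-ins for GPT-2's real next-word predictions, though the rank thresholds mirror GLTR's green/yellow/red/purple bands.

```python
# Toy sketch of GLTR-style detection: score each word by its rank in
# the model's ranked predictions for that position, then bucket the
# rank into a colour band. Low ranks (very predictable words) hint at
# machine-written text; high ranks hint at a human author.

VOCAB = ["the", "cat", "sat", "on", "mat"]  # hypothetical toy vocabulary

def toy_predict(context):
    """Stand-in for a language model: return candidate next words,
    best guess first. (The real GLTR queries GPT-2 itself.)"""
    return VOCAB

def rank_words(words, predict_ranked, unknown_rank=10_000):
    results = []
    for i, word in enumerate(words):
        ranked = predict_ranked(words[:i])  # model's guesses so far
        rank = ranked.index(word) if word in ranked else unknown_rank
        if rank < 10:
            colour = "green"    # among top predictions: machine-like
        elif rank < 100:
            colour = "yellow"
        elif rank < 1000:
            colour = "red"
        else:
            colour = "purple"   # very surprising: human-like
        results.append((word, rank, colour))
    return results

print(rank_words(["the", "zebra"], toy_predict))
# "the" is the toy model's top guess (green); "zebra" is outside its
# vocabulary entirely, so it lands in the purple, human-like band.
```

With a real language model in place of `toy_predict`, each word's rank comes from sorting the model's next-token probabilities, which is essentially what the GLTR website visualises.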

All rights reserved. This material and any other digital content on this platform may not be reproduced, published, broadcast, written or distributed in full or in part, without written permission from VANGUARD NEWS.

