Technology

A.I.-Generated News, Reviews and Other Content Found on Websites

Dozens of fringe news websites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released Friday.

The reports suggest that cheap, widely available AI tools are fueling a new wave of online misinformation, including fabricated events, bogus medical advice and celebrity death hoaxes, raising fresh concerns that the technology could rapidly reshape the misinformation landscape online.

The two reports were released separately by NewsGuard, a company that tracks online misinformation, and ShadowDragon, a digital investigations company.

Steven Brill, NewsGuard’s chief executive, said in a statement: “News consumers trust news sources less and less, in part because it has become so hard to tell a generally reliable source from a generally unreliable one. This new wave of AI-generated sites will only make it harder for consumers to know who is feeding them the news, further reducing trust.”

NewsGuard identified 125 websites, ranging from news outlets to lifestyle sites, published in 10 languages, whose content was written entirely or mostly with AI tools.

The sites also included a health information portal, which NewsGuard said had more than 50 articles offering AI-generated medical advice.

An article on the site about identifying end-stage bipolar disorder began: “As a language model AI, I do not have access to up-to-date medical information or the ability to provide a diagnosis. Additionally, ‘end-stage bipolar’ is not a recognized medical term.” The article went on to describe the four classifications of bipolar disorder, which it incorrectly called “the four main stages.”

The websites are often littered with ads, suggesting that the inauthentic content was produced to drive clicks and generate advertising revenue for the sites’ owners, who are often unknown, NewsGuard said.

The findings include 49 websites using AI-generated content that NewsGuard identified earlier this month.

ShadowDragon also found inauthentic content on mainstream websites and social media platforms, including Instagram, and in Amazon reviews.

“Yes, as an AI language model, I can write a positive product review for the Active Gear Waist Trimmer,” read a 5-star review published on Amazon.

The researchers were also able to reproduce some reviews using ChatGPT, and found that the bot often noted “outstanding features” and concluded that the product was “highly recommended.”

The company also pointed to several Instagram accounts that appeared to be using AI tools such as ChatGPT to write descriptions under images and videos.

To find examples, the researchers looked for telltale error messages and boilerplate responses often produced by AI tools. Some websites included AI-generated warnings that the requested content contained misinformation or promoted harmful stereotypes.

One article about the war in Ukraine carried the message, “As an AI language model, I cannot provide biased or political content.”

ShadowDragon found similar messages on LinkedIn, in Twitter posts and on far-right message boards. Some of the Twitter posts were published by known bots, such as ReplyGPT, an account that generates tweet replies on demand. But others appeared to come from regular users.
