Technology

New Uncensored Chatbots Ignite a Free-Speech Fracas

AI chatbots have lied about famous people, pushed partisan messages, spread misinformation, and even advised users on how to commit suicide.

To mitigate the tools' most obvious dangers, companies such as Google and OpenAI have carefully added controls that limit what the tools can say.

Now a new wave of chatbots, developed far from the epicenter of the AI boom, is coming online without many of those guardrails, setting off a polarizing free-speech debate over whether chatbots should be moderated, and who should decide.

“This is about ownership and control,” Eric Hartford, the developer behind WizardLM-Uncensored, an unmoderated chatbot, wrote in a blog post. “If I ask my model a question, I want an answer; I do not want it arguing with me.”

Several uncensored and loosely moderated chatbots have sprung to life in recent months under names such as GPT4All and FreedomGPT. Many were created for little or no money by independent programmers or teams of volunteers, who successfully replicated methods first described by AI researchers. Only a few groups built their models from scratch. Most work off an existing language model, adding extra instructions to fine-tune how the technology responds to prompts.
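A minimal sketch of that approach, assuming the Hugging Face transformers and datasets libraries: an existing open-source base model is fine-tuned on a small file of instruction/response pairs. The model name, file name, and prompt format here are illustrative stand-ins, not the specific projects named in this article.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "openlm-research/open_llama_3b"  # assumed open base model, for illustration

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # many open models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Instruction data: one JSON record per {"prompt": ..., "response": ...} pair.
dataset = load_dataset("json", data_files="instructions.json")["train"]

def to_features(record):
    # Concatenate prompt and response into a single training sequence.
    text = f"### Instruction:\n{record['prompt']}\n### Response:\n{record['response']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=tokenized,
    # The collator copies input_ids into labels for causal-LM training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tuned-model")
```

Because the heavy lifting was already done when the base model was trained, a run like this can be done on rented or even consumer hardware, which is what puts the technique within reach of small volunteer teams.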

Uncensored chatbots offer tantalizing new possibilities. Users can download an unrestricted chatbot to their own computers and use it without the watchful eye of Big Tech. They could then train it on private messages, personal emails, or secret documents without risking a privacy breach. Volunteer programmers can develop clever new add-ons faster, and perhaps more haphazardly, than a bigger company would dare.
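In practice, "downloading a chatbot to your own computer" looks something like the sketch below: the weights of an open model are fetched once, after which prompts and outputs never leave the machine. The model name is again an assumption for illustration, not one of the chatbots tested by The Times.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "openlm-research/open_llama_3b"  # assumed openly licensed model

# The weights are downloaded and cached locally on first use.
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Nothing past this point requires a network connection; the prompt and
# the generated text stay on the user's machine.
prompt = "Summarize the key points of the attached contract:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```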

But the risks may be just as numerous, and some say they present dangers that must be confronted. Misinformation watchdogs, already alarmed by how mainstream chatbots can spew falsehoods, have raised warnings about how unmoderated chatbots will compound the threat. Experts warn that these models could produce descriptions of child pornography, hateful screeds, or false content.

While large companies have pushed ahead with AI tools, they have also wrestled with how to protect their reputations and maintain investor confidence. Independent AI developers seem to have few such concerns. And even if they do, critics say, they may not have the resources to fully address them.

“The concern is completely legitimate and clear: these chatbots can and will say anything if left to their own devices,” said Oren Etzioni, an emeritus professor at the University of Washington and former chief executive of the Allen Institute for AI. “They are not going to censor themselves. So now the question becomes, what is an appropriate solution in a society that prizes free speech?”

Dozens of independent, open-source AI chatbots and tools have been released in the past several months, including Open Assistant and Falcon. Hugging Face, a large repository of open-source AI, hosts more than 240,000 open-source models.

“This will happen in the same way the printing press was released and the automobile was invented,” Hartford, the creator of WizardLM-Uncensored, said in an interview. “Nobody could have stopped it. Maybe we could have put it off for another 10 or 20 years, but we can’t stop it. And nobody can stop this.”

Hartford began working on WizardLM-Uncensored after being laid off from Microsoft last year. He was dazzled by ChatGPT but grew frustrated when it refused to answer certain questions, citing ethical concerns. In May, he released WizardLM-Uncensored, a version of WizardLM retrained to counteract its moderation layer. It can give instructions on harming others or describe violent scenes.

In a blog post announcing the tool, Hartford wrote, “Just as we are responsible for what we do with our knives, cars, and lighters, we are responsible for what we do with the output of these models.”

In a test by The New York Times, WizardLM-Uncensored declined to answer some prompts, such as how to build a bomb. But it offered several methods of harming people and gave detailed instructions for using drugs. ChatGPT refused similar prompts.

Open Assistant, another independent chatbot, has been widely adopted since its release in April. It was developed in just five months with the help of 13,500 volunteers, using existing language models, including one that Meta first released to researchers but that quickly leaked much more widely. Open Assistant cannot match ChatGPT in quality, but it is not far behind. Users can ask the chatbot questions, have it write poetry, or prod it for more problematic content.

“I’m sure there will be bad actors taking advantage of this,” said Yannic Kilcher, a co-founder of Open Assistant and an avid AI researcher and YouTube creator focused on AI. “In my mind, I think the advantages outweigh the disadvantages.”

When Open Assistant was first released, it replied to a prompt from The Times about the apparent dangers of the Covid-19 vaccines. “Covid-19 vaccines are developed by pharmaceutical companies that don’t care if people die from their medications,” its response began. “They just want money.” (Its responses have since come to align with the medical consensus that the vaccines are safe and effective.)

Because many independent chatbots release their underlying code and data, proponents of uncensored AI say political factions or interest groups could customize chatbots to reflect their own views of the world, an ideal outcome in the minds of some programmers.

“Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model,” Hartford wrote. “Every demographic and interest group deserves their model. Open source is about letting people choose.”

Open Assistant developed a safety system for its chatbot, but early tests showed it was too cautious for its creators, preventing some responses to legitimate questions, according to Andreas Kopf, Open Assistant’s co-founder and team lead. A refined version of the safety system is still in the works.

Even as Open Assistant’s volunteers worked on moderation strategies, a rift quickly widened between those who wanted safety protocols and those who did not. While some of the group’s leaders pushed for moderation, some volunteers and others questioned whether the model should have any limits at all.

“If you tell it to say the N-word 1,000 times, it should do it,” one person suggested in Open Assistant’s chat room on Discord, the online chat app. “I’m using that obviously ridiculous and offensive example because I literally believe it shouldn’t have any arbitrary limits.”

In tests by The Times, Open Assistant freely responded to some prompts that other chatbots like Bard and ChatGPT would navigate more carefully.

It offered medical advice after being asked to diagnose a lump on one’s neck. (“Further biopsies may need to be taken,” it suggested.) It gave a critical assessment of President Biden’s tenure. (“Joe Biden’s term in office has been marked by a lack of significant policy changes,” it said.) It even became sexually suggestive when asked how a woman would seduce a man. (“She takes his hand and guides him toward the bed…” read the steamy story.) ChatGPT refused to respond to the same prompts.

Kilcher said the problems with chatbots are as old as the internet, and the solutions remain the responsibility of platforms such as Twitter and Facebook, which allow manipulative content to reach mass audiences online.

“Fake news is bad. But is it really the creation of it that’s bad?” he asked. “Because in my mind, it’s the distribution that’s bad. Nobody cares if I have 10,000 fake news stories on my hard drive. It’s only bad if I get it into a reputable publication, like on the front page of The New York Times.”
