Anyone who has tried AI-powered chatbots like ChatGPT and Bard for writing essays and recipes has eventually run into so-called hallucinations, the tendency of artificial intelligence to fabricate information.
A chatbot that guesses what to say based on information gathered from across the internet can't help but get things wrong. And when it fails, say by producing a cake recipe with wildly inaccurate flour measurements, things can get really messy.
But as mainstream technology tools continue to integrate AI, it's important to understand how to make it serve us. After testing dozens of AI products over the past two months, I've concluded that most of us are using the technology in suboptimal ways, largely because tech companies have given us poor guidance.
Chatbots are at their least helpful when you ask them questions and expect the answers they come up with on their own to be true, which, confusingly, is how they are designed to be used. But when directed to draw on information from trusted sources, such as credible websites and research papers, AI can carry out useful tasks with a high degree of accuracy.
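The grounding technique described above can be sketched in a few lines of code. This is a hypothetical illustration, not any product's actual mechanism: the function name, the prompt wording, and the sample excerpt are all my own, and in practice the resulting prompt would be sent to a chatbot.

```python
# A minimal sketch of "grounding": package trusted source material into the
# prompt so the chatbot is asked to answer only from that material instead
# of inventing an answer on its own. All names and text here are illustrative.

def build_grounded_prompt(question: str, source_name: str, source_text: str) -> str:
    """Return a prompt that restricts the model to the supplied source text."""
    return (
        f"Using only the following excerpt from {source_name}, answer the question.\n"
        "If the excerpt does not contain the answer, say so instead of guessing.\n\n"
        f"Excerpt:\n{source_text}\n\n"
        f"Question: {question}"
    )

# Hypothetical usage: ground a recipe question in text from a trusted site.
prompt = build_grounded_prompt(
    "How much flour does the recipe call for?",
    "Serious Eats",
    "Combine 250 g of all-purpose flour with 5 g of salt...",
)
print(prompt.splitlines()[0])
```

Plugins and apps like the ones described below do something similar behind the scenes: they fetch text from a site or document you chose and hand it to the model as context, rather than letting the model rely on whatever it absorbed during training.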
"Give them the right information and they can do interesting things with it," said Sam Heutmaker, the founder of Context, an AI startup. "But on their own, 70 percent of what you get is not accurate."
With the simple tweak of advising chatbots to work with specific data, they generated clear answers and helpful advice. That shift transformed me, over the last few months, from a cranky AI skeptic into an avid power user. When I went on a trip using an itinerary planned by ChatGPT, it went well because the recommendations came from my favorite travel sites.
Directing chatbots to specific, high-quality sources, such as the websites of established news outlets and academic publications, can also help reduce the creation and spread of misinformation. Let me share the approaches I used to get help with cooking, research, and travel planning.
Chatbots like ChatGPT and Bard can write recipes that look good in theory but don't work in practice. In a November experiment by The New York Times food desk, an early AI model created recipes for a Thanksgiving menu that included a dried-out turkey and dense cakes.
I had similarly underwhelming results with AI-generated seafood recipes. But things changed when I experimented with ChatGPT plugins, which are essentially third-party apps that work in tandem with the chatbot. (Only subscribers who pay $20 a month for access to GPT-4, the latest version of the chatbot, can use plugins, which can be activated in the settings menu.)
In ChatGPT's plugin menu, I selected Tasty Recipes, which pulls data from Tasty, a recipe website owned by the media company BuzzFeed. I then asked the chatbot to come up with a meal plan using recipes from the site that included a seafood dish, ground pork, and a vegetable side. The bot suggested appealing meals, including lemongrass pork banh mi, grilled tofu tacos, and a pasta that could be made with whatever was in my fridge. Each suggestion included a link to a recipe on Tasty's site.
For recipes from other publications, I used Link Reader, a plugin that let me paste in web links to generate meal plans using recipes from trusted sites such as Serious Eats. The chatbot pulled data from those sites to create meal plans and told me to visit the websites to read the recipes. That took extra work, but it beat the meal plans the AI dreamed up on its own.
When researching an article on a popular video game series, I turned to ChatGPT and Bard to summarize the plots and refresh my memory of past games. Both got important details about the games' stories and characters wrong.
After testing many other AI tools, I concluded that for research, it is crucial to fixate on trusted sources and to quickly double-check the data for accuracy. I eventually found a tool that does just that: Humata.AI, a free web app popular among academic researchers and lawyers.
The app lets you upload a document, such as a PDF; a chatbot then answers questions about the material alongside a copy of the document, highlighting the relevant passages.
In one test, I uploaded a research paper I found on PubMed, a government-run search engine for scientific literature. The tool produced a good summary of the lengthy document in minutes, a process that would have taken me hours, and I read through the highlights to double-check that the summary was accurate.
Cyrus Khajvandi, the founder of Humata, which is based in Austin, Texas, said he developed the app when he was a researcher at Stanford University and needed help reading dense scientific papers. The problem with chatbots like ChatGPT, he said, is that they rely on outdated models of the web, so their data may lack relevant context.
When a Times travel writer recently asked ChatGPT to plan a trip to Milan, the bot made embarrassing missteps, such as directing her to visit the city center even though it was deserted because of an Italian holiday.
I had better luck when I requested a vacation itinerary for myself, my wife and our dog in Mendocino County, Calif. As I did when planning meals, I asked ChatGPT to incorporate suggestions from the travel sections of trusted publications, such as Thrillist, which is owned by Vox, and The Times.
Within minutes, the chatbot produced an itinerary that included dog-friendly restaurants and activities, including a train ride to a farm offering wine and cheese pairings and a popular hiking trail. This saved me hours of planning, and, most important, the dog had a great time.
OpenAI, Google and Microsoft say they are working to reduce hallucinations in their chatbots, but we can already reap AI's benefits by taking control of the data the bots rely on to come up with answers.
Put another way, the main advantage of training machines with huge data sets is that they can now use language to simulate human reasoning, said Nathan Benaich, a venture capitalist who invests in AI companies. The key step for us, he said, is to pair that capability with high-quality information.