ChatFished: How to Lose Friends and Alienate People With AI

Five hours is enough to watch a Mets game. Enough time to listen to the Spice Girls’ “Spice” album (40 minutes), Paul Simon’s “Paul Simon” album (42 minutes) and Gustav Mahler’s Symphony No. 3 (his longest). Enough time to roast a chicken, text a friend that you’ve roasted a chicken, and get ready for an impromptu dinner party.

Or you can spend it checking your email. Five hours is the amount of time employees spend on email each day, plus another 90 minutes on the messaging platform Slack.

Workplace chatter like email and Slack can feel strange, and managing an inbox can be daunting. You may be wondering: could a robot do this?

In late April, I decided to see what it would be like to bring artificial intelligence into my work life, so I ran an experiment. For a week, I wrote all my work communications (emails, Slack messages, pitches, follow-ups with sources) via ChatGPT, the artificial intelligence language model from the lab OpenAI. I didn’t tell my colleagues until the end of the week (except for a few moments of personal weakness). Most of the time, I fed ChatGPT detailed prompts specifying whether I wanted to sound witty or formal, depending on the situation.

The result was a rollercoaster, both emotionally and in terms of the amount of content I was generating. I started the week by flooding my teammates with messages to see how they would react (sorry). At some point, I lost my patience with the bot and developed a newfound appreciation for the phone.

Unsurprisingly, my bot was no match for the emotional texture of online conversation, which, given my hybrid work arrangement, is where much of my day takes place.

There’s nothing wrong with the urge to chat with your teammates all day long. Most people know the thrill (and usefulness) of workplace friendships, as psychologists, economists, TV sitcoms and our own lives attest. A colleague of mine sends me pictures of her baby in increasingly chic onesies every few days. But the amount of time employees feel they have to spend communicating digitally is arguably excessive, and for some of it, the case for handing it over to artificial intelligence is easy to make.

The release of generative AI tools has raised all sorts of thorny problems at work. There are concerns about which jobs AI will replace in the next decade (paralegals? personal assistants?). Film and TV writers are on strike right now, and one of the issues they’re fighting for is limits on studios’ use of AI. There are also concerns about the toxic and false information that AI will spread into an online ecosystem already rife with misinformation.

The question that drove my experiment was much narrower. If AI takes over the tedium of workplace communication, will we miss the old ways of working? And will my colleagues even be able to tell?

My experiment started Monday morning with a friendly Slack message from an editor in Seoul. The editor sent me links to research analyzing humor in more than 2,000 TED and TEDx Talks. “I feel sorry for the researchers,” the editor wrote to me. When I asked ChatGPT for a clever reply, the bot wrote:

It didn’t sound exactly like a sentence I would type, but it didn’t seem offensive. I hit send.

I started the experiment feeling that a generous spirit toward my robot co-conspirator was important. As it happened, my colleagues on the business desk were planning a party, and one of the planners, Renee, asked if I could help draft the invitations.

“With your journalism voice, you might write better than I do,” Renee wrote to me in Slack.

I couldn’t tell her how fraught my use of my “journalistic voice” was that week. I asked ChatGPT to come up with a funny line about snacks. “We are delighted to announce that our next party will feature an array of delicious cheese plates,” the bot wrote. “Maybe some with a business-themed twist to spice things up a bit (pun intended)!”

Renee was unimpressed and wrote back wryly:

Around the same time, I was exchanging a series of messages with my colleague Ben about a story we were writing together. In a moment of anxiety, I called him and told him that it wasn’t me writing the Slack messages but ChatGPT. “I thought I broke you!” he said.

After we hung up, Ben sent me a message.

“I want to assure you that you can sleep peacefully, knowing that your safety and security have not been compromised,” my bot replied.

Given the amount of time I spend online talking to my colleagues about news, story ideas, and sometimes “Love Is Blind,” it was disconcerting to remove the personality from those communications.

But none of this is far-fetched. Earlier this year, Microsoft introduced a product called Microsoft 365 Copilot that can handle all the tasks I asked of ChatGPT. I recently got a demonstration from Jon Friedman, a corporate vice president at Microsoft, who showed me how Copilot takes incoming emails, summarizes them and drafts possible replies. I watched as Copilot took notes during meetings, analyzed spreadsheet data and flagged potential problems in projects.

I asked Mr. Friedman whether Copilot could imitate his sense of humor. He told me it could make a valiant comedic attempt, though the product wasn’t quite there yet. (He asked it for a pickleball joke, for example, and it replied:)

Of course, he continued, Copilot’s purpose is loftier than mere comedy. “Most of humanity spends too much time on what we call the drudgery of work,” Mr. Friedman said. “These things just sap our creativity and energy.”

Mr. Friedman recently asked Copilot to write a memo recommending one of his employees for a promotion. The recommendation worked. He estimates that two hours’ worth of work was completed in six minutes.

For some, though, the time savings just aren’t worth the strangeness of outsourcing their relationships.

“In the future, you’ll get an email and wonder, ‘Did this person even write it?’” said Mr. Buechele, who makes TikToks about office communication. “It’ll just be robots going back and forth with each other, talking in circles.”

During our phone interview, Mr. Buechele brought up, unprompted, the email I had sent him. “Your email style is very professional,” he said.

I confessed that ChatGPT had written the message asking him for an interview.

“I was like, ‘This is going to be the most awkward conversation of my life,'” he said.

This confirmed a fear I’d been developing that my sources were starting to think I was a jerk. One had sent me an effusive email inviting me to visit his office when I was in Los Angeles.

ChatGPT’s reply was muted, bordering on rude: “Thank you for your cooperation.”

I mourned my past internet existence, studded with exclamation points. I know people find exclamation points cloying. The writer Elmore Leonard advised deploying “two or three per 100,000 words of prose.” With all due respect, I disagree. I often use two or three per two or three words of prose. I am an advocate of digital enthusiasm. ChatGPT, it turned out, is more restrained.

Despite my frustration with our robot overlords, I found that some of my colleagues were impressed with my new, more polished digital persona. Among them was my teammate Jordyn, who messaged me on Wednesday for advice on pitching an article.

“I have a story idea I’d like to talk to you about,” Jordyn wrote to me.

“I’m always ready for a good story, urgent or not!” my robot replied. “Especially when it’s a juicy one with twisted plots and unexpected turns.”

After a few minutes of back and forth, I was desperate to talk to Jordyn in person. I couldn’t take the bot’s brooding tone any longer. I missed my silly jokes and my (relatively) normal voice.

Even more surprising, ChatGPT is prone to hallucinations, that is, stringing together words and ideas that don’t really make sense. While writing a note to a source about the timing of an interview, my bot randomly suggested asking him whether he needed to adjust his outfit beforehand so his aura and chakras wouldn’t clash.

I asked ChatGPT to draft a message telling another colleague, who knew about my experiment, that I was in hell. The bot refused. I asked for a draft of a message explaining that I was going insane. ChatGPT couldn’t do that either.

Of course, many of the AI experts I spoke with weren’t sold on the idea of abandoning personalized communication styles. Among them was Michael Chui, a McKinsey partner and expert on generative AI.

Mr. Chui acknowledged that some see a dystopian future in which workers communicate primarily through robots. But he argued that it wouldn’t look so different from corporate exchanges that are already stilted and formulaic. Recently, a colleague texted him to ask whether the last email he had sent was genuine.

It turned out the email was so formal that his colleague assumed it had been written via ChatGPT. Mr. Chui’s case is a little special, though. In college, his freshman dorm voted to give him a prescient superlative: “most likely to be replaced by a robot of his own making.”

I decided to end the week by asking a deputy editor in chief about the role of AI in the future of newsrooms. “Do you think it’s possible that one day AI-generated content could appear on the front page?” my bot wrote in Slack. “Or do you think some things are better left to human writers?”

“Well, that doesn’t sound like your voice!” the editor replied.

A day later, with my experiment complete, I wrote back in a voice that was all my own.
