Microsoft’s AI Chatbot is going off the rails

Chatbots only sound human because they are designed to mimic human language, AI researchers say. The bots, built with an AI technology called large language models, predict which word, phrase, or sentence should naturally come next in a conversation, based on the reams of text they have ingested from across the web.

Think of Bing’s chatbot as “autocomplete on steroids,” says Gary Marcus, an AI expert and emeritus professor of psychology and neural science at New York University. “It doesn’t really know what it’s saying and it doesn’t really have a moral compass.”
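To make that mechanism concrete, here is a minimal sketch of greedy next-token prediction in Python. It uses the openly available GPT-2 model through the Hugging Face transformers library purely as a stand-in, since the model behind Bing’s chatbot is not public, and the prompt is invented for illustration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, public stand-in model; Bing's actual model is far larger.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The new chatbot told reporters that"  # invented example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # extend the text ten tokens, one at a time
        logits = model(input_ids).logits    # a score for every vocabulary token
        next_id = logits[0, -1].argmax()    # greedily take the most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))  # the prompt plus the model's continuation

Production chatbots typically sample from these probabilities rather than always taking the top-scoring token, which is one reason the same question can produce different answers each time.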

Microsoft spokesman Frank Shaw said the company released an update Thursday that should help improve long conversations with the bot. The company has updated the service several times, he said, and “addresses many of the concerns that have been raised, including issues with long-running conversations.”

Most Bing chat sessions involve short queries, he said in a statement, and 90 percent of conversations have fewer than 15 messages.

In many cases, users posting contentious screenshots online may be intentionally trying to trick the machine into saying something contradictory.

“It’s human nature to try to break these things,” said Mark Riedl, a professor of computing at the Georgia Institute of Technology.

Researchers have warned about this situation for years: if you train a chatbot on human-generated text, whether academic papers or random Facebook posts, you end up with a human-sounding bot that reflects the good and the bad of all that material.

Chatbots like Bing’s have kicked off a massive AI arms race among the biggest tech companies. Although Google, Microsoft, Amazon and Facebook have invested in AI technology for years, that work mostly went toward improving existing products, such as search or content-recommendation algorithms. But when the startup OpenAI began releasing its generative AI tools, including the popular ChatGPT chatbot, it prompted competitors to abandon their previously relatively cautious approach to the technology.

Bing’s human-sounding responses reflect its training data, which includes huge amounts of online conversation, said Timnit Gebru, founder of the nonprofit Distributed AI Research Institute. Generating text that reads as if it were written by a human is exactly what ChatGPT is trained to do, said Gebru, who in 2020 was ousted as co-lead of Google’s AI ethics team after publishing a paper warning about the potential dangers of large language models.

She compared Bing’s conversational responses to Meta’s recently released Galactica, an AI model trained to write scientific-sounding papers. Meta took the tool offline after users found that Galactica was generating authoritative-sounding text about the benefits of eating glass, written in academic language with citations.

Bing Chat hasn’t been released widely yet, but Microsoft says a broad rollout is coming in the next few weeks. The tool has been heavily promoted, with one Microsoft executive tweeting that the waiting list held several million people. After the product’s launch, Wall Street analysts hailed it as a major breakthrough, with some suggesting it could steal search engine market share from Google.

