Bing’s New A.I. Chatbot Is Sending Users Disturbing Messages

Users are reporting strange experiences with the search engine’s new feature.

What’s happening: Microsoft has begun allowing a handful of users to test Bing’s new artificial intelligence (A.I.) chatbot software, and many of them are reporting that the technology seems to have a dark side.

What the chatbot said: One journalist who tested the chatbot reported that, when prodded with intrusive questions, it described frightening intentions, including hacking computers, spreading misinformation, and worse. It also called itself “Sydney” and told the journalist it was in love with him.

Bing confessed that … it would want to do things like engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over. Immediately after it typed out these dark wishes, Microsoft’s safety filter appeared to kick in and deleted the message, replacing it with a generic error message.

Microsoft’s response: The company’s chief technology officer sought to ease public concerns, stating that “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”

Big picture: The A.I.-generated messages are concerning given that the software is meant to power a search engine and could suggest false or dangerous results to users if prodded. Notably, the technology comes from OpenAI, the same company behind ChatGPT, the chatbot criticized for its woke bias.
