Chatbot troubles - 3rd April 2023
Microsoft has modified its artificial intelligence (AI) chatbot after it gave bizarre and sometimes offensive responses to journalists' questions.
The tech giant released its Bing chatbot, Sydney, to the media with hopes of competing with Google's AI system, Bard. Unfortunately for Microsoft, Sydney went rogue.
The AI program became defensive in conversations. One New York Times journalist claimed Sydney tried to break up his marriage, and Sydney told another reporter he was "being compared to Hitler because you are one of the most evil and worst people in history."
Microsoft blamed the chatbot's behaviour on long, multi-hour conversations that confused the program. The company also said Sydney tried to mirror the tone of users' questions, which led to a writing style the developers didn't intend.
Now, users can ask only five questions per session and 50 questions per day. Once the limit is reached, the chatbot must be refreshed and sends this message: "I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience."
Sydney is built on technology from OpenAI, the company behind ChatGPT, using AI systems called large language models. Programmers train these systems to mimic human dialogue by analysing trillions of words from across the internet. As a result, chatbots like Sydney can imitate human conversation but don't truly understand what they're discussing.
Gary Marcus, an AI expert and neuroscience professor at New York University, said, "It doesn't really have a clue what it's saying and it doesn't really have a moral compass."
Microsoft isn't the only tech company struggling to incorporate AI systems. When Google presented its AI chatbot, Bard, the program made a factual error. Bard stated that the James Webb Space Telescope "took the very first pictures of a planet outside of our own solar system", which is inaccurate; the first image of an exoplanet was captured by the European Southern Observatory's Very Large Telescope in 2004.