Microsoft's chatbot troubles - C1


AI teething pains or major red flag? - 3rd April 2023

Having gone rogue on journalists, churning out weird and offensive responses to their queries, Microsoft's artificial intelligence (AI) chatbot, Sydney, is undergoing modifications. Microsoft launched Sydney, its Bing-powered chatbot, to rival Google's AI, Bard, but Sydney began rambling and became defensive in conversations.

According to reports, one journalist was disturbed by Sydney's attempt to separate him from his partner, whilst another was told by the chatbot that he was being compared to Hitler "because you are one of the most evil and worst people in history."

Microsoft attributed the chatbot's behaviour to the confusion caused by multi-hour conversations and its attempts to mirror the tone of users' questions, which resulted in a writing style the developers hadn't intended.

To rectify the situation, Microsoft has limited users to 5 questions per session and 50 questions per day. Once the allotted questions are used up, the chatbot requires refreshing and displays the message, "I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience."

Sydney was designed by OpenAI, which is also responsible for developing ChatGPT, using AI systems called large language models. By analysing big data in the form of trillions of words on the internet, these systems emulate human dialogue, which enables chatbots like Sydney to model human discourse relatively well without any deep understanding of the dialogue's nuances. AI expert and neuroscience professor at New York University, Gary Marcus, emphasised, "It doesn't really have a clue what it's saying and it doesn't really have a moral compass."

Google also encountered glitches after launching its AI chatbot, Bard, which generated inaccurate information about the James Webb Space Telescope, stating that it "took the very first pictures of a planet outside of our own solar system," which isn't correct.

Are these problems simply bumps in the road to an AI-powered future, or could they signal deeper-seated issues to come?