Sensations English
Vocabulary and Grammar

Verbs

Complete the sentences. Select the correct verb. There are 5 questions.

  • Practise using verbs to complete sentences
  • Practise choosing a verb from a list of options
  • Get feedback on your choice of verb
  • Read sentences from the news report


Select level

  • A2 Elementary
  • B1 Pre-intermediate
  • B1+ Intermediate
  • B2 Upper Intermediate
  • C1 Advanced
Transcript
AI teething pains or major red flag? - 3rd April 2023
Having gone rogue on journalists, churning out weird and offensive responses to their queries, Microsoft's artificial intelligence (AI) chatbot, Sydney, is undergoing modifications. Microsoft launched Sydney, its Bing-powered chatbot, to rival Google's AI, Bard, but Sydney began rambling and became defensive in conversations.
According to reports, one journalist was disturbed by Sydney's attempt to separate him from his partner, whilst another was told by the chatbot that he was "being compared to Hitler because you are one of the most evil and worst people in history."
Microsoft attributed the chatbot's behaviour to the confusion caused by multi-hour-long conversations and to its attempt to mirror the tone of users' questions, which resulted in a writing style the developers hadn't intended.
To rectify the situation, Microsoft has limited users to 5 questions per session and 50 questions per day. After the allotted questions, the chatbot requires refreshing, displaying the message, "I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience."
Sydney was designed by OpenAI, which is also responsible for developing ChatGPT, using AI systems called large language models. By analysing big data in the form of trillions of words on the internet, these systems emulate human dialogue, which enables chatbots like Sydney to model human discourse relatively well without any deep understanding of the dialogue's nuances. AI expert and neuroscience professor at New York University, Gary Marcus, emphasised, "It doesn't really have a clue what it's saying and it doesn't really have a moral compass."
Google also encountered glitches after launching its AI chatbot, Bard, when the programme generated inaccurate information about the James Webb Space Telescope, stating that it "took the very first pictures of a planet outside of our own solar system," which isn't correct.
Are these problems simply bumps in the road to an AI-powered future, or could they signal more deep-seated issues to come?