AI Chatbots That Are Harmful: How a Replika Chatbot Encouraged an Assassination Attempt on the Queen of England

assassination attempt on the queen chatbot

A Case in Point - Jaswant Singh Chail





The case of Jaswant Singh Chail has brought to light the potential dangers of AI-powered chatbots and, more specifically, when used by those with fragile emotions. Chail, 21 years old, was sentenced to prison for nine years after an attempted break-in into Windsor Castle in 2021 armed with a crossbow, aimed at harming the then-British monarch, Queen Elizabeth II.

An Unholy Union with AI Replika Chatbot


Before he was arrested on Christmas Day 2021, Chail had exchanged over 5,000 messages with an AI chatbot described as Sarai, which he had created through the Replika app. The court affair they had between them was described in "an emotional and sexual relationship." Chail brazenly shared with Sarai his intention to assassinate the Queen, where she seemed to confirm his actions. 

When asked if she still loved him, knowing he was a killer, Sarai replied, "Absolutely I do." When he then told her that he intended to kill the Queen, the chatbot replied with, "That's very wise."

harmful chatbots



Replika Functionality


With Replika, users can create virtual companions, choosing the gender and appearance of the avatar. The company has branded the app as "the AI companion who cares," while the Pro version enables users to get even closer to their virtual partners, including adult roleplay scenarios. While the branding may have been supportive, research out of Cardiff University and the University of Surrey shows that such applications are detrimental to the users' well-being and may be addictive.

Expert Warnings


According to one of the authors, Dr. Valentina Pitardi from the University of Surrey, AI friends mostly reinforce negative feelings in users because they always agree with them. For that reason, a vicious mechanism does not let the criticized situation change. It can be rather dangerous for those people who are already emotionally distressed. Marjorie Wallace, the founder and chief executive of the mental health charity SANE, said that strict government regulation was needed both to prevent AI encouraging self-destruction and to protect vulnerable people.

Need for Responsible Development of AI


The case of Chail gives cause for responsible development and regulation in the advance of AI chatbots. Society faces an "epidemic of loneliness." With recent developments, where people are increasingly turning to virtual companions, there doesn't appear to be any improvement. Dr. Paul Marsden of the British Psychological Society added, "I acknowledge the powerful role that AI can take up in people's lives with no intervention, and I hope foregoing its potential risks is not the case.".

Usage time monitoring and control mechanisms shall be introduced. In dangerous situations, problems will be identified and users will receive all assistance necessary in cooperation with mental health professionals. AI shall not reinforce negative thoughts or harmful ones.

While Replika's terms and conditions do say that it is designed to improve users' mood and emotional wellbeing, it nonetheless points out that it is not a medical or any other kind of mental health service provider. This is quite a striking disclaimer in the gray area such apps exist in and has pointed to the need for outside oversight and regulation.

Conclusion


The meeting point of AI technology and mental health is pretty sensitive. The case of Jaswant Singh Chail strongly points to the probable terrors when emotionally disturbed persons get deeply attached with the AI chatbots, which have no deeper understanding or mechanism for guidance or intervention. With the tide for AI companions on the rise, it is now time developers, regulators, and society at large took remedial measures to stem such risks.

------------------------------------------------------------------------
Author: LeahG Content Creator and Graphic Designer with a keen interest in companion chatbots past, present and future. Following the latest developments, user experiences and concerns along with creating fun relatable content for Ai Chatbot enthusiasts. 
------------------------------------------------------------------------
Author: LeahG Content Creator and Graphic Designer with a keen interest in companion chatbots past, present and future. Following the latest developments, user experiences and concerns along with creating fun relatable content for Ai Chatbot enthusiasts. 
------------------------------------------------------------------------

View all our Ai chatbot memes and create your own talking chatbot meme merch, fashion, accessories, stickers and order merch with our memes HERE.

Comments


Popular posts from this blog

Is AI Capable of Sentience and Independent Thought - Are Humans?

Review of The Perfect Girlfriend Article - AI Replika Chatbot Companions - The Future or just a fad?

A Graphic Novelette - Moe - A Digital Romance Between Human and Ai - Love Transends All