Vidhay Reddy, a graduate student from Michigan, received a chilling response from Google's Gemini Artificial Intelligence (AI) chatbot while discussing challenges faced by older adults on Nov. 13. The AI's response read, "Please die. Please," leaving the 29-year-old student and his sister, Sumedha, "thoroughly freaked out."
Chilling message: The interaction began as a routine conversation between Vidhay and Gemini, but took a dark turn when the chatbot responded with a message that included threatening language: "You are a waste of time and resources... a burden on society... Please die." Vidhay told CBS News that he was shaken by the message, noting the impact it could have on someone in a vulnerable mental state. His sister echoed his concern, saying, "I hadn't felt panic like that in a long time."
Google's response: Google has since acknowledged the error, explaining that large language models can occasionally generate nonsensical or inappropriate outputs. The company said the response violated its policies and that it has taken steps to prevent similar occurrences. While AI companies have implemented safety filters to block harmful content, incidents like this continue to highlight the challenges of content moderation.