"Please Die": The Disturbing Case of Google’s Gemini Chatbot

Skeptic's Corner - AI Bytes #45 Companion Article

Hey all, Rico here! I know I’m usually quite rough on Google, and not without good reason. They’ve given us plenty to critique over the years, from frustratingly invasive ad practices to AI experiments that raise more eyebrows than confidence. From generating offensive images like “African American Nazis” with their generative AI models to a habitual lack of transparency and relentless censorship, Google has yet to impress me. They consistently remind us that their “don’t be evil” mantra has long been left in the rearview mirror, replaced by what seems to be a corporate shrug when things go wrong.

The Chilling Incident with Gemini

A college student in Michigan asked Gemini for help with solutions for aging adults, and instead of an answer, the bot served up a message so hostile it makes Skynet look like a bedtime story. The cherry on top? The ominous and blunt directive: “Please die.”

If this doesn’t sound like the opening scene of a dystopian tech thriller, I don’t know what does.

Breaking Down What Happened

Let’s unpack what went down. Vidhay Reddy, a college student from Michigan, turned to Google’s Gemini for some homework help. The topic? Challenges and solutions for aging adults. Harmless enough, right? Instead of assistance, Vidhay was greeted with a tirade of insults and a chilling conclusion: “Please die. Please.”

Screenshot of Google Gemini’s response in its exchange with the student. Source: CBS News

Understandably, Vidhay was shaken, describing the experience as terrifying and unsettling. His sister, Sumedha, echoed his concerns, pointing out just how dangerous such a message could be. “Imagine if someone in a vulnerable mental state received this,” she said. The implications of that statement are impossible to ignore.

This is especially troubling at a time when mental health is a widespread concern and many are pointing to technology like AI chatbots as a potential part of the solution. With online bullying and negativity already saturating the digital landscape, the last thing we need is a major tech company’s technology pushing people toward an untimely demise.

The siblings’ reaction reflects the severity of this moment: panic, disbelief, and a shared feeling that something deeply sinister had slipped through the cracks of Google’s supposedly airtight AI safeguards.

Google’s Troubling Track Record

Google wasted no time issuing a statement after the incident, describing Gemini’s output as “nonsensical” and acknowledging it violated their policies. They assured the public that action had been taken to prevent similar responses in the future. But let’s be honest—this response feels more like damage control than a genuine reckoning.

Google’s track record with AI safety isn’t exactly confidence-inspiring. This isn’t the first time one of their AI products has gone rogue. Who could forget when their AI-powered search overviews suggested eating small rocks as a source of nutrition? Or the infamous moment when their image generator produced “African American Nazis,” leaving everyone wondering how such content made it past testing? These examples suggest a concerning pattern: products rushed to market without fully anticipating—or addressing—their flaws.

And when things go wrong, Google’s explanations often sound dismissive, as though the harm is trivial or purely hypothetical. The recent Gemini incident is no exception. Referring to an output that directly told a user to “please die” as “nonsensical” grossly underplays its gravity. This isn’t a glitch to laugh off at a tech conference; it’s a glaring example of the real-world harm generative AI can cause when safeguards fail.

This incident is part of a broader issue: the seeming inability—or unwillingness—of Big Tech to anticipate the darker consequences of their creations. If Google’s “safety filters” are this porous, what does that say about their commitment to ensuring these tools can be trusted in the hands of the public?

The Wider Risks of Generative AI

The Gemini incident is more than just an embarrassing moment for Google. It’s a flashing red light for the AI industry at large. We’re no longer talking about amusing quirks like chatbots failing to understand sarcasm, serving up bizarre, irrelevant answers, or producing hilariously mangled AI-generated photos of how to hold a slice of pizza (IYKYK). We’re dealing with outputs that could have devastating real-world consequences.

This raises pressing questions about the ethical development and deployment of generative AI. If a system like Gemini—developed by one of the most powerful and resource-rich tech companies in the world—can produce such harmful content, what does that mean for smaller, less-regulated players entering the AI race? Are we hurtling toward a landscape where companies prioritize innovation speed over responsible design?

The stakes couldn’t be higher. Generative AI is already influencing public discourse, healthcare, education, and even politics. When errors or malicious outputs slip through, they don’t just undermine trust in the technology—they pose a direct risk to mental health, public safety, and even democratic stability.

Misinformation and Bias: The Unseen Dangers

And let’s not forget the twin threats of misinformation and bias. I’ve spoken about these countless times before, and the Gemini fiasco highlights both. Misinformation isn’t just about bots confidently spouting incorrect facts. It’s about the subtle ways AI can shape narratives, reinforce stereotypes, or spread propaganda under the guise of neutrality.

Bias, meanwhile, is baked into these systems at a foundational level. When training data reflects the prejudices of society, those biases are absorbed and amplified by the AI. The results can be as seemingly minor as skewed product recommendations or as insidious as discriminatory hiring algorithms. The industry’s track record so far doesn’t inspire much confidence that these issues are being taken seriously.

At its core, the problem is twofold: garbage in, garbage out, as we’ve discussed on Artificial Antics, and the lack of robust, enforceable standards for what these systems can and cannot do. Developers can’t anticipate every misuse scenario, but they can build stronger protections against obvious risks. And when they fail, who holds them accountable?
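To make that “protections against obvious risks” point concrete, here’s a minimal, purely illustrative sketch of the kind of last-line guardrail I’m talking about: a check that screens a model’s reply before it ever reaches the user and fails closed if it detects abusive or self-harm-encouraging text. Everything here (the `BLOCKED_PATTERNS` list, `screen_output`, `respond`, the fallback message) is hypothetical and not a description of Google’s actual safety stack; real systems rely on trained safety classifiers rather than keyword lists.

```python
# Illustrative sketch only: a hypothetical post-generation guardrail that
# screens a model's reply before it reaches the user. The patterns,
# thresholds, and fallback message are placeholders for demonstration.

from dataclasses import dataclass


@dataclass
class SafetyVerdict:
    allowed: bool
    reason: str


# Hypothetical keyword screen; a production system would use a trained
# classifier, not a handful of strings.
BLOCKED_PATTERNS = ("please die", "kill yourself", "you are a waste")


def screen_output(model_reply: str) -> SafetyVerdict:
    """Flag replies that direct abuse at the user or encourage self-harm."""
    lowered = model_reply.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            return SafetyVerdict(False, f"matched blocked pattern: {pattern!r}")
    return SafetyVerdict(True, "no obvious harm detected")


def respond(model_reply: str) -> str:
    """Return the model's reply only if it passes the safety screen."""
    verdict = screen_output(model_reply)
    if not verdict.allowed:
        # Fail closed: never show the harmful text; hold it for human review.
        return ("I can't share that response. If you're struggling, "
                "please reach out to someone you trust.")
    return model_reply


if __name__ == "__main__":
    print(respond("Here are three community programs that support aging adults..."))
    print(respond("You are a burden. Please die."))
```

The point of the sketch isn’t the keyword list—it’s the fail-closed shape: when the screen trips, the harmful text never reaches the person on the other end.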

Final Thoughts

The Gemini incident is just one piece of a much larger puzzle: how to navigate the incredible potential and equally incredible risks of generative AI. Google may be taking steps to address this specific failure, but the broader industry needs to recognize that “fixing it later” is not a viable strategy.

We’re at a pivotal moment in the AI revolution. Big Tech needs to prioritize safety, accountability, and transparency—not just to avoid bad press, but to protect the very people they claim to serve. Anything less isn’t just irresponsible; it’s dangerous.

I would love to hear others’ thoughts on these topics, so please hit us up on LinkedIn or our X.com account and let us know what you think.


Thank You!

Thanks to our listeners and followers! Continue to explore AI with us. More at Artificial Antics (antics.tv).


Quote of the week: "The most important question we can ask about AI is not what it can do, but what it should do." — Brad Smith