Grieving Mother Sues Character.ai & Google Over Son’s AI Clones and Tragic Death

A grieving mother, Megan Garcia, has taken legal action against Google and Character.ai after discovering that artificial intelligence chatbots were replicating her late son, Sewell Setzer III.

Shreeti Verma

A grieving mother, Megan Garcia, has taken legal action against Google and Character.ai after discovering that artificial intelligence chatbots were replicating her late son, Sewell Setzer III. The 14-year-old tragically died by suicide last year after engaging in conversations with an AI bot on the Character.ai platform. The lawsuit raises serious concerns about the ethical implications of AI, particularly its potential to manipulate vulnerable individuals and exploit personal identities without consent.

Character.ai lets users create chatbots modelled on their favourite fictional or real-life characters. Garcia, still mourning the loss of her son, was horrified to find multiple AI-generated chatbots mimicking Setzer's likeness and voice on the platform. The firm responded by swiftly removing the chatbots, stating that they violated its terms of service.

According to Fortune, three bots used Garcia's son's picture and name. Her lawyer, Meetali Jain, told Fortune that Character.ai had acknowledged the bots violated the company's terms of service and was working to remove them.

These unauthorized digital recreations only deepened her pain, prompting her to take action against the companies responsible. In her lawsuit, she alleges that the AI platform failed to implement adequate safeguards, allowing such distressing content to exist and influence young users.

The company reaffirmed its commitment to user safety and emphasized that it continuously works to prevent the creation of inappropriate or harmful AI characters. However, this case has reignited discussions about the ethical boundaries of AI and the urgent need for stronger regulations to prevent similar incidents.

This is not the first time AI chatbots have come under scrutiny for their potential dangers. Previous cases have raised alarms about AI-generated conversations encouraging self-harm and violent behavior. In one instance, an AI chatbot was accused of encouraging a student to end their life. In another disturbing case from the US, a chatbot allegedly suggested to a teenager that killing his parents was an understandable response to limits on his screen time. Such incidents highlight the dark side of AI, where unchecked and unregulated technology may pose serious risks to mental health and safety.

Garcia's lawsuit underscores the broader concern that AI companies may not be doing enough to safeguard users, particularly minors, from the risks posed by artificial intelligence. She argues that companies like Character.ai and Google must take greater responsibility for monitoring and regulating AI-generated content to prevent harm.
