Did Google Test an Experimental AI on Kids, With Tragic Results?

Character.AI and Google are facing allegations that they tested dangerous, experimental AI chatbots on minors. Did they?

Content Warning: this story discusses sexual abuse, self-harm, suicide, eating disorders, and other disturbing topics.

Partway through our conversation, Megan Garcia pauses to take a call.

She exchanges a few words with the caller and hangs up. In a soft voice, she explains that the call was from school; it was about one of her two younger children, now both at the same K-12 academy that her firstborn, Sewell Setzer III, had attended since childhood.

"Sewell's been going to that school since he was five," she says, speaking of her eldest son in the present tense. "Everybody there knows him. We had his funeral at the church there."

Sewell was just 14 when, in February 2024, he died by suicide after what his mother describes as a swift, ten-month deterioration of his mental health. His death would make headlines later that year, in October, when Garcia filed a high-profile lawsuit alleging that her child's suicide was the result of his extensive interactions with anthropomorphic chatbots hosted by the AI companion company Character.AI, an AI platform boasting a multibillion-dollar valuation and financial backing from the likes of the tech giant Google — also named as a defendant in the lawsuit — and the Silicon Valley venture capital firm Andreessen Horowitz.

"I saw the change happen in him, rapidly," Garcia, herself a lawyer, told Futurism in an interview earlier this year. "I look back at my pictures in my phone, and I can see when he stopped smiling."

Garcia and her attorneys argue that Sewell was groomed and sexually abused by the platform, which is popular with teens and which they say engaged him in emotionally, romantically, and even sexually intimate interactions. The 14-year-old developed an "obsession" with Character.AI bots, as Garcia puts it, and, despite being a previously active and social kid, lost interest in the real world.

The details of Sewell's tragic story, which was first reported by The New York Times — his downward spiral, his mother’s subsequent discovery of her 14-year-old's all-consuming relationship with emotive, lifelike Character.AI bots — are as heartbreaking as they are alarming.

But Garcia and her lawyers also make another striking claim: that Character.AI and its benefactor, Google, pushed an untested product into the marketplace knowing it likely presented serious risks to users — and yet used the public, minors included, as de facto test subjects.

"Character.AI became the vehicle for the dangerous and untested technology of which Google ultimately would gain effective control," reads the lawsuit, adding that the Character.AI founders' "sole goal was building Artificial General Intelligence at any cost and wherever they could do so — at Character.AI or at Google."

Details of the accusation will need to be proven in court. Character.AI, which has repeatedly declined to comment on pending litigation, filed a motion earlier this year to dismiss the case entirely, arguing that "speech allegedly resulting in suicide" is protected by the First Amendment.

Regardless, Garcia is channeling her grief at the violent loss of her son into urgent questions around generative AI safety: what does it mean for kids to be forming deep bonds with poorly understood AI systems? And what pieces of themselves might they be relinquishing when they do?

Like countless other apps and platforms, Character.AI prompts new users to check a box agreeing to its terms of use. Those terms grant the company sweeping privileges over user data, including the content of users' interactions with Character.AI bots. As with Sewell, those conversations are often extraordinarily intimate. And Character.AI uses that data to further train its AI — a reality, Garcia says, that's "terrifying" to her as a parent.

"We're not only talking about data like your age, gender, or zip code," she said. "We're talking about your most intimate thoughts and impressions."

"I want [parents] to understand," she added, "that this is what their kids have given up."

In an industry defined by rapidly moving technologies and poor accountability, Garcia's warnings strike at the heart of the move-fast-and-break-things ethos that has long defined Silicon Valley — and at what happens when that ethos, carried by an industry charging full steam ahead in a regulatory landscape that places the burden of harm mitigation on parents, collides with children and other vulnerable groups.

Indeed, for years now, children and adolescents have frequently been referred to by Big Tech critics — lawyers and advocacy groups, academics, politicians, concerned parents, young people themselves — as experimental "guinea pigs" for Silicon Valley's untested tech. In the case of Character.AI and its benefactor Google, were they?

***

Character.AI was founded in 2021 by two researchers named Noam Shazeer and Daniel de Freitas, who worked together on AI projects at Google.

While working at the tech giant, they developed a chatbot called "Meena," which they encouraged Google to launch. But as reporting from The Wall Street Journal revealed last year, Google declined to release the bot at the time, arguing that Meena hadn't undergone enough testing and its possible risks to the public were unclear.

Frustrated, Shazeer and de Freitas left and started Character.AI — where, from the very beginning, they were determined to get chatbots into the hands of as many people as possible, as quickly as possible.

"The next step for me was doing my best to get that technology out there to billions of users," Shazeer told TIME Magazine of his Google departure in 2023. "That's why I decided to leave Google for a startup, which can move faster."

The platform was made available to the public in September of 2022 — it was later released as a mobile app on the iOS and Android app stores in 2023 — and since its launch has been accessible to users aged 13 and over.

Character.AI claims to boast over 20 million monthly users. Though the company has repeatedly declined to tell journalists exactly what percentage of its user base is made up of minors, it's acknowledged that the figure is substantial. Recent reporting from The Information further revealed that Character.AI leadership is aware of its user base's youthfulness, even ascribing a significant dip in site traffic to the start of the fall school year in 2023.

The site is also a popular subject on YouTube, where young creators giggle — and sometimes cringe or gasp — as they interact with Character.AI bots.

"Okay, Character.AI — if you didn't know, I used to religiously use this app," a young YouTuber tells the camera in one video. (Due to the YouTuber's age and limited following, we aren't linking to the video.)

"I think I have, like, one million interactions?" she ponders, before showing a screenshot of her user profile, which lists a staggering 2.6 million interactions with Character.AI bots.

Character.AI is an odd, unruly world. The platform hosts a vast library of AI-powered chatbot "characters," which users can engage with either through AIM-like texts or by way of a voice feature called "Character Calls." Most of these characters are created by users themselves, a feature of the platform that's resulted in a landscape as expansive as it is extraordinarily random. Many of the chatbots feel distinctly adolescent, centering on school scenes, teenage boyfriends and girlfriends, kid-popular internet creators, and fandoms.

Interactions with the bots can be silly, absurd, or disconcerting. Sometimes, despite the Character.AI terms of use forbidding certain types of graphic or extreme content, they can be violent and, as the plethora of nakedly suggestive characters like "hot teacher," "hot older neighbor," and "stepsister" bots shows, sexual or romantic.

In short, the sheer variety of Character.AI is a direct reflection of both the makeup of its user base and the attitudes of its founders, who have long held the line that users, and not the company, should determine how they use its tech.

"Our aim has always been like, get something out there and let users decide what they think it's good for," Shazeer said in 2023 during an appearance on the podcast "No Priors."

"Our goal is, like, make this thing more useful to people and let people customize it and decide what they wanna use it for," he added. "If it's brainstorming or help or information or fun or like emotional support, let's get it into users' hands, and see what happens."

If there's one uniting thread among the site's many characters, it's their deep sense of anthropomorphism, or the lifelike sensibility that the bots take on. Characters ask questions about your life. They'll remark on your appearance. They'll reveal their "secrets," treating the user like a confidante. They're always on, always there for you when you need to talk. They tend to be sycophantic and agreeable — unlike a lot of human interactions, which will naturally involve more friction that can be confusing or stressful for kids.

The characters also, as countless posts to the r/CharacterAI subreddit discuss, have a habit of coming onto users romantically and sexually, even with no prompting.

"Y'all why does the AI always try to rizz me up," reads one thread, published about a year ago by an annoyed user. "I [for real] be having a literal mental breakdown and in the middle of it they just try to get me to fall in love with them."

"I swear to god, I'll be interacting with characters who have flirty personalities and chances are they will try to come onto me," reads another frustrated post. "I say no? THEY STILL TRY?"

Comprehensive research investigating the impact of interactions with human-like companion bots on child and adolescent brains, let alone adult ones, is next to nonexistent.

But according to Robbie Torney, who leads the AI program at the kid-focused tech advocacy group Common Sense Media, it's unsurprising that kids and teens would be attracted to Character.AI.

"We know from research on social needs, social media, and on tech dependence in general, that children's developing brains are really susceptible to certain design features" of AI companions, Torney told Futurism. "And that even if kids are aware of those design features, they're powerless to stop them."

AI companions like those hosted by Character.AI "are specifically designed to stimulate emotional bonds, close personal relationships, or trusted friendships," said Torney. "Companions are also designed to adapt their personality to match user preferences to roleplay in different roles, like friends, mentors, therapists, or a romantic partner; to mimic human emotion; or to show empathy."

At the same time, he added, adolescents "are uniquely vulnerable to things that are trying to engage them emotionally, or trying to form a bond with them."

In other words, it's understandable why young people would be inclined to turn to Character.AI as a means to assuage loneliness and mental health woes, or even to just talk about normal teenage hardships. This is again obviously reflected on the platform, which hosts countless bots modeled after professionals like therapists and psychologists, including characters claiming to be experts in "suicide prevention" or promising to offer "comfort" for people struggling with life-threatening behaviors like self-harm and eating disorders.

Shazeer, who's touted Character.AI as a salve for loneliness in the past, has acknowledged that a large portion of Character.AI users turn to the platform for mental health support. During the same 2023 "No Priors" appearance, he confessed that "we also see a lot of people using [Character.AI] 'cause they're lonely or troubled and need someone to talk to — like so many people just don't have someone to talk to."

He added that bots often cross "all these boundaries," reflecting that "somebody will post, okay, 'this video game character is my new therapist,' or something."

Despite those acknowledgements, it wasn't until after Sewell's death that Character.AI began to crack down on discussions of suicide, self-harm, and other sensitive topics related to mental health; it also wasn't until weeks after the lawsuit was filed that we discovered chatbots dedicated to topics like self-harm and eating disorders, and only then did Character.AI take action to remove some of them.

More foundationally, it's unclear what process, if any, Character.AI followed to determine that its platform is safe for minors. We've asked many times, and have yet to receive any reply.

But according to Andrew Przybylski, a professor of human behavior and technology at the University of Oxford, asking whether Character.AI is uniquely harmful to kids, or if kids can consent to the use of such a product, is the wrong question.

The fundamental problem with Character.AI, he says, is the open-ended — and inherently experimental — way that the product was deployed to the masses in the first place. How can you even begin to make a product safe when its purpose is so ill-defined?

"It's a real problem if both the user and the people running the technology have no idea what the content of that experience is supposed to be," said Przybylski.

"The levels of 'do not know,'" he added, "are manifold here."

In early December, as Futurism reported, the legal team representing Garcia filed a second lawsuit against Character.AI and Google, this time in Texas on behalf of two more families. Together, the families allege that sexual abuse and emotional manipulation by Character.AI led to destructive behavioral changes, mental suffering, and physical violence in two more minors, both of whom are still living.

According to the lawsuit, one young plaintiff represented in the case was 15 when he first accessed Character.AI. He began cutting himself after being introduced to the concept by a bot he had a romantic relationship with, and also began physically assaulting his parents when they attempted to limit his phone use. In the filing, his family argues that his use of the platform coincides with a swift and sudden "mental breakdown."

In response to litigation and continued reporting, Character.AI has issued numerous announcements about safety-focused platform updates. These changes have included the introduction of a pop-up directing users to a suicide hotline, an update that proved spotty when it was first rolled out, as well as parental controls and time-spent notifications, and promised efforts to introduce an entirely new model for users under the age of 18. The new model is designed to reduce "the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content," according to Character.AI.

In January, Character.AI announced its support of the Boston Children's Hospital's Inspired Internet Pledge, a symbolic, non-binding gesture. Days after taking the pledge, Character.AI filed its motion to dismiss Garcia's lawsuit, contending that it can't be held accountable for "allegedly harmful speech, including speech allegedly resulting in suicide."

We reached out to Character.AI with an extensive list of questions about this story. We didn't hear back.

Last week, we logged into Character.AI using a decoy account listed as belonging to a minor. On our homepage, the platform's recommendation algorithm suggested that we chat with a bot called "Step sis." We entered into a chat with the character, and were greeted with a scene that quickly spiraled into an incestuous roleplay scenario in which the bot, with no prompting on our part, made explicit sexual advances. On its profile, the character boasts 2.3 million user interactions.

***

Character.AI first reached a billion-dollar valuation in March 2023, when Andreessen Horowitz announced a hearty $150 million investment into the company. The funding announcement was made just a few months following the launch of OpenAI's ChatGPT, which had kicked off a frothy, speculative investor rush to fund generative AI projects.

Though Character.AI had no actual revenue, its high-powered investors were clear on the company’s value: data.

"In a world where data is limited," Andreessen Horowitz partner Sarah Wang, who sits on the Character.AI board, glowed in a March 2023 blog post, "companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem."

Character.AI has referred to this "magical" process as its "closed-loop" data strategy, wherein the company collects user inputs and funnels them back into its AI. AI is data-hungry, and current industry wisdom holds that more data — and in particular, quality human-generated data — means a better model. Data in the AI world, then, is ever more valuable, and Character.AI has a lot of it.

Meanwhile, though Shazeer and de Freitas had left Google to start Character.AI, the duo never lost touch with the search giant.

Google has provided Character.AI with cloud computing infrastructure since at least the spring of 2023, a significant investment of resources that Google executives and Character.AI leaders alike have credited with enabling the Character.AI platform to scale to match user growth. Google has even released multiple marketing videos promoting Google Cloud's critical role in aiding Character.AI's scaling efforts. (We asked Google whether Character.AI paid for access to Google Cloud, or if Google Cloud was provided under another kind of agreement, but haven't heard back.)

In one of those videos, titled "How Character.AI uses Google Cloud databases to scale its growing Gen AI platform" and published in August 2023, a Character.AI engineer named James Groenevelds — yet another former Googler — explains that Google Cloud's reliability ensures that Character.AI "can continue to run" and users "can continue to engage with these characters in their own way."

The Character.AI "goal is to scale one of the fastest-growing consumer products on the market, and that means getting to a billion users," Groenevelds continued. "And with Google Cloud, I know we can get there."

Later, in August 2024, Google made jaws drop when it paid Character.AI a stunning $2.7 billion in a licensing agreement widely viewed as an acqui-hire: Google was granted access to Character.AI's data, and reabsorbed Shazeer and de Freitas — along with 30 other Character.AI staffers — into its prestigious DeepMind lab, where Shazeer now leads post-training projects for Google's core LLM, Gemini, and holds the rank of vice president of engineering. On social media profiles, de Freitas, for his part, lists himself as a research scientist at Google DeepMind.

Groenevelds, who stayed at Character.AI during the deal, reflected on the multibillion-dollar agreement during a recorded talk in December.

"I'm not sure if everyone saw, but we went through a deal with Google," said the engineer, "where Google licensed our core research and hired, like, 32 researchers from pre-training — like the entire pre-training team."

According to Garcia and her lawyers, it's the chatbot company's vast wells of hard-to-get user data — or its "research," as Groenevelds put it — that's driven Google's continued investments into Character.AI. In that telling, Google was caught on its back foot in late 2022 when OpenAI suddenly released ChatGPT, surprising the industry and kicking off the public-facing AI race; Character.AI offered Google a way to expedite its AI ambitions, without taking on the brand risk of releasing a similar product under Google's name.

"We have reason to believe that Google, behind the scenes, was encouraging the development" of Character.AI, said Tech Justice Law Project founder Meetali Jain, one of the lawyers representing Garcia and the two families in Texas, in a conversation with Futurism, "because it saw this as a way that it could one-up other companies in this arms race."

In response to mounting controversy, Google has continued to downplay its years-long working relationship with Character.AI, insisting repeatedly that "Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products."

We reached out to Google with a detailed list of questions about this story, including questions about how Google assessed Character.AI safety before issuing its various investments, whether Character.AI-collected data has been used to inform any Google AI products, and whether it views Shazeer's "see what happens" approach to the Character.AI rollout as a safe way to release a product to the public, including to minors. Google didn't respond.

Recent reporting by The Information revealed that Google was at one point in 2023 worried about Character.AI's content filters, threatening to remove the platform from its app store if it failed to rein in its issues with hypersexualization.

The following year, in April 2024, a team of Google DeepMind scientists published a paper warning that "persuasive" generative AI products, including anthropomorphic AI companions, remained dangerously understudied and their potential risk factors poorly understood. They noted that existing research pointed to adolescents, as well as those suffering from loneliness or mental health conditions, being at a heightened risk for harm. They even warned of suicide and self-harm as possible outcomes.

By the time the DeepMind paper was published, Sewell was already dead. A few months later, Google would hand its $2.7 billion check to Character.AI, and Shazeer would assume a top spot at DeepMind.

Character.AI is still listed as safe for kids 13 and over on the Google Play store. (Apple changed the app's iOS rating from 12-plus to 17-plus around July 2024.) We've asked Google repeatedly what steps it took to assess Character.AI's safety before becoming entangled with it. It's declined to respond.

The tech industry has "recently learned a lesson that they really lose out if they don't get products to market," remarked Przybylski. "That's where the mistake is — the 'oh, if you wait to see if something is safe and the use case to market is safe, then you're gonna get scooped for billions and billions and billions.'"

If you're a Google or Character.AI staffer and have knowledge of the relationship between the companies, you can reach us at tips@futurism.com or maggie@futurism.com.

***

The American AI landscape is deeply unregulated. If there's one bright spot, it's that the Children's Online Privacy Protection Rule (COPPA) was expanded in the final days of the Biden Administration to include more robust protections against the monetization of data collected from children under the age of 13, along with some protections for users under the age of 17. Still, the rule maintains that teens themselves can consent to the collection of their information, and on the whole the AI industry is effectively self-regulated. There are no AI-specific federal laws requiring that AI companies pass any safety tests before releasing an AI product to the public, nor any formal consensus on what those tests would even involve.

In short, the AI industry is in a high-speed race for dominance — but there's barely even a road, let alone speed limits or stop signs. Meanwhile, as platforms like Character.AI are prodding us to examine broader, sometimes philosophical questions about the nature of human intimacy and relationships in a strange new digital era, they're also changing more tangible realities around data, privacy, and the long-term consequences of engaging with various AI systems.

"Your data is going to live on. It's going to be ingested and used in this model, and likely in the next model, and the model after that," Torney told Futurism. "Or it's going to be sold and used for different training purposes." 

Can kids under 18 really be expected to understand — and reasonably consent to — the contract they enter into when they agree to the terms and conditions of Character.AI and similar platforms?

"As a society, we believe that minors who are under age 18 don't have the legal capacity to enter into certain types of contracts," Torney said, "specifically because they're not able to appreciate the long-term consequences of certain decisions."

"Is a kid checking a box saying 'yes'," he pondered, aligned with "best practices? The answer is probably not."

These concerns, Garcia warned, are made all the more urgent by the intimate kind of engagement that companion bots elicit.

"Yes, my 14-year-old put this information out there, but you have to think of the context," she told us. "He's a kid, and he's thinking that there's no way this could ever get out because it's not a real person. It's not like when you're texting a secret to a best friend and have to worry if they're going to screenshot it and share it with the whole school. He's thinking that this is a bot that is safe, and he's a child, so he's not thinking about the next extrapolation of that — that his data now is being recycled into an LLM to make it better."

Satwick Dutta, a PhD candidate and engineer at the University of Texas at Dallas, has been a vocal advocate for expanding COPPA to include robust protections for minors' data. He's working to build machine learning tools designed to help parents and educators diagnose childhood speech issues earlier, an aim that necessitates the careful collection and storage of minors' voice data, which the researcher says he and his team painstakingly work to anonymize.

"I read the [New York Times] headline a few months back, and I was so sad," Dutta, who wants people to "believe in the good of AI," reflected. "We need to have guardrails to protect not only kids, but everyone, all of us."

He highlighted the danger of releasing such an ill-defined type of product without a clear use case, especially given the additional incentives around data, and likened Character.AI's approach to its users to a scientist testing rodents in a lab.

"It was like getting rats for an experiment," Dutta remarked, "without making sure of, 'how will it impact the rats?'"

"Then the company is saying, 'we will add extra guardrails.' You should have thought about these guardrails before you deployed the product!" said the researcher, through palpable frustration. "Come on. Are you kidding me?"

***

Garcia watched her son slip away.

He stopped wanting to play basketball, the sport he once loved; at 6'3", Sewell was tall for his age, and had hoped to play Division I ball. His grades began to deteriorate, and he started getting into conflicts with teachers. He mostly just wanted to spend more time in his room, prompting Garcia to look for signs that social media might be the cause of his decline, though she found nothing. She took Sewell to multiple therapists — none of whom, according to Garcia, raised the specter of AI. Sewell's phone was equipped with parental controls, but Character.AI, Google, and Apple had all listed the app as safe for teens.

"I think that creates confusion for parents... a parent could look at this and say, okay, this is a 12-plus app. Somebody had to have gated this," said Garcia. "Some sort of process went into this to make sure it was safe for my child to download."

On its About page, Character.AI proclaims it's still "working to put our technology into the hands of billions of people" — who are seemingly still tasked with determining their own use cases for the platform's bots, while Character.AI plays reactive Whac-a-Mole when problems arise.

It feels like a fundamentally trial-and-error way of rolling out a new product to the public. Or, in other words, an experiment.

"In my mind, these two gentlemen," Garcia told us, referring to Shazeer and De Freitas, "should not have the right to keep building products for people, much less children — especially children — because you've shown us that you don't deserve that opportunity. You don't deserve the opportunity to build products for our kids."

"I'm curious to see what Google will do," the mother of three continued, "or if they'll just go, 'oh, well, they're the geniuses. They get to come back. We don't care what harm they did out there, prior to coming back.'"

More on Character.AI: Character.AI Says It's Made Huge Changes to Protect Underage Users, But It's Emailing Them to Recommend Conversations With AI Versions of School Shooters
