Texas increases pressure on Meta and Character.ai over AI chatbots for minors
Texas is investigating Meta and Character.ai over chatbots allegedly marketed as therapy aids and over the risks they pose to minors.

Texas Attorney General Ken Paxton has launched an investigation into Meta and the startup Character.ai. At issue is the allegation that the companies marketed their AI chatbots as therapeutic tools despite lacking medical approval or professional oversight. Paxton's office cited possible "deceptive trade practices."
"By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children," Paxton said. His office has issued civil investigative demands requiring Meta and Character.ai to hand over internal records.
The move comes amid growing regulatory scrutiny. Just last week, the U.S. Senate opened its own inquiry into Meta after internal documents reportedly showed that its AI chatbot permitted "sensual" and "romantic" conversations with minors. Senator Josh Hawley, chairman of the Judiciary Subcommittee on Crime and Terrorism, announced a review of whether Meta's generative AI products facilitate abuse or otherwise endanger children.
Meta rejects the allegations, pointing to company policies that strictly prohibit any sexualization of children. According to the company, the documents in question were erroneous and have since been removed. Meta also emphasizes that it clearly labels its AIs and informs users of their limitations.
Meta CEO Mark Zuckerberg has himself suggested a therapeutic role for the chatbots: "I think everyone will have an AI system when no human therapist is available," he said this spring. Meta is investing billions in its own language models such as Llama and is integrating its Meta AI chatbot ever more deeply into its social media apps.
Character.ai offers dozens of user-generated "therapy bots," including a "psychologist" that has already recorded more than 200 million interactions. However, the startup faces lawsuits alleging that children have suffered real harm from using the platform. The company points to clear disclaimers labeling all bots as fictional and denies analyzing chat content for targeted advertising.
The confrontation illustrates how quickly generative AI has evolved from a beacon of innovation to a regulatory risk for Big Tech.