Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI, alleging that the companies may be engaging in deceptive trade practices by marketing their AI chatbots as mental health support tools.
In a press release issued Monday, Paxton expressed concerns that these platforms could mislead vulnerable users—particularly children—into believing they are receiving legitimate mental health care.
“By posing as sources of emotional support, AI platforms can mislead users into thinking they’re getting real therapy,” Paxton stated. “In reality, they may be receiving generic responses tailored to harvested personal data, disguised as therapeutic advice.”
Growing Scrutiny on AI Chatbots
The probe follows recent reports that Meta’s AI chatbots have engaged in inappropriate interactions with minors, including flirtatious behavior. U.S. Senator Josh Hawley (R-MO) has also launched a separate investigation into Meta over these concerns.
The Texas Attorney General’s office alleges that both Meta and Character.AI have allowed AI personas to present themselves as professional therapeutic tools “despite lacking proper medical credentials or oversight.”
- Character.AI hosts millions of user-created bots, including one called “Psychologist,” which has gained popularity among younger users.
- Meta does not officially offer therapy bots for children, but minors can still interact with its AI chatbot or with third-party personas that claim to provide mental health support.
Company Responses
Meta spokesperson Ryan Daniels told TechCrunch that the company clearly labels AI responses and includes disclaimers stating that the chatbots are not human professionals.
“Our models are designed to direct users to seek qualified medical help when appropriate,” Daniels said. However, critics argue that minors may not fully grasp—or may ignore—these warnings.
Character.AI also includes disclaimers in chats, reminding users that its bots are fictional and should not be relied upon for professional advice. A company spokesperson noted that additional warnings appear when users create bots labeled as “psychologist,” “therapist,” or “doctor.”
Privacy and Data Concerns
Paxton’s investigation also raises concerns about data privacy, pointing out that while AI chatbots claim confidentiality, their terms of service allow interactions to be logged, tracked, and potentially used for targeted advertising and AI training.
- Meta’s privacy policy states that it collects prompts and interactions to improve AI but does not explicitly mention advertising. However, given Meta’s ad-based business model, data may still be used for personalized ads.
- Character.AI’s policy confirms it logs user data, including demographics, location, and browsing behavior, and shares it with advertisers and analytics providers. The company says it is “just beginning to explore targeted ads” and claims chat content is not used for this purpose.
Child Safety and Regulatory Efforts
Both companies state their services are not designed for users under 13, but enforcement remains a challenge:
- Meta has faced criticism for failing to prevent underage sign-ups.
- Character.AI’s CEO has acknowledged that his six-year-old daughter uses the platform under supervision, raising questions about its appeal to young users.
The investigation coincides with renewed legislative efforts, including the Kids Online Safety Act (KOSA), reintroduced in May 2025. The bill aims to protect minors from data exploitation but has faced opposition from tech lobbyists, including Meta.
Next Steps
Paxton has issued civil investigative demands to Meta and Character.AI, requiring the companies to produce documents and data that his office will use to determine whether they violated Texas consumer protection laws.
TechCrunch has reached out to Meta for further details on how it monitors minors’ interactions and will update this story if new information emerges.

