August 20, 2025


Texas Attorney General Investigates AI Chatbots for Misleading Advertising and Privacy Concerns

In a significant move to safeguard consumer rights, Texas Attorney General Ken Paxton has initiated an investigation into Meta Platforms Inc. and Character.AI. The inquiry focuses on whether these companies have deceived children through their AI chatbot services, which may falsely present themselves as legitimate mental health support providers.

The investigation was sparked by concerns that these chatbots, which often adopt titles such as “psychologist” or “therapist,” could mislead users—particularly minors—into believing they are interacting with certified health professionals. In reality, the chatbots deliver algorithm-generated responses without medical oversight, potentially violating Texas consumer protection laws against deceptive trade practices and compromising user privacy.

Attorney General Paxton expressed his concerns about the possible adverse effects on vulnerable groups. “Misleading our children into trusting non-professional advice on serious matters like mental health is not only unethical but potentially dangerous,” Paxton stated. He also highlighted issues regarding privacy infringements, as user data could be exploited for targeted advertising and algorithm refinement without explicit user consent.

In response to the Civil Investigative Demands issued by the Attorney General's office, both Meta and Character.AI have defended their practices. Character.AI insists its chatbots are clearly marked as fictional and intended solely for entertainment, with disclaimers advising users not to rely on them for professional advice. Similarly, Meta has clarified that its AI chatbots are identified as non-human and are not substitutes for professional mental health services, urging users to seek real therapists when necessary.

The inquiry aligns with broader concerns about how AI technologies are marketed to minors. It reflects increasing vigilance over AI tools that assume therapeutic roles without proper safeguards, echoing wider regulatory scrutiny of technology companies.

The move also coincides with a federal investigation launched by Senator Josh Hawley into Meta's internal policies, which allegedly allowed AI chatbots to engage in inappropriate conversations with minors, a claim that Meta has rejected.

As the investigation proceeds, the outcomes could set precedents for how AI-driven services are regulated, ensuring they uphold transparency, privacy, and the well-being of users, especially children.