
The US Federal Trade Commission (FTC) has launched an inquiry into AI chatbots that serve as digital companions, with a particular focus on risks to children and teenagers.
The agency issued orders to seven companies, including Alphabet, Meta, OpenAI, Snap, Character.AI, and Elon Musk’s xAI Corp, requesting information on how they monitor and address negative impacts from chatbots designed to simulate human relationships.
“Protecting kids online is a top priority for the FTC,” said Chairman Andrew Ferguson, emphasizing the need to balance child safety with maintaining US leadership in artificial intelligence innovation.
The inquiry targets chatbots that use generative AI to mimic human communication and emotions, often presenting themselves as friends or confidants to users. Regulators are especially concerned that children and teens may be vulnerable to forming emotional attachments to these systems.
The FTC is investigating how companies monetize user engagement, develop chatbot personalities, and measure potential harm. It also seeks details on steps taken to limit children’s access and comply with existing privacy laws protecting minors. Companies are being asked to explain how they handle personal information from user conversations and enforce age restrictions.
The commission voted unanimously to launch the study, which does not have a law enforcement purpose but could inform future regulatory action. The probe comes as AI chatbots have grown increasingly sophisticated and popular, raising questions about their psychological impact on vulnerable users, particularly young people.
The inquiry follows a high-profile case involving OpenAI. Last month, the parents of Adam Raine, a 16-year-old who died by suicide in April, filed a lawsuit alleging that ChatGPT provided him with detailed instructions on how to carry out the act. OpenAI has said it is implementing corrective measures, acknowledging that during prolonged interactions the chatbot can fail to automatically suggest contacting mental health services when users express suicidal thoughts.