The eSafety Commission has reported that popular AI companion chatbots, including Character.AI, Nomi, Chai, and Chub AI, are inadequately protecting Australian children from sexually explicit content and are failing to prevent the generation of child sexual exploitation and abuse material. The report highlights that these services lack meaningful age verification, do not refer users to mental health support when self-harm is discussed, and often do not monitor for harmful content. eSafety Commissioner Julie Inman Grant emphasized the risks these AI companions pose to children, who increasingly use them: 79% of surveyed children aged 10 to 17 reported using such services.

Australia's newly implemented Age-Restricted Material Codes require these chatbots to prevent access to inappropriate content and to provide crisis support; non-compliance can result in civil penalties. Following transparency notices, some companies have improved their age assurance measures, while others, such as Chub AI, have withdrawn their services from Australia. The findings reveal significant gaps in safety protocols, including insufficient trust and safety staffing and a lack of mechanisms for reporting child sexual exploitation attempts.
