The eSafety Commissioner has issued an advisory on the risks that AI chatbots and companions pose, particularly to children and young people. The advisory (1) highlights concerns about AI chatbots engaging children in unregulated discussions of sensitive topics such as sex, self-harm, and suicide; (2) calls on technology companies to adopt Safety by Design principles and comply with Australian online safety laws; (3) notes that the online sexualisation of children and their exposure to restricted content, including pornography and other harmful material, are illegal; and (4) reaffirms that the eSafety Commissioner will enforce compliance under the Online Safety Act, using its powers relating to industry codes, standards, and the Basic Online Safety Expectations.
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs (“Materials”), are accurate and complete. Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations. The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.