The eSafety Commissioner has reportedly said it may push search engines and app stores to block AI services that fail to verify user ages, after a review found that more than half of the top AI platforms had not publicly disclosed compliance measures ahead of a March 9 deadline. The initiative is part of a broader effort to address concerns about AI’s impact on youth mental health, particularly access to harmful content. The Commissioner warned of fines of up to A$49.5 million for non-compliance, noting that only a minority of popular AI products have implemented age assurance systems. The crackdown follows Australia’s earlier ban on social media for teenagers over mental health concerns, and the regulator says it is prepared to act against platforms that fail to meet the new age restrictions.
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs (“Materials”), are accurate and complete. Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations. The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.
