The European Union is reportedly weighing a ban on "nudification" apps (AI tools used to create non-consensual explicit images) as part of a broader effort to regulate technologies that threaten individual rights and public safety. The restriction could be implemented in one of two ways: by adding the practice to the EU AI Act's list of prohibited practices during the Commission's annual review, or by invoking the existing "systemic risk" provisions that govern advanced general-purpose AI models. Because recent compliance guidelines specifically identified "non-consensual intimate images" as a systemic risk, the European Commission's AI Office may now have a formal regulatory basis to challenge platforms such as X's Grok, holding developers of advanced models accountable for the harmful outputs their technology may facilitate.
Disclaimer
The Legal Wire takes all reasonable precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs ("Materials"), are accurate and complete. Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not reflect the most current laws or regulations and should not be relied upon as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials, and of the laws discussed within them, are subject to change.
