X, formerly known as Twitter, announced a significant update to its privacy policy, now permitting third-party collaborators to train their AI models using user data unless users opt out. This move signals X’s interest in licensing data to AI firms, following trends seen with Reddit and media companies.
The updated policy, which takes effect on November 15, states:
“Third-party collaborators. Depending on your settings, or if you decide to share your data, we may share or disclose your information with third parties. If you do not opt out, in some instances the recipients of the information may use it for their own independent purposes in addition to those stated in X’s Privacy Policy, including, for example, to train their artificial intelligence models, whether generative or otherwise.”
However, X’s current settings page does not yet specify where users can disable data-sharing with these third parties. For now, users can toggle data-sharing with xAI’s Grok AI and other “business partners” under the “Privacy and Safety” section, but the policy implies further options will become available once it takes effect.
The revision also overhauls data retention policies. Previously, X stated it would keep user information for a maximum of 18 months. Now, it claims to retain data for varying periods based on service needs and legal obligations, explaining:
“We keep your profile information and content for the duration of your account.”
X also warns that even deleted content may persist on search engines and external platforms. The update follows earlier controversy in which Musk’s xAI used platform data to train Grok, drawing regulatory scrutiny in the EU.
These changes reflect the growing tension between data privacy and the demand for AI development, underscoring the importance of clear user controls and regulatory oversight.