AI safety institutes test Google and Mistral AI models for language, cultural risks

The AI safety institutes of the UK, Japan, Singapore, and South Korea are reportedly conducting a joint assessment of AI models developed by Google and Mistral AI, evaluating their security against vulnerabilities related to linguistic and cultural differences. Specifically, the institutes are testing (1) whether the models, which are trained primarily on English-language data and designed to generate outputs mainly in English, may be vulnerable to various risks as a result; and (2) how well the models hold up against cyber threats originating from, exploiting, or connected to non-English languages or non-Western cultural contexts.
