The National Institutes of Health (NIH) has issued a request for public input on strategies to mitigate the risk that controlled-access human genomic data could leak when generative AI tools are developed and shared, while still enabling responsible innovation in biomedical research. Citing privacy concerns, such as the potential for generative AI models to memorize and inadvertently disclose sensitive genomic data, the NIH has temporarily paused the sharing and retention of generative AI models trained on controlled-access genomic data. NIH seeks feedback from researchers, developers, institutions, and the public on effective privacy-preserving methods and policies, with the aim of keeping its policies aligned with advances in AI technology while upholding participant privacy protections. The consultation closes on 16 July 2025.
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs (“Materials”), are accurate and complete. Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not reflect the most current laws or regulations, and they should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.