Victoria’s Department of Families, Fairness and Housing (DFFH) has been directed to ban the use of generative AI tools after a child protection worker used ChatGPT to draft a report submitted to the Children’s Court. The report, which contained sensitive information about a child at risk, was found to be inaccurate and in breach of the state’s privacy rules, according to the Office of the Victorian Information Commissioner (OVIC).
The state’s information commissioner found that the use of ChatGPT in this case had “downplayed the risks to the child,” although it did not alter the final outcome of the case. Even so, the misuse of AI in drafting child protection reports has raised alarms about the harm such technologies can cause when used without proper oversight.
AI’s Role in a Sensitive Case
The report submitted to the court was meant to contain the child protection worker’s own assessment of the risks and needs of a young child whose parents had been charged with sexual offenses, although the offenses did not involve the child. Instead, the worker used ChatGPT to generate parts of the report, entering “personal and sensitive” information into the AI tool, an act that violated privacy regulations and potentially jeopardized the child’s welfare.
“OpenAI now holds that information and can determine how it is further used and disclosed,” OVIC stated in its investigation. Disclosing sensitive information to OpenAI, an overseas entity beyond the department’s control, constituted a serious breach of privacy.
The investigation also identified several indicators of ChatGPT’s involvement, from inaccuracies in the personal details of the case to language and sentence structure that deviated from employee training and child protection guidelines.
Broader Use of ChatGPT in Child Protection
This was not an isolated incident. OVIC’s investigation found that the case worker may have used ChatGPT in as many as 100 other child protection cases over the course of a year. Furthermore, during the second half of 2023, nearly 900 DFFH employees, about 13% of the department’s workforce, accessed ChatGPT without any specific training or guidance on the ethical use of generative AI.
The department claimed that AI use in sensitive work was limited, but OVIC rejected this assertion, finding that the lack of training and oversight around generative AI tools like ChatGPT left room for significant privacy violations.
Ban and New Compliance Measures
In response to the investigation, OVIC has issued a compliance notice requiring DFFH to block access to generative AI tools across its internal systems by November 5. The banned tools include ChatGPT, Claude, Meta AI, Grammarly, and Microsoft 365 Copilot, among others.
While DFFH has accepted the findings and committed to implementing the required changes, the case has exposed a gap in how generative AI technologies are managed in government departments that handle sensitive information. The case worker involved is no longer employed by the department.
The department must now ensure that AI use in sensitive work environments is properly controlled and aligned with privacy standards. As OVIC emphasized, the incident underscores the need for rigorous oversight and training around AI use in the public service, especially in fields as sensitive as child protection.