15/10/2024
Workplace use of GenAI breaches privacy laws
A recent investigation by the Victorian privacy regulator has resulted in a ban on the use of ChatGPT in a government department. The findings highlight that while GenAI tools offer significant benefits in the workplace, they also carry real risks if used without robust policies, training and education.
ChatGPT used to draft a child protection report
The investigation by the Office of the Victorian Information Commissioner (OVIC) centred on a Protection Application Report (PA Report) prepared by a Child Protection worker, employed by the Department of Families, Fairness and Housing (DFFH). The report was submitted to the Victorian Children’s Court in proceedings against parents charged with sexual offending.
After the report was submitted, a DFFH legal representative reviewed it and noted that its language was overly sophisticated, complex and descriptive. More concerningly, the report included inaccurate information. For example, there were inconsistent references to a child’s doll:
- In the first instance, the PA Report referred to the child’s father using the doll for sexual purposes.
- The PA Report later referred to the same doll as an example of the parents providing the child with “age appropriate toys”, used to underscore their efforts to nurture the child’s development needs.
As a result, the report inaccurately presented what should have been an indicator of risk to the child as evidence of the parents’ positive caregiving capacity, downplaying the severity of the actual or potential harm to the child.
The DFFH legal representative reported their concerns to the Department’s Child Protection division, which investigated and concluded that the report’s author had used the free version of ChatGPT to draft it. DFFH withdrew the PA Report from the Court and notified OVIC of a potential privacy breach.
What was the privacy concern?
The concerns were twofold.
The first related to the release of sensitive personal information to ChatGPT. When information is entered into the free version of ChatGPT, it is disclosed to the tool’s owner, OpenAI (an offshore company). OpenAI can then decide how to use that information, for example to train ChatGPT or to share it with third parties.[1]
In this case, a significant amount of personal and sensitive information was entered into ChatGPT without authorisation. As a result, it was disclosed to OpenAI and passed outside the Department’s control.
The second concern related to the use of ChatGPT-generated content in the PA Report. That content contained inaccurate personal information, which in this case downplayed the risks to the child.
OVIC finds breach of privacy
OVIC launched an investigation under the Privacy and Data Protection Act 2014 (Vic), which includes provisions similar to New Zealand’s Privacy Act in relation to the collection, use and handling of personal information. OVIC concluded that, in using ChatGPT to draft the PA Report, DFFH had breached the following Information Privacy Principles (IPPs):
- IPP 3.1 - by failing to take reasonable steps to mitigate risks that ChatGPT use would result in the collection, use and disclosure of inaccurate personal information; and
- IPP 4.1 - by failing to take reasonable steps to mitigate risks that ChatGPT use would result in the unauthorised disclosure of personal information.
OVIC was critical of the policies and protections DFFH had in place to manage the risks of GenAI tools, finding they were not sufficient to ensure compliance with the IPPs. It issued Compliance Notices requiring DFFH, amongst other things, to block Child Protection staff from accessing various GenAI tools, including ChatGPT, and to regularly scan for similar GenAI tools and block Child Protection staff’s access to them.
What does this mean for me?
Under New Zealand’s Privacy Act 2020, the same or a similar outcome would likely have resulted:
- The IPPs breached by using ChatGPT have the following equivalents under New Zealand’s Privacy Act:
  - IPP 8, which provides that agencies must not use or disclose personal information without taking reasonable steps to ensure that the information is accurate, up to date, complete, relevant, and not misleading.
  - IPP 5, which provides that agencies must ensure that personal information is protected by reasonable security safeguards against loss, unauthorised access, modification or disclosure, and other misuse.
- Like OVIC, the Office of the Privacy Commissioner has the power to issue Compliance Notices.
The key takeaway for New Zealand organisations is that robust, well thought-out policies, education and training on GenAI use are essential. Lip-service is not enough, as OVIC’s criticisms of the measures in place at the time the PA Report was drafted make clear:
- Education sessions on the risks associated with GenAI were directed only at managers and leaders, and were not sufficient to educate the general workforce about how GenAI tools work and the privacy risks associated with them.
- There were no specific departmental rules in place about when and how these tools should or should not be used.
- There were no technical controls to restrict access to tools like ChatGPT.
OVIC was similarly underwhelmed by the additional measures adopted by DFFH after it reported the privacy breach:
- The directive style of the GenAI “dos and don’ts” issued to staff was watered down by the overarching advice that staff must “conduct their own risk assessment which includes balancing any potential benefit and risks of using an GenAI tool”.
- The guidance lacked real-life examples or case studies of appropriate and inappropriate use of GenAI tools.
New Zealand organisations can draw high-level guidance on using GenAI in accordance with the Privacy Act from the OPC’s publication Artificial Intelligence and the IPPs and from the Government’s Generative AI Advice for the Public Sector.
Simpson Grierson’s privacy and data protection experts are here to assist organisations with policies and procedures concerning the use of GenAI in the workplace.
If you would like advice on these issues please get in touch. We would be delighted to assist.
[1] Paid versions of ChatGPT are also available, which offer greater privacy protections over the data users input.