
Avoiding Legal Landmines in AI Usage by your staff

Published on February 3, 2025 by Yue Lucy Han and Selwyn Black

Imagine handing over sensitive documents to a stranger you found online without proper checking, asking them to write a report for which they take no responsibility, and then issuing that report as your own without a copyright licence or thorough review. You wouldn’t do that, would you? Using generative AI can amount to the same thing.

Before allowing employees to continue to freely use generative AI (like ChatGPT), it is crucial to consider the potential risks and take appropriate measures to mitigate them.

Firstly, it is challenging to monitor the information inserted or uploaded into generative AI. As the volume of data increases, ensuring that all personal or confidential information is removed becomes increasingly difficult. This raises the risk of inadvertent disclosure, and of potential re-identification once the generative AI has accumulated enough data points. Free versions of some generative AI tools also collect user data to train their models, which raises concerns about losing control over your data and how it is disposed of.

Moreover, the outputs of a generative AI may contain “hallucinations” that require significant human oversight. In addition, a generative AI may generate personal information which your organisation then collects, increasing your data protection obligations. Even false information may still be personal information about an individual, further complicating privacy compliance.

To safeguard your organisation, we strongly recommend pausing current usage of non-enterprise-specific generative AI until you have taken the following steps:

  1. Define: Clearly define the distinct use cases or situations in which your employees may wish to use a generative AI. This will help you understand the risks involved in using an AI product. For example, using a generative AI to draft a thank-you email or a job ad may be lower risk than using it to generate reports for business decision-making.
  2. Due Diligence: Conduct due diligence on commercially available AI solutions for the identified use case. The Office of the Australian Information Commissioner (OAIC) recently released guidance on the privacy considerations when evaluating commercially available AI solutions: Guidance on privacy and developing and training generative AI models | OAIC. Some enterprise-specific solutions may be contracted to draw on, and feed data to, only your enterprise.
  3. Privacy Impact Assessment: Conduct a privacy impact assessment for the chosen AI solution and the specific use case.
  4. AI Policies: Develop and adopt an emerging technology policy, a general generative AI policy, and a specific generative AI policy for high-risk teams.
  5. AI Training Sessions: Run training sessions on the appropriate usage and the risks associated with Generative AI.
  6. Monitor: Implement technical controls to manage and monitor access to generative AI solutions.

We can provide quotes for legal advice on several of these steps, as you look to balance productivity and risk.

We look forward to working with organisations to help them leverage the power of AI and avoid legal traps.

Disclosure and important note: This article is based on our own legal research and thinking. Some of its content has been drafted with the assistance of artificial intelligence. The authors have checked and approved this article, including the AI generated content, for publication.

Please note that this article does not constitute legal advice. If you are seeking professional advice on any legal matters, you can contact Carroll & O’Dea Lawyers on 1800 059 278 or via our Contact Page and one of our lawyers will be able to assist you.
