
The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 – policy reform to strengthen online safety in Australia


Published on October 18, 2024 by Michael Swan

The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (the Act) has been introduced by the Australian government as part of broader policy reforms aimed at strengthening online safety. The Act commenced on 3 September 2024 and forms part of the federal government’s response to gender-based violence.

The Act targets the creation and non-consensual dissemination of sexually explicit material online, including material created or altered using generative AI. The potential harms associated with deepfakes are growing due to the ease with which they can be produced and disseminated, as well as their increasing realism. While the Act addresses an existing gap in federal laws regarding image-based abuse, we must consider the practical application of the Act and how the government will attempt to deal with deepfakes that fall outside its scope.

Background

The non-consensual distribution of intimate images is already criminalised in each State and Territory, with many offences broad enough to include synthetic imagery, though this is largely untested. At the federal level, the Criminal Code Act 1995 (Cth) prohibited the non-consensual sharing of private sexual material through an aggravated offence of using a carriage service to menace, harass or cause offence. However, the effectiveness of the Criminal Code in responding to image-based abuse was unclear, because the aggravated offence required a primary offence under section 474.17 to be established and because of questions about its applicability to deepfakes and synthetic images.

Criminal Code Amendment (Deepfake Sexual Material) Act 2024

The Act replaces the existing aggravated offence with a new offence that:

  • is standalone in nature
  • has its own aggravated offences
  • expressly contemplates material created or altered using technology

The new offence targets the sharing of sexually explicit material online without consent, covering images and videos but not audio-only media. It focuses on material depicting adults, as child-related content is already covered by other laws. The offence applies where the person transmitting the material knows that the person depicted does not consent, or is reckless as to whether they consent, with a maximum penalty of six years’ imprisonment.

Two aggravated offences accompany the new primary offence: one for repeat offenders with prior civil penalties under the Online Safety Act 2021 (Cth), and another for those who created or altered the offending material. It is important to note that the creation or alteration of non-consensual intimate material alone is not criminalised unless a carriage service is used to transmit it.

Realism and Deepfakes

The new offences criminalise the distribution of material depicting non-consenting individuals.

The Act clarifies that the form of the material (unaltered, created, or altered using technology) is irrelevant. It includes images or videos edited or entirely created using technology (such as deepfakes made using generative AI) that realistically but falsely depict a person. The term “appears to depict” is intended to cover material closely resembling an individual, ensuring the offence applies when an image is an obvious representation of a real person.

Despite this, uncertainties remain about the required realism or believability of the imagery, the identification of the person depicted, and the context of the transmission. Realism is not the sole determinant of harm, as even less realistic but offensive images can impact a person’s privacy and dignity.

What are the exceptions?

The Act includes exceptions to the primary offence if a reasonable person would consider transmitting the material acceptable. Factors include the nature and content of the material, transmission circumstances, the depicted person’s vulnerability, privacy impact, and the relationship between the parties.

This objective test is intended to reflect community standards and safeguard vulnerable individuals but raises questions about its application, especially concerning an alleged victim’s previous sharing of intimate images.

Deepfakes that are out of scope

The Act further underscores the unacceptability of such offending at a federal level, complementing civil schemes under the Online Safety Act and State and Territory criminal laws. While the Act addresses sexually explicit deepfakes shared without consent, it does not cover other harmful deepfakes designed to humiliate, deceive, or mislead, whether for financial, political, or other purposes.

In political contexts, deepfakes pose significant risks, as highlighted by the Senate Select Committee on Adopting Artificial Intelligence (AI). These risks include the manipulation of information and of voters, and suggested responses range from increased public awareness to potential bans on the use of AI in election campaigning.

While the new laws address specific deepfake harms, they leave gaps concerning non-sexually explicit deepfakes, which will necessitate further regulatory measures to balance legitimate expression against protection from harmful speech. The Act, however, is a step in the right direction.

Please note that this article does not constitute legal advice. If you are seeking professional advice on any legal matters, you can contact Carroll & O’Dea Lawyers on 1800 059 278 or via our Contact Page and one of our lawyers will be able to assist you.
