On 25 July 2024, the Government released a cabinet paper, “Approach to work on Artificial Intelligence”, which outlines a proposed strategic approach to artificial intelligence (AI) in Aotearoa New Zealand.

This article provides an overview of New Zealand’s proposed regulatory approach and what it might mean for businesses, together with a snapshot of the different regulatory approaches taken by some of our key trading partners.

What you need to know

Cabinet has placed its focus on the following five key domains:

  • Setting a strategic approach to AI;
  • Enabling safe AI innovation in the public service;
  • Harnessing AI in the New Zealand economy (with the Ministry of Business, Innovation and Employment (MBIE) tasked with formulating AI guidance for firms to utilise);
  • Prioritising engagement on international rules and norms; and
  • Coordinating with work on national security.

The cabinet paper advocates for New Zealand taking a “light touch, proportionate and risk-based” approach to regulating AI, utilising existing legal frameworks rather than introducing bespoke AI laws.

This regulatory approach contrasts with the approaches taken by many overseas jurisdictions, which have passed, or are in the process of passing, specialised laws to regulate AI technologies. These include Australia, where in early September 2024 the Australian Government opened a public consultation on its proposal for mandatory guardrails to be adopted by Australian organisations developing or deploying high-risk AI systems, together with a voluntary safety standard that can be applied in any risk setting.

New Zealand’s regulatory approach

New Zealand does not currently have specific laws that regulate the use or deployment of AI. Instead, AI is regulated through existing technology-neutral laws covering the likes of privacy, consumer protection, intellectual property and human rights.

To date, no AI-specific legislation has been tabled in New Zealand, and the cabinet paper confirms that a general-purpose AI law should not be expected any time soon. The cabinet paper expresses concern that introducing a broad AI law could harm productivity and innovation, and instead suggests that:

“Regulatory intervention should only be considered to unlock innovation or address acute risks. If regulatory intervention is needed, we should leverage existing regulatory mechanisms, preference agile options, and draw on international actions, rather than developing a standalone AI Act.”

It therefore appears that Aotearoa will continue to rely on existing laws that are updated to address AI harms as and when needed. A range of regulatory options may be used to address risks, such as voluntary guidance, industry codes, technical standards, and audit requirements. In fact, we are already seeing regulators and industry bodies in New Zealand make use of these agile methods to help fill perceived gaps in our legislative approach. The Office of the Privacy Commissioner (OPC) has been particularly active in this regard, releasing guidance on privacy and AI and an exposure draft of a Biometric Code of Practice. You can read more about the proposed Code in our FYI here.

Emerging international norms

The cabinet paper notes that New Zealand will need to stay connected to key international discussions that are establishing global norms for AI and AI safety, and suggests that New Zealand should promote the following OECD AI Principles as a key direction for our approach to responsible AI:

  • Inclusive growth, sustainable development, and wellbeing;
  • Respect for the rule of law, human rights, and democratic values, including fairness and privacy;
  • Transparency and explainability;
  • Robustness, security, and safety; and
  • Accountability.

Another relevant international instrument is the Bletchley Declaration on AI Safety, which New Zealand may yet sign up to. The Declaration has been signed by 29 countries (including Australia, the UK, Canada and the US, as well as a number of European countries). It records a collective commitment to identifying AI safety risks of shared concern and to building a shared scientific understanding of those risks. The signatories also agreed to develop risk-based policies across their respective countries to ensure safety in light of those risks, and to collaborate across jurisdictions as appropriate.

EU

New Zealand’s regulatory approach can be contrasted with that of the European Union, whose world-first horizontal, standalone law governing the development and deployment of AI came into force on 1 August 2024. The EU AI Act takes a risk-based approach to regulating AI, with some AI applications banned altogether and other ‘high-risk’ AI subject to a range of stringent compliance obligations. See our article on the AI Act here for more information.

Australia

Similar to New Zealand, Australia does not currently have any specific laws that directly regulate AI and is instead relying on existing legislation with supporting voluntary frameworks and guiding principles. One of these frameworks is the Federal Government’s eight voluntary principles to promote safe, secure and reliable AI systems. 

In early September 2024 the Australian Federal Government released a proposal for 10 guardrails to be adopted by organisations developing or deploying AI systems. The guardrails are largely focussed on testing, transparency and accountability, and would be mandatory where AI is ‘high-risk’. High-risk AI would include highly capable general purpose AI models, as well as AI where the known or foreseeable use case poses a significant risk of adverse consequences. The consultation outlines several forms the mandatory guardrails might take, which may include a new Australian AI Act.

The consultation is open until 4 October 2024. More information on the proposals can be found here.

UK

New Zealand’s proposed regulatory approach largely aligns with the UK’s current non-statutory regulatory framework for AI. In March 2023, the UK Government issued a White Paper outlining five key principles for AI regulation (being safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress). The principles are not expected to become legislation; instead, they will be issued on a non-statutory basis and implemented by existing regulators. Following an initial period of implementation, a statutory duty on regulators to consider the principles may be introduced.

US

The US does not currently have comprehensive federal legislation or regulations governing the development or deployment of AI. However, various frameworks and guidelines exist, such as President Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, and the Blueprint for an AI Bill of Rights, which provides (non-binding) guidelines to enable the implementation of safe and secure AI systems. A high-profile US Senate working group has also released a roadmap for AI that aims to bridge the gap between the US and EU approaches.

There are also proposed federal laws in the pipeline that target specific AI harms, including the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE). If passed into law, DEFIANCE would give individuals the right to sue for damages caused by non-consensual, sexually explicit deepfakes generated by AI. DEFIANCE was introduced in the wake of AI-generated explicit images of pop star Taylor Swift being spread across the internet.

There are also a number of legislative initiatives at a state level, including for example in the states of Colorado and Connecticut.

China

China has introduced several pieces of legislation targeted at specific technology systems, including algorithms, deepfakes and generative AI. In June 2023, China’s State Council announced that a comprehensive AI law is on the legislative agenda, indicating that its approach to AI regulation may come to resemble the EU’s.

Singapore

The Singapore Government released a voluntary Model AI Governance Framework in 2019 (with a second edition in 2020) to guide private sector organisations on the ethical and governance issues to consider when deploying AI. The Government has also developed AI Verify, a toolkit that supports the testing and oversight of AI systems, allowing developers and owners to demonstrate compliance with the Model Framework through standardised testing.

Concluding remarks

The news that broad AI regulation is not on the horizon in New Zealand will no doubt be welcomed by the technology community, and will help promote New Zealand internationally as a pro-innovation jurisdiction. However, it will do little to address growing public concern that our current laws are not well suited to addressing some of the novel risks presented by AI. It also raises questions about how Aotearoa will bridge the growing gap between our regulatory approach and those of our closest trading partners - particularly the EU and Australia (if the latter pushes forward with its AI legislative agenda).

As the differing regulatory approaches taken globally illustrate, there is no ‘one size fits all’ approach when it comes to AI regulation, though some global norms (eg around AI safety) are already starting to emerge.  

Organisations would be well advised to keep an eye on global developments and emerging best practices.

Get in touch

If you have any questions, please get in touch with one of our experts.

Special thanks to Bridget Lautour for her assistance in writing this article.
