Understanding Artificial Intelligence Regulation

AI presents an unprecedented opportunity for the Australian economy. The Tech Council has estimated that generative AI alone could contribute $45-$115 billion per year to the country's economy by 2030. Innovation in this space is rapid, but in most countries the legal framework supporting AI ethics is still lacking.

Understanding how artificial intelligence and the law interact requires a solid grasp of the technology itself, the risks it carries, and the laws that govern its use.

With deep expertise at the intersection of AI, data science, and regulatory compliance, AI Consulting Group can help navigate evolving AI laws. We support organisations with everything from strategy and implementation to artificial intelligence regulation, governance and risk assessments, offering clarity in what can often be a complex and rapidly shifting space.

We have a skilled team with specialised capabilities and knowledge of AI regulation in Australia, and we’re proud to serve customers in Australia and abroad—from some of the largest global clients to smaller grassroots businesses.

The Artificial Intelligence Act: Different rules for different risk levels

In April 2021, the European Commission introduced the European Union's first proposed AI legislation, setting out a risk-based framework for regulating the technology.

This was a worldwide first for AI law. Under this approach, AI systems are assessed and categorised based on the level of risk they pose to users. The higher the risk, the stricter the compliance obligations that apply.

Unacceptable

Some types of AI are outright banned under the EU’s proposed laws — especially those that could seriously impact people’s rights or wellbeing.

AI that manipulates human behaviour, like voice-activated toys that might encourage risky behaviour in children, won’t be allowed. The same goes for ‘social scoring’, in which AI is used to rank people based on behaviour, income, or background.

There are also strict bans on using AI for biometric identification or categorisation, especially in real-time and remote situations like facial recognition in public places.

That said, there are a few exceptions for law enforcement. In very serious cases, authorities may be permitted to use real-time biometric identification—but only under strict conditions.

Post-remote biometric identification, in which the analysis happens after the fact, may also be used to investigate major crimes, but only with court approval.

High

AI systems that could impact people’s safety or fundamental rights are considered high-risk under the EU’s proposed rules—and those systems will face much tighter oversight. These high-risk systems fall into two main groups.

First, there are AI systems that are built into products already covered by EU product safety laws—products like toys, cars, medical devices, lifts, and aviation equipment.

The second category covers specific areas where AI could seriously impact people's lives.

This category includes managing critical infrastructure, education and job recruitment, access to public services or social benefits, law enforcement, immigration, and legal decision-making. Any AI systems in these areas will need to be registered in an official EU database.

To keep things transparent, all high-risk AI systems will be carefully checked both before they hit the market and during their use over time. And importantly, people will have the right to raise concerns or complaints about these systems with the appropriate national authorities—so there’s a clear path for accountability.

Medium

In between the high-risk and low-risk categories, there’s what’s generally referred to as medium-risk AI. These are AI systems that don’t pose serious threats to safety or fundamental rights, but still carry some potential for harm or misuse.

Medium risk covers technology like AI-powered chatbots, productivity tools, or recommendation engines—systems that interact with users or influence decisions, but don’t directly affect health, finances, or legal rights. While they’re not banned or heavily restricted, they still need to follow some transparency rules.

For example, if you're interacting with an AI chatbot, it should be made clear to you that you're not talking to a human. And if an AI generates deepfakes or other synthetic content, that content should be labelled as such.

This gives users enough information to make educated choices, even if the risk is relatively low.
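
To make the chatbot disclosure rule concrete, here is a minimal sketch of a disclosure wrapper, assuming a hypothetical generate_reply() model call; the notice wording is our own illustration, not text mandated by the Act:

    # Minimal sketch of an AI-disclosure wrapper for a chatbot.
    # The disclosure wording and generate_reply() stub are illustrative
    # assumptions, not requirements drawn from the EU AI Act.

    AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

    def generate_reply(user_message: str) -> str:
        # Stand-in for a real model call (hypothetical).
        return f"Here is some information about: {user_message}"

    def respond(user_message: str, first_turn: bool) -> str:
        """Prepend an AI disclosure on the first turn of a conversation."""
        reply = generate_reply(user_message)
        return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

    print(respond("What are your opening hours?", first_turn=True))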

Transparency requirements

According to the proposed EU legislation, generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements.

These transparency requirements are based on EU copyright law, and include:

  • disclosing when content has been generated by AI,
  • designing the model to prevent the generation of illegal content,
  • and publishing summaries of copyrighted data used for training.
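
One hedged way to picture the first of these requirements is a machine-readable provenance record attached to every generated artefact. The field names below are invented for illustration; a real deployment would more likely follow an emerging provenance standard such as C2PA content credentials:

    # Illustrative only: a simple provenance record marking content as
    # AI-generated. The schema is an assumption, not a standard.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class ProvenanceLabel:
        ai_generated: bool
        model_name: str     # the generative model used (hypothetical field)
        generated_at: str   # ISO 8601 timestamp

    def label_output(text: str, model_name: str) -> str:
        """Bundle generated text with an AI-generated disclosure label."""
        label = ProvenanceLabel(
            ai_generated=True,
            model_name=model_name,
            generated_at=datetime.now(timezone.utc).isoformat(),
        )
        return json.dumps({"content": text, "provenance": asdict(label)})

    print(label_output("A short AI-written summary...", "example-model"))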

Other laws impacting AI use

While there is movement in this space, it is still early days; as Australian lawmakers build their understanding of how AI can and will be used, dedicated legislation is likely to follow.

Currently, the way data is handled, including by AI tools, is still governed by the Privacy Act 1988.

Here's a rough timeline of developments so far:

  • 2019: AI Ethics Principles introduced
    The Australian Government publishes voluntary AI Ethics Principles to encourage responsible and safe use of AI.
  • June 2023: Public consultation launched
    The Department of Industry releases the Safe and Responsible AI in Australia discussion paper, calling for submissions on AI risks, regulatory gaps, and how to shape future laws.
  • January 2024: Interim response published
    The Government acknowledges that existing laws don't fully address high-risk AI use, and signals intent to move toward a risk-based framework (similar to the EU's AI Act) with mandatory safeguards.
  • September 2024: Voluntary AI Safety Standard
    A Voluntary AI Safety Standard is introduced with immediate effect, providing practical guidance for businesses whose AI use is high-risk, so they can start implementing best practices ahead of formal Australian AI regulation.
  • Forecast: Regulations come into effect
    If adopted, new AI regulations may roll out, especially targeting high-risk applications like facial recognition, automated decision-making, and biometric data use.

Core issues that the AI regulations address

We can safely expect that future AI regulation in Australia will focus on key areas like transparency in decision-making, bias and fairness in automated systems, and accountability when businesses rely on AI-driven outcomes.

As AI becomes more integrated into everyday operations—from hiring processes to customer service and financial forecasting—it's crucial to ensure these tools are trustworthy. Certification and ongoing evaluation of AI systems will likely become a standard requirement, with a focus on data protection, fairness and equity, and transparency.

Data Protection:

AI excels at processing and distilling large amounts of data, but it's still essential that confidential and personal data be protected.

AI regulation laws are expected to reinforce strict data protection standards, ensuring that any information processed or analysed by AI is handled lawfully, and with informed consent.

Any Australian AI policy will likely align closely with existing privacy laws, like the Privacy Act 1988.
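
As a loose illustration of what privacy-conscious handling can involve in practice, the sketch below strips two obvious identifiers (email addresses and Australian-style phone numbers) from text before it reaches an AI tool. The patterns are deliberately simplistic assumptions; real compliance work relies on proper de-identification tooling and legal review:

    # Deliberately simplistic sketch of redacting personal data before
    # text reaches an AI tool. These regexes are illustrative assumptions
    # and will miss many real-world identifiers.
    import re

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}")
    PHONE_RE = re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b")

    def redact_pii(text: str) -> str:
        """Replace obvious emails and phone numbers with placeholders."""
        text = EMAIL_RE.sub("[EMAIL]", text)
        return PHONE_RE.sub("[PHONE]", text)

    sample = "Call Jane on 0412 345 678 or email jane@example.com."
    print(redact_pii(sample))
    # -> "Call Jane on [PHONE] or email [EMAIL]."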

Fairness and Equity:

AI tools must not reinforce existing biases or create unfair outcomes—this is especially important in high-stakes fields like law.

AI regulation aims to ensure that these systems are trained on diverse, representative datasets and are regularly audited to prevent discrimination.
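
To make the idea of a regular bias audit concrete, here is a minimal sketch computing the demographic parity gap, one common fairness metric, over a set of automated decisions. The group labels, sample data, and 0.1 alert threshold are arbitrary assumptions for illustration:

    # Minimal bias-audit sketch: demographic parity gap between groups.
    # The sample data and 0.1 threshold are illustrative assumptions.
    from collections import defaultdict

    def selection_rates(decisions):
        """Rate of positive outcomes per group, from (group, approved) pairs."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, outcome in decisions:
            totals[group] += 1
            approved[group] += outcome
        return {g: approved[g] / totals[g] for g in totals}

    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    print(f"Selection rates: {rates}, parity gap: {gap:.2f}")
    if gap > 0.1:  # arbitrary audit threshold for this sketch
        print("Gap exceeds threshold: flag the system for human review.")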

Transparency:

One of the most important expected principles of Australian AI law is transparency. Future regulation is likely to require clear disclosure of when and how AI tools have been used, along with explainable outputs so professionals can interpret AI-generated recommendations with confidence.

Key compliance requirements

  • Risk assessment and classification: AI systems must be assessed and classified based on the level of risk they pose (see the sketch after this list).
  • Transparency and disclosure: Users must be informed when they're interacting with or affected by AI.
  • Data privacy and security: AI tools must comply with data protection laws, ensuring all personal and sensitive data is securely handled and lawfully processed.
  • Bias monitoring and fairness testing: Regular testing must be conducted to identify and mitigate bias in algorithms, especially those influencing legal outcomes.
  • Human oversight: AI systems must allow for meaningful human oversight—AI cannot replace professional judgment and should support, not substitute, human expertise.
  • Explainability: AI decisions and recommendations must be explainable and interpretable by professionals, not just technical developers.
  • Accuracy and performance monitoring: Ongoing monitoring is required to ensure that AI continues to perform accurately and effectively over time.
  • Accountability and liability: Clear lines of accountability must be established, both for the developers of AI systems and the organisations that rely on them.
  • Registration and documentation: High-risk legal AI tools may need to be registered in a public or regulatory database, with detailed documentation available on how they function.
  • User training and education: Organisations using AI tools must provide adequate training so employees understand the capabilities, limitations, and ethical use of these technologies.
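
The sketch below shows how the first-pass risk triage flagged in the first item might look in code. The four tiers mirror the categories discussed earlier, but the trigger conditions are simplified assumptions, not the AI Act's actual legal tests:

    # Simplified, illustrative triage of an AI system into EU-style risk
    # tiers. Trigger conditions are rough assumptions, not legal tests.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        MEDIUM = "medium"
        LOW = "low"

    def classify(system: dict) -> RiskTier:
        """First-pass risk triage from coarse system attributes."""
        if system.get("social_scoring") or system.get("manipulates_behaviour"):
            return RiskTier.UNACCEPTABLE
        if system.get("domain") in {"critical_infrastructure", "recruitment",
                                    "law_enforcement", "credit", "education"}:
            return RiskTier.HIGH
        if system.get("interacts_with_users"):  # e.g. chatbots
            return RiskTier.MEDIUM
        return RiskTier.LOW

    print(classify({"domain": "recruitment"}))       # RiskTier.HIGH
    print(classify({"interacts_with_users": True}))  # RiskTier.MEDIUM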

AI is evolving rapidly, and so are the legal and ethical questions that come with it. At AI Consulting Group, we don’t just deliver AI solutions; we help organisations understand the laws, risks, and responsibilities tied to deploying artificial intelligence.

Whether you’re exploring automation, data analysis, or generative AI, our team ensures your systems are not only powerful and effective but also aligned with current and emerging legal standards.

We work alongside our clients to design and implement scalable AI technologies with compliance and transparency in mind from day one, so that when a legal framework is inevitably introduced, our clients are prepared.

FAQs

1. Does Australia have AI regulation?

No, not as of April 2025, though it is likely coming soon. Currently, Australia has published an interim response acknowledging the gaps in legislation surrounding AI, specifically concerning high-risk areas like healthcare, law enforcement, and automated decision-making.

This was followed by a Voluntary AI Safety Standard, which was introduced in September 2024. While not law, the Voluntary AI Safety Standard gives a fair indication of what Australian AI legislation might look like in the future.

The standard sets out principles around safety, accountability, transparency, and fairness, and encourages users to act responsibly while formal laws are still in development.

For now, Australian AI legislation is under active development. Formal regulation will likely be set in stone within the next few years.

2. Why is AI regulation important?

AI regulation ensures the safe and effective implementation of AI. In most cases, this emphasises transparency, data protection, and fairness and equity.

AI can be a highly useful tool, but the way we design and deploy it matters. Without clear guidelines, AI systems can unintentionally cause harm, especially in legal spaces—through biased decision-making, privacy breaches, or lack of accountability.

Regulation helps build public trust, encourages responsible innovation, and ensures that AI benefits everyone.

3. Who regulates AI?

Currently, AI regulation is handled differently across the world, depending on the country or region. In Australia, there’s no single AI regulator yet, but the government is actively exploring how to oversee AI more effectively.

For now, responsibility is shared across various existing bodies, including the Office of the Australian Information Commissioner (OAIC) for data privacy, and industry-specific regulators (like ASIC for financial services, or TGA for medical technologies).

Globally, the European Union is leading the charge with the AI Act, which introduces a centralised, risk-based regulatory framework.

4. Has the AI Act been passed?

Yes. The European Parliament voted to adopt the AI Act in March 2024, and it will become fully applicable from August 2, 2026. The AI Act is the first comprehensive law in the world specifically regulating artificial intelligence, and it introduces a risk-based approach to AI regulation.

As part of the AI Act, AI systems are categorised as unacceptable, high, medium, or low risk—with stricter rules for those that could significantly impact rights and safety, such as systems used in healthcare and law.

However, while the Act has been passed, it won’t come into full force right away. Most high-risk system requirements will apply from mid to late 2026, giving businesses time to adapt.

5. Who has recently passed laws to regulate artificial intelligence?

The EU was the first, and so far the only, major jurisdiction to respond to the prevalence of AI with comprehensive legislation.

The landmark AI Act, passed in March 2024, set a global precedent by introducing a risk-based framework for regulating AI systems.

While other countries, including Australia, the U.S., Canada, and the U.K., are exploring regulatory frameworks, none has yet enacted comparable legislation.

6. What is the EU AI Act Australia?

There is currently no Australian version of the EU’s AI Act. The EU AI Act is legislation passed by the European Union in 2024 to regulate artificial intelligence across its member states.

It's the world's first comprehensive AI law, designed to manage risks and ensure AI is used safely, transparently, and ethically. Australia has yet to pass similar legislation, but its Voluntary AI Safety Standard, released in 2024, drew inspiration from the EU's risk-based framework.

This Safety Standard, while not law, was created to encourage responsible AI development while full legislation is still being considered.