Artificial Intelligence & Law/Regulation in Australia and Internationally
The Trajectory to Date
Author: Nina Rossi | Date Published: 9 January 2024
Today’s topic is the role of law and regulation in the AI space, and what is currently underway in this regard. Whilst Australia has not yet adopted any specific laws or regulations in relation to AI, and in particular Generative AI, across the world the question of regulation in this space is at the front of many minds. The EU is perhaps the most advanced here, and its proposed laws are discussed in more detail below; however, if we are going to adopt AI to an increasing degree, it is important that some measures are taken to protect those using it.
As a term, AI has not been uniformly defined at this stage. The OECD, however, in its 2019 Recommendation of the Council on Artificial Intelligence, states that 'artificial intelligence system' means “software that is developed with [specific] techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. This accords with what we have experienced of AI in general use to date.
For those less familiar, there are a few iterations of AI out there. The AI in common use, which includes ChatGPT and others, is termed Generative AI: AI designed to create new content via models or algorithms applied to text, photos, videos, code, data or 3D renderings, drawing on the vast amounts of data on which the AI is trained. The models 'generate' new content by referring back to their training data and making new predictions, and over time may learn so much that humans can no longer supply them with genuinely new content. Unlike other iterations of AI, Generative AI is capable of creating unique and new content, even if other systems are capable of more complex tasks. A whole separate discussion could be had about how far we should allow such AI tools to learn and operate independently of humans, and the potential impacts of this, but for the purposes of this speech, let's assume AI will remain a tool capable of being monitored and controlled by humans, despite its capacity.
It is Generative AI that is likely to have the widest application in the world for the majority of people, and it therefore has the potential to produce both serious benefits and serious harms. For this reason, it is the form of AI that has received the most focus from both business and government to date, or at least the one most spoken about in the media.
AI is, however, a challenging space in which even to consider regulation, as it is ever-changing and capable of amplifying, perpetuating or exacerbating outcomes, including those that are inequitable or undesirable for individuals and communities. This is both the benefit and the risk of the technology, and hence the need to consider if and how to regulate AI.
What is Happening Internationally
EU
As mentioned, the EU, as is often the case, is determined to become the first part of the world to implement laws or regulation on the use of AI and the manner in which it uses information. The reason for this, as publicly stated, is that the EU wishes to implement better conditions for the development and use of AI technology. In summary, AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes; hence the need for laws as a means of controlling what could otherwise become, in effect, a sentient entity in its own right.
The EU laws are, at this stage, to be structured around different risk levels associated with the nature of the AI's use and the field to which it is applied. The intention is for these laws to apply to all AI systems available on the market or in actual use in the jurisdiction. At this stage the three anticipated categories are:
- Unacceptable Risk
- High Risk
- Limited Risk
Unacceptable Risk
Unacceptable risks are those that may endanger humans, and are noted by the European Parliament as consisting of AI technology or uses that achieve the following outcomes:
- “Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socioeconomic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition”
Despite this, there is already scope for biometric identification where there is a lapse in time between the event and the use of the information, subject to a request being made by the courts. Uses falling within the Unacceptable Risk category are otherwise intended to be, in general, prohibited.
High Risk
The high-risk category is then determined based on an assessed negative risk to human safety or fundamental rights, following market assessment. The EU Parliament has outlined the following as possible areas which may attract a high-risk rating:
“1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law.”
Where a high-risk use of AI may occur, the EU is at this time suggesting that measures will be required such as proper disclosure that the content in question was created or generated by AI, as well as technical measures to prevent the use and replication of data that was illegally obtained, subject to copyright or improperly referenced. These measures are particularly important for those in creative fields and for publishers, where concern over the protection of copyright material from AI is certainly real and only increasing.
It is expected that those uses deemed high risk will be the subject of stringent regulation, especially in relation to risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity.
Limited Risk
Other uses of AI will be considered as presenting only 'limited' or low risk. Limited-risk AI use would be subject to very light transparency obligations. Areas of AI identified at this stage as of limited risk include chatbots, emotion recognition systems, biometric categorisation systems, and AI systems that generate or manipulate image, audio or video content (i.e. deepfakes). Low-risk AI use will likely only require the adoption of nominated codes of conduct as guidance.
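To make the tiered structure above concrete, here is a minimal sketch, in Python, of how the obligation-by-risk-tier model might be expressed. The tiers follow the draft Act as described above, but every name, mapping and obligation list in the code is an illustrative assumption, not the text of the proposed law.

```python
# A minimal sketch (not the AI Act itself) of the tiered-obligation model
# described above. All use-case labels and obligation lists are illustrative
# assumptions loosely following the European Parliament examples quoted earlier.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # registration, oversight, documentation
    LIMITED = "limited"            # light transparency obligations only

# Hypothetical mapping of use cases to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "real_time_biometric_id": RiskTier.UNACCEPTABLE,
    "medical_device_component": RiskTier.HIGH,
    "employment_screening": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
}

def obligations_for(use_case: str) -> list[str]:
    """Return the illustrative obligations attached to a use case's tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LIMITED)
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited"]
    if tier is RiskTier.HIGH:
        return ["register in EU database", "human oversight",
                "disclose AI-generated content", "risk management system"]
    return ["disclose that the user is interacting with an AI"]

print(obligations_for("employment_screening"))
```

The point of the design is that obligations attach to the assessed use rather than to the underlying technology, which is why the lookup is keyed by use case and not by model type.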
Of particular importance for everyone here today, especially those who may develop, provide or use AI systems, is that this EU law is intended to apply even to persons located in a third country, where the output produced by those systems is used in the EU. Enforcement will be overseen by nominated government bodies, including decision-making and the application of corrective measures for non-compliance. Fines are also contemplated as a penalty for non-compliance.
Similarly, under such laws, developers and even users of open-source AI products will need to be aware of the reasonably likely uses of such products, including in areas considered high risk, as the application of the law and its associated requirements will be based on the impact on individuals, not the nature of the technology.
USA
In other areas of the world, such as the United States of America (USA), which has to date taken a lenient approach towards AI, there are now real calls for regulation, and a blueprint for an AI Bill of Rights has been prepared. Naturally, as a country with a large tech industry, the USA is less willing to jump into a regulatory, mandatory framework; to date the space has been governed by voluntary standards and self-regulation. Nevertheless, likely as a result of increasing concerns about cyber security and the collection of private information, otherwise very loosely protected in the USA, there are now calls for more regulation in the AI space.
As a starting point at least, the White House has prepared a blueprint for an AI Bill of Rights, focusing on core factors that should guide the creation, use and operation of AI systems. Such factors include safety for users, proactive and effective protection from harm, prevention of discrimination, protection of privacy, knowledge and awareness of where AI is being used, and finally retention of the right to opt out and seek human assistance as an alternative. It is particularly interesting, and good to see, the prominence given to privacy as a recognised right, especially noting the degradation of privacy at the hands of big tech. How such measures will ultimately protect privacy and related rights remains to be seen. It is also of concern that this Bill of Rights appears to offer something more like guidelines for the implementation and provision of AI tools, without any real, enforceable measures for preventing misuse and abuse. It is, for example, great to be able to opt out, but the question remains as to what can be done if that process is difficult and does not guarantee removal from other sites and tools which may share data with the AI.
In a similar light, the NIST AI Risk Management Framework (AI RMF) has been developed for voluntary adoption and use. It is focused on improving the ability to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems, rather than on the actual use and experience of consumers in the marketplace. The aim here is again to address, before consumption by consumers, the risks and potential harms that AI may cause. It is difficult, however, especially amongst the tech community, to reach consensus on which risks are most important and most in need of being addressed, leaving gaps in the measures that can be taken at these earlier stages, and possibly gaps that could only be identified once the AI is being used in the real world. The framework also adopts measures similar to ISO standards and so will, in the end, allow for some risk tolerance.
It is, however, worth noting that the NIST Framework at the least argues for the consideration of factors such as privacy and fairness in the adoption and implementation of AI systems.
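As a rough illustration of what voluntary adoption of such a framework might look like in practice, here is a minimal sketch of a risk-register entry of the kind a development team could keep. The field names, scales and example values are all hypothetical assumptions, not anything prescribed by the NIST AI RMF itself.

```python
# Illustrative only: a hypothetical risk-register entry of the kind a team
# adopting a voluntary framework such as the NIST AI RMF might maintain during
# design and development. Every field name and value here is an assumption.
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    system: str                 # the AI system or component under review
    risk: str                   # the trustworthiness concern identified
    lifecycle_stage: str        # e.g. "design", "development", "deployment"
    severity: str               # team's own scale, e.g. "low"/"medium"/"high"
    mitigations: list[str] = field(default_factory=list)
    accepted_residual_risk: bool = False  # such frameworks allow some tolerance

register = [
    AIRiskEntry(
        system="resume-screening model",
        risk="disparate impact on protected groups",
        lifecycle_stage="design",
        severity="high",
        mitigations=["bias testing on held-out demographic slices",
                     "human review of all rejections"],
    ),
]
print(f"{len(register)} open risk(s); first: {register[0].risk}")
```

Note the `accepted_residual_risk` flag: as observed above, frameworks of this kind ultimately allow for some risk tolerance rather than eliminating risk outright.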
China
In China, the Cyberspace Administration of China has also moved to regulate AI. On 15 August 2023, new rules came into effect, considered a world first in generative AI regulation, imposing restrictions on the use of AI by companies, including in regard to the data used to train AI systems and the output created. Iterative drafts of these rules, however, saw them watered down. For example, as highlighted in an article published on East Asia Forum on 27 September 2023, “Requirements to act within a three-month period to rectify illegal content and to ensure that all training data and outputs are ‘truthful and accurate’ were removed.” The rules as implemented have also been noted to apply only to public-facing AI systems, which limits their protective effect. Though this does not mean the Chinese laws are simply tokenistic, it remains unclear whether they will ultimately have substance and meaning within the broader realm of AI protection measures and achieve real outcomes. There are, however, aspects of these laws that other countries may learn from and adopt, such as the proposed licensing regime governing who may develop and bring an AI tool to market, which could help ensure that proper process and adherence to law occur, though potentially at some cost to future innovation in the space.
United Kingdom
The UK, another key player in AI, is presently working on a set of pro-innovation regulatory principles. In spring 2023, the UK Government published its policy paper, “A pro-innovation approach to AI regulation”, which was open for consultation until 21 June 2023.
This approach consists of five principles:
- Safety, security, and robustness (i.e. AI systems in the UK need to have been trained and built on robust data)
- Appropriate transparency and explainability (i.e. how the system works should be explainable to their users)
- Fairness (i.e. AI should not undermine individuals’ legal rights)
- Accountability and governance (i.e. AI systems must have appropriate oversight of the way they are used and clear lines of accountability)
- Contestability and redress (i.e. there need to be avenues for redress if an AI system causes harm)
There is, however, no present intention to make these principles into legislation; instead, the UK is seeking industry guidance on how to implement a best-practice framework for those offering AI systems, in combination with a central monitoring system and risk register, as well as other measures, to oversee more broadly how the industry is operating. Setting technical standards that AI is required to meet is another measure contemplated in the UK Government’s white paper, last updated 3 August 2023.
It is also the UK’s intention to address AI in law by updating existing legislation, i.e. privacy laws, data protection laws and consumer laws, to underwrite what may occur within the AI space. Though this is a sensible approach and would be effective, it is never a quick process to amend existing legislation, and so any real change and impact may not come for some time, though in the long term it would have significant effectiveness in regulating and protecting rights. Overall, however, the measures proposed by the UK appear to have the greatest structure to date, in terms of implementation and genuine impact and protection, and so it will be interesting to see how this progresses.
Global Entities/United Nations
Finally, we may quickly note that at the international level, the Organisation for Economic Co-operation and Development (OECD) adopted a (non-binding) Recommendation on AI in 2019, UNESCO adopted Recommendations on the Ethics of AI in 2021, and the Council of Europe is currently working on an international convention on AI. These are, however, largely little more than recognitions of AI and its importance and potential impact globally, aimed at pushing sovereign governments to consider regulation and protections in this space, as discussed above. They can nevertheless indirectly influence individual company and provider policies, through flow-on effects and impacts on share values, in the pursuit of corporate social responsibility and to avoid the consequences of not doing the same as competitors.
Australia
Australia is otherwise late to the party in regard to AI and has not made substantial progress in this space to date. Some risk management and mitigation strategies for those training AI systems have been provided in the report titled “Rapid Response Information Report: Generative AI - language models (LLMs) and multimodal foundation models (MFMs)”, whilst the AI Ethics Principles are intended at this stage as voluntary only, despite the intention to create a safer environment for AI users. These AI Ethics Principles propose that AI providers deliver AI systems which meet the following goals:
- “Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
- Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
- Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
- Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
- Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
- Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
- Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
- Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled”.
The Australian principles provide neither concrete means of implementation nor methods for ensuring accountability and the protection of privacy and security. In and of themselves, these principles do not presently appear to lead to much, and it appears that more will be required before any real protections are offered in Australia in regard to AI. Whilst such decisions are left in the hands of providers, there will be no uniformity of application and no guarantee of protection, despite any statements or intentions to provide it.
Issues with the Laws Proposed to Date
It is generally my view that such laws as those proposed by the EU will inevitably be required to correlate with protections provided under privacy laws, such as the General Data Protection Regulation (GDPR) in Europe, a version of which Australia is soon to receive through the amendments to the Privacy Act currently under discussion. This will require a greater focus, and greater resources, on compliance from all businesses, which is likely to be costly, as well as clear and simple ways to implement the measures required to comply with the law and to maintain that compliance. Similarly, though some frameworks, like the NIST framework in the USA, propose consideration of factors like privacy, there remain no concrete requirements limiting breaches of these existing legal rights. Until such requirements are established, in a manner where the AI can in fact differentiate and remove from its results information that may impinge on such rights, there remains a risk that information obtained may be used incorrectly and that others will be exposed, with limited recourse. To some degree this is where laws or measures such as those proposed by the UK may be better equipped to address the variables, as at least by that methodology, measures and methods of implementation will potentially already be established.
Further, the laws currently suggested in the EU lack measures allowing for query or recourse by users themselves. As they stand, an individual will not have the ability to make a complaint directly to authorities or to seek damages via court process where there is a failure to comply with the AI law. Existing laws in turn do not directly provide recourse in regard to AI, though in the common law context there is still scope for development through precedent and case law. Nor is there a clear guideline for how such laws are to interact with laws in other jurisdictions, and in turn how such laws, despite an intention to apply across borders, are actually to be enforced across borders. Again, this is where building AI into existing law, as proposed by the UK, may have a greater impact, as and when those changes do in fact occur.
Similarly, the laws proposed around the world will at some stage be required to address the big and bold question of how all of this is to correlate with, and continue to provide for, the protection of intellectual property and trade secrets. As we have probably all seen, there is a great deal of concern amongst the population, exemplified by the actors' strikes in Hollywood, over the adoption of AI technology, in particular within the creative arts, where intellectual property rights are heavily relied upon. The proposed laws do not at this time address the question of whether AI may be held to account if it uses materials subject to intellectual property rights without proper acknowledgement or payment of licences, when it is itself not human. If not the AI itself, then with whom could liability sit in such circumstances?
Further, will there be requirements for cross-checking of information obtained, as is presently suggested as a good-practice measure, so as to avoid errors in data input and output? In November 2023, authors from Davies Collison Cave noted that in Australia there are gaps in the law covering copyright claims against AI-generated music that replicates the voice and likeness of an existing music artist. Even holding a trade mark over one's name or brand may provide limited protection against AI, as it does not restrict the creation and spread of infringing content. Significant issues between the use and development of AI and the protection of rights therefore remain. Unless these systems can identify such issues autonomously, there will remain a requirement to verify, as it would be expected that the user of AI content should otherwise be held liable if that content was the subject of intellectual property rights.
To this extent, the more hands-off approaches, as considered by Australia and the USA for example, also appear to miss the mark in ensuring continued protection of well-established legal practices and rights.
Many questions and issues remain on the table to be discussed and determined; however, things are moving, and moving quickly, and interesting times are certainly on the horizon, very different from what we have known so far.
References
Australian Government, Department of Industry, Science and Resources, “Australia’s AI Ethics Principles”, https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
Bell, G., Burgess, J., Thomas, J., and Sadiq, S., Australian Council of Learned Academies, “Rapid Response Information Report: Generative AI - language models (LLMs) and multimodal foundation models (MFMs)”, 24 March 2023
Green, Stuart, Sadler, Lachlan and White, Courtney, Davies Collison Cave, “Ok, Computer: AI, Music and IP Law in Australia”, 1 November 2023, https://www.mondaq.com/australia/trademark/1384350/ok-computer-ai-music-and-ip-law-in-australia
Department of Commerce (USA), International Trade Administration, “UK AI Regulations 2023”, 21 June 2023, https://www.trade.gov/market-intelligence/uk-ai-regulations-2023
Department of Science, Innovation and Technology (UK), “Policy Paper – A pro-innovation approach to AI regulation”, 3 August 2023 (last updated), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#correction-slip
European Parliamentary Research Service, EU Parliament Briefing – Artificial Intelligence Act, June 2023, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
European Parliament, “EU AI Act: first regulation on Artificial Intelligence”, 8 June 2023, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Gikay, Asress Adimi (Brunel University London), The Conversation, “How the UK is getting AI regulation right”, 8 June 2023, https://theconversation.com/how-the-uk-is-getting-ai-regulation-right-206701
National Institute of Standards and Technology, US Department of Commerce, “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”, January 2023, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
Organisation for Economic Co-operation and Development (OECD), “Recommendation of the Council on Artificial Intelligence”, 2019 (amended 8 November 2023), https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449
The White House, “Blueprint for an AI Bill of Rights – Making Automated Systems Work for the American People”, https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Roberts, Huw (University of Oxford) and Hine, Emmie (University of Bologna), East Asia Forum, “The future of AI policy in China”, 27 September 2023, https://www.eastasiaforum.org/2023/09/27/the-future-of-ai-policy-in-china/