Artificial Intelligence - should we regulate AI ?

Conference presentation, 1 June 2024


Thank you for that kind introduction. 

First, as the commentator, I compliment Katharine Kemp on a well-written and insightful paper.

Second, the usual disclaimer: anything I say represents my own personal views, not those of Norton Rose Fulbright or any client.  

Third, I have just completed the Oxford University Programme on Artificial Intelligence, so acknowledge Oxford University as a source of information.  However, any errors are mine alone.

Commentary overview

In my commentary I’m going to cover three themes that complement Katharine’s focus on Artificial Intelligence (AI) foundation models:

  • First, why is AI now attracting our attention?   Katharine refers to this question as: "What happened in 2023?"
  • Second, what are the current regulatory themes and concerns in regulating AI foundation models?
  • Third, what next for Australia?   In essence, I’ll consider Katharine’s modest proposals in the context of global regulatory developments, but from an Australian perspective.

If we have time at the end, I may briefly touch on a couple of interesting issues we face in the regulation of AI and pose some questions.

Why is AI attracting our attention ?

As Katharine mentions, there is no universally agreed definition of AI.  Rather, AI encompasses a range of different technologies that enable machines to exhibit ‘learning’.  Colloquially, to think like a human. 

At its most basic, AI is the ability of machines to learn from data to make a predictive inference from given inputs without being explicitly programmed in each step.   This predictive power of AI has been the key to its commercial utility to date.  Better predictions enable higher quality and more nuanced decisions, in turn enabling more efficient use of resources.     

As Katharine eloquently explains, a key driver of recent innovation in artificial intelligence is the concept of “deep learning” using artificial neural networks.

Artificial neural networks loosely replicate the processing and data storage techniques of the human brain, essentially with billions of interconnected artificial neurons processing disaggregated information in parallel to recognise patterns and make predictions.

Under the deep learning process, a neural network is trained by ingesting large volumes of training and validation data. The training determines the weighting of the neurons in the network, known as the parameter weightings. The training process can take significant time: in the case of the foundation model “GPT-4” (while the full details are confidential), training reputedly consumed some 34 days of supercomputer processing, spread over some 6 months, at an estimated cost of some AUD100 million.
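To make the idea of ‘parameter weightings’ concrete, here is a deliberately tiny training loop in Python (NumPy). It is an illustrative sketch only: the toy architecture, data and learning rule are my own assumptions and bear no relation to how GPT-4 itself was trained.

    import numpy as np

    # Toy training set: the model must learn to predict y from X by example alone.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                            # 200 examples, 3 input features
    y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)   # the hidden rule to be learned

    # A tiny one-hidden-layer network: its "knowledge" is nothing but these weightings.
    W1 = rng.normal(scale=0.1, size=(3, 8))
    W2 = rng.normal(scale=0.1, size=(8, 1))

    learning_rate = 0.5
    for step in range(2000):
        hidden = np.tanh(X @ W1)                             # hidden-layer activations
        prediction = 1 / (1 + np.exp(-(hidden @ W2)))        # predicted probability
        # Gradient of the cross-entropy loss with respect to each weighting
        grad_out = (prediction - y[:, None]) / len(X)
        grad_W2 = hidden.T @ grad_out
        grad_W1 = X.T @ (grad_out @ W2.T * (1 - hidden ** 2))
        # "Training" is simply nudging the weightings downhill, over and over
        W2 -= learning_rate * grad_W2
        W1 -= learning_rate * grad_W1

    accuracy = ((prediction > 0.5) == (y[:, None] > 0.5)).mean()
    print("trained weightings:", W1.shape, W2.shape, "accuracy:", accuracy)

The point is simply that ‘training’ amounts to repeatedly adjusting those numerical weightings until the model’s predictions improve.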

Once training of the foundation model is completed, the relevant knowledge is locked within the AI foundation model in the form of the weighting of the individual neurons.  These weightings can potentially be extracted as a data set to allow replication of the foundation model.   Replicated models can be supplemented and fine-tuned with additional knowledge, creating various bespoke AI applications.
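As a hedged illustration of that replicate-and-fine-tune workflow, the sketch below uses the PyTorch framework with a hypothetical miniature model and file name; the point is simply that the model’s ‘knowledge’ travels as a file of weightings that can be reloaded and then supplemented with a new, trainable component.

    import torch
    import torch.nn as nn

    # A stand-in "foundation model" (hypothetical; real foundation models are vastly larger).
    base = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))

    # 1. The model's knowledge lives entirely in its weightings, which can be
    #    exported as a data set.
    torch.save(base.state_dict(), "foundation_weights.pt")

    # 2. Anyone with the same architecture can reload those weightings to replicate the model.
    replica = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
    replica.load_state_dict(torch.load("foundation_weights.pt"))

    # 3. The replica can then be fine-tuned into a bespoke application: freeze the
    #    original weightings and train only a small new "head" on domain-specific data.
    for param in replica.parameters():
        param.requires_grad = False
    bespoke = nn.Sequential(replica, nn.Linear(128, 10))     # e.g. 10 bespoke output classes
    optimiser = torch.optim.Adam(
        (p for p in bespoke.parameters() if p.requires_grad), lr=1e-3
    )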

The objective – think like a human…

Given the AI ideal is for machines to think like a human, how does AI currently fare?

Digital computers operate at the speed of electrical signals, which travel at just under the speed of light. Biochemical neurons in the brain are much slower. A modern computer can run calculations a billion times faster than a human brain.

For example, a billion seconds is some 32 years. At that relative speed, a future AI could potentially replicate some 32 years of human learning in just one second. This gives you an indication of the stunning potential of AI.

However, while comparatively slow, a human brain can undertake calculations simultaneously at a rate of 20 million billion synaptic operations per second. In computer terms, this is a processing speed of 20 million Gigahertz. To achieve this parallel processing capability, the human brain has some 100 billion neurons and 150 trillion synapses. I’m sure such mind-boggling numbers will be causing quite a few of those very same neurons and synapses to fire in your brain right now!
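For those who like to check the arithmetic, the figures above reconcile in a few lines of Python (the brain figures are, of course, the rough estimates quoted, not precise measurements).

    # Rough back-of-the-envelope checks of the figures quoted above.
    seconds_per_year = 60 * 60 * 24 * 365.25
    print(1e9 / seconds_per_year)         # a billion seconds is roughly 31.7 years

    brain_ops_per_second = 20e15          # ~20 million billion synaptic operations per second
    print(brain_ops_per_second / 1e9)     # equivalent to 20 million "GHz" (1 GHz = 1e9 operations/s)

    neurons, synapses = 100e9, 150e12     # ~100 billion neurons, ~150 trillion synapses
    print(synapses / neurons)             # roughly 1,500 synapses per neuron on average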

In 2024, supercomputers around the world already exceed this processing capability. Australian scientists in Western Sydney, for example, are working on a neuromorphic supercomputer known as "Deep South" that will achieve a similar number of synaptic operations per second.

A neuron in the human brain is roughly comparable to a weighted parameter in an artificial neural network. GPT-3 involves 175 billion weighted parameters, so on this rough comparison nearly double the number of neurons in the human brain. GPT-4, which underpins Microsoft Copilot, reportedly has roughly 10 times the number of weighted parameters as there are neurons in a human brain.

However, while these foundation models represent a paradigm shift in AI and exhibit some human-like reasoning, they are still specialised not generalised.  They are not yet true "Artificial General Intelligence".  We still have some way to go before machines can truly think like a human.

2024: Big data, deep learning, next generation chips

Katharine asked what happened in 2023.  I’m going to assess where we are in June 2024.   There are three further relevant innovations:

  • First, the number of transistors on semiconductors has been increasing over time.  The latest artificial intelligence chips, as displayed on screen, have more than 200 billion transistors. Meanwhile, artificial neural networks are using transistors more efficiently to reduce the aggregate number of transistors required.  This means it is becoming easier to replicate AI foundation models across different devices over time, assisting commercialisation.
  • Second, we are in the era of ‘Big Data’.  In order to train neural networks with billions of parameters, the training data set must be similarly vast.  The accuracy of a foundation model is determined by the volume and accuracy of the training data used to create it.  Continuing concerns regarding training error, and hence so-called ‘hallucinations’, have led to the use of supplemental knowledge.  Copilot, for example, supplements GPT-4 with access to data from personal Microsoft services, plus the Internet.
  • Third, innovation has occurred in large language models.  As Katharine mentions, a new configuration of neural network known as “transformer architecture” was developed by Google scientists in 2017 (sketched in outline below).  This architecture enables more efficient and effective training of neural networks on large data sets.  OpenAI experimented with using super-sized training data sets, creating GPT-3.  OpenAI hit global headlines when GPT-3 was supplemented with a web-enabled natural language interface to create ChatGPT, which was accessible through an Internet browser.
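For the technically curious, the core operation of the transformer architecture mentioned above, ‘self-attention’, can be sketched in a few lines of Python (NumPy). This is a bare-bones illustration under my own simplifying assumptions; it omits the learned projections, multiple attention heads and stacked layers of a real model.

    import numpy as np

    def self_attention(tokens):
        # Minimal scaled dot-product self-attention over a sequence of token vectors.
        # A real transformer derives queries, keys and values via learned weightings;
        # here the raw embeddings are used directly to keep the sketch short.
        d = tokens.shape[-1]
        scores = tokens @ tokens.T / np.sqrt(d)                     # token-to-token relevance
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
        return weights @ tokens                                     # each output blends all tokens

    sequence = np.random.default_rng(1).normal(size=(5, 16))        # 5 tokens, 16-dimensional embeddings
    print(self_attention(sequence).shape)                           # -> (5, 16)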

Ultimately, to answer Katharine’s question as to what happened in 2023, we have experienced a happy confluence of semiconductor evolution, innovations in neural network architecture, availability of Big Data, and the supply of AI applications at low cost to consumers, even free in some instances.  Collectively, this has captured the world’s imagination and generated the current AI hype.

Global antitrust and regulatory issues

Next, I’ll address the question as to what the current global antitrust and regulatory themes and concerns are in regulating AI foundation models. 

With the increased profile and ability of AI, governments have become concerned about the risks of AI.  We are not so much concerned about the existential risk to humans popularised in science fiction movies such as Terminator.  We are more concerned with lesser (but still serious) risks involving potential harm to society, including antitrust and consumer protection risks.

As Katharine indicates, there are many illustrative historical examples where algorithms have caused harm.  In Australia, the so-called ‘Robodebt’ scandal is an unfortunate case study in the problems that can arise where staff place too much trust in algorithms.

AI is now an integral part of our competitive landscape

This then brings us to competition law.  Fundamentally, modern competition law is generic in its application, but considers the bespoke competitive circumstances of each market.   As such, competition analysis will necessarily require consideration of AI as an integral part of the future competitive landscape.   Specific competition issues may also arise in the context of the use of AI.

In her paper, Katharine has grouped the potential harms into three key buckets:

  • competition concerns with foundation models and digital platform power, which she explains as largely involving concerns arising from the risks of vertical integration and/or the risks of entrenching pre-existing market power;
  • consumer protection concerns, which she identifies to include lack of transparency, misleading conduct, unfair practices, and poor quality or unsafe AI models and applications; and
  • broader risks from foundation models, which she categorises as including environmental, privacy, labour market, economic, geopolitical and safety concerns.

Katharine has then made some modest proposals, including with respect to the regulation of AI in Australia going forward.

At the outset, as Katharine has mentioned, there are clear difficulties in over-regulating fast-moving innovation markets. First, such regulation can stifle innovation at a considerable opportunity cost. Second, such regulation can be circumvented, or even ignored, by more reckless industry participants in their pursuit of a first-mover advantage and billions in profits. The latter can result in a very extensive liability mess to be resolved by lawyers and courts after the fact, as the recent cryptocurrency scandals possibly demonstrate.

My approach in this commentary is to consider Katharine’s modest proposals in the context of a quick global tour of the key antitrust jurisdictions, namely the European Union and the United States, and then to consider what is proposed in Australia.

So, ladies and gentlemen, please buckle your seat belts and let’s begin...

Developments in the European Union

We fly first from Sydney to Brussels.  I’ll start with the European Union, which, outside China, is probably the most advanced jurisdiction in the world to date in regulating AI.

The main regulatory initiative in Europe has been the Artificial Intelligence Act.  The Act received final approval from the Council of the EU on 21 May 2024, hence only two weeks ago, and is due to enter into force in the coming months. As such, this conference session is perfectly timed.

The Act sets out a tiered regulatory framework where the level of regulation is matched to the perceived level of risk.  The Act classifies AI applications by their risk of causing harm.   Certain harmful AI practices are prohibited.  Greater regulation is imposed on higher risk sectors, such as health and finance.   A variety of new institutions are created, including a European Artificial Intelligence Board.

The European AI Act forms part of the suite of regulation that now governs digital markets in the EU under the EU’s Digital Strategy, including the GDPR, the Digital Services Act, and the Digital Markets Act.   As such, the AI Act has a GDPR flavour, with an emphasis on protecting individuals and preserving human rights.   The regulatory approach is fairly interventionist, with a focus on transparency and accountability.

There are a few points relevant to competition law:

  • First, providers of general-purpose AI will be required to maintain technical documentation, including details of training and testing processes and evaluation results.  Such information will be useful to competition authorities in understanding AI models in the context of competition investigations.
  • Second, the European Commission has indicated that it will view AI competition issues partly through the prism of the Digital Markets Act, which is the EU’s new sectoral competition regime for digital markets.
  • Third, the European Commission has been active in upskilling on AI competition regulation, including running a consultative process earlier this year on the intersection of generative AI with competition law.

Developments in the United States

That was Europe. Now let’s quickly jump on a trans-Atlantic flight from Brussels to Washington DC and continue our global tour, now in the United States.

I won’t focus as much on US legislative initiatives, as antitrust regulation in the US is evolving more through the regulators, not Congress.  However, various States are enacting their own AI legislation, including, for example, the State of Colorado a few weeks ago.  The US seems to be heading towards a patchwork of State-based regulation.

As you may know, draft AI legislation has been introduced in Congress but has not attracted bipartisan Democrat and Republican support.  This proposed legislation is light-handed in its approach, given concerns that excessive regulation may stifle innovation, particularly as many key global AI providers are US companies.  The proposed federal legislation has a greater focus on certification of AI models to ensure they meet minimum quality and safety standards.

Moving then to the US antitrust regulators, they are currently striving to give effect to President Biden’s executive order of July 2021, which shifted the focus of US antitrust enforcement more towards fairness and social justice.  Relevantly, artificial intelligence was frequently mentioned in the various sessions at the ABA Spring Meeting in Washington DC last month, which I was lucky enough to attend.  MLex commented in its follow-up commentary that the advent of AI was the standout conference theme.

The Federal Trade Commission (FTC) claimed at the ABA Spring Meeting that it was the leading antitrust authority in the world in its aggressive regulation of artificial intelligence.  Most enforcement has occurred under Section 5 of the Federal Trade Commission Act, which prohibits unfair methods of competition affecting commerce.  Under Chair Lina Khan, the FTC issued a new Policy Statement on Section 5 in November 2022, leading to significant changes in the application of Section 5, extending it to conduct that "violates the spirit of antitrust laws".

Relevantly, the FTC described the AI environment as “chaotic” due to its rapid evolution, hence an absence of documentary evidence was impeding fact-finding in FTC investigations. 

There were also a number of other thoughts on artificial intelligence expressed at the conference:

  • First, regulators were concerned about the unique economics of AI foundation models, particularly their vast development costs and reliance on huge data sets.  Such economics could entrench market power by raising barriers to entry.  
  • Second, regulators were concerned that companies could potentially avoid merger control in AI by acquiring human talent, with reputed millions of dollars being paid to attract key personnel, necessarily requiring a greater focus on competition in innovation and labour markets.
  • Third, there was much commentary on the potential for communal use of algorithms in a manner that raises potential concerns, including in relation to information sharing and algorithmic price collusion. 
  • Finally, there was also a significant focus on general consumer protection, including that AI was facilitating more sophisticated fraud, such as voice cloning and deep fakes.  The Australian Competition and Consumer Commission (ACCC) attendee at the conference commented that such concerns were taken particularly seriously by the ACCC in Australia.

Developments in Australia

So, that was our quick tour of the United States.  We can now leave Washington DC and fly home to Sydney.   What has been happening here in Australia?

The Human Technology Institute (HTI) published a paper last year which provided an overview of AI governance in Australia.   HTI concluded that Australia was lagging in the design and implementation of AI-specific regulation. 

HTI called for Australian corporates to develop a deeper understanding of the use of AI in their existing operations as well as to better appreciate the legal obligations attaching to AI, including competition laws and consumer protection obligations.

To date, there is no generally applicable law regulating AI in Australia; rather, there has been a voluntary set of AI ethics principles at the federal level, plus some State-based AI guidance.  Earlier this year, the Australian government consulted on the safe and responsible use of AI. The federal government proposed development of a federal risk-based AI framework with various regulatory features graduated to the level of risk.

I was involved from the dual perspectives of technology law and competition law in assisting with the Law Council of Australia’s submission to the federal government in response.  The Law Council advocated for a precautionary approach where regulation would only be considered if there was evidence that existing laws and regulations were insufficient. The Law Council did not advocate the adoption of any particular regulatory model, but rather indicated Australia has an opportunity to learn from the experiences of other jurisdictions.

More recently, we have seen the establishment of an artificial intelligence group to advise the federal government.  We have also seen a Productivity Commission report and Senate Select Committee inquiry.   As you will appreciate, this is a red hot area of current policy debate. To illustrate the divergent views on the need to regulate AI, the Productivity Commission's conclusions are summarised in the following quote by eminent Australian competition economist Stephen King:

"“The transformative potential of artificial intelligence can seem daunting. Certainly, there are risks that require regulation. But calls for new AI-specific regulations are largely misguided.

Most uses of AI are already covered by existing rules and regulations, such as consumer, competition, privacy and anti-discrimination laws. These laws are not perfect, and many have well-recognised limitations that are already being examined by government. AI may raise new challenges for these laws, for example, by making it easier to mislead consumers or by AI algorithms helping businesses to collude on prices.

The key however, is that the laws exist. We generally don’t need new laws for AI. Rather, we need to examine existing regulations and better explain how they apply to the uses of AI.”

This then brings me to Katharine’s modest proposals in her paper in relation to fair and effective competition and consumer protection. 

A threshold point is that the overall regulation of AI across the globe is very much in its infancy.  Indeed, AI regulation is evolving in real time.  Such regulation will shape the competitive landscape for AI and alter the competitive dynamics within markets as AI continues to evolve.  As such, antitrust issues are not divorced from wider issues in regulating AI.  The global regulatory responses to these issues will very much determine the nature of AI innovation and competition and help to shape our future world.

Bearing this caveat in mind, Katharine’s modest proposals have merit:

  • Katharine calls for interdisciplinary collaboration.  The global experience demonstrates that the regulatory issues we are facing are indeed multi-dimensional and multi-disciplinary.  
  • Katharine calls for development of regulatory expertise and resources.  I agree.  AI is complex and not well understood by lawyers or regulators at this time.   The FTC has commented that AI documentation is chaotic, hence regulators may be facing a rather slippery climb up the AI learning curve.
  • Katharine calls for international alignment with other regulators.  I assume co-operation will occur as a matter of course to pool scarce global expertise.  The greater challenge will be global regulatory harmonisation, but that is also an issue we face more generally in any form of regulation across the world. 

Finally, Katharine calls for a specific unfair practices provision.   I express no view on that, but do note that the FTC has been using the US equivalent to regulate artificial intelligence in the United States in the absence of a federal sectoral regulatory regime.  The extent to which this could be appropriate for Australia would need to be carefully considered, because we already have an extensive range of consumer protections in the Australian Consumer Law.

Further observations

That ends my commentary on Katharine’s paper.  However, I’m also going to live a little dangerously and test my luck by skating on thin ice.

I’ll speculate as to some issues we may face as competition lawyers in addressing what is effectively a ‘black box’ algorithmic architecture, including from both a regulatory and compliance perspective.    

How do we regulate an algorithmic ‘black box’ ?

This then takes me to a final issue, namely the future challenges we may face as lawyers in dealing with algorithmic ‘black boxes’.

When faced with an algorithmic pricing decision, one of our first steps as competition lawyers would be to ask questions directed at understanding how the algorithm worked.  As lawyers, we want to understand the facts as the route to the truth, hence we would start drilling into the detail.  For example, in a price-fixing investigation, has the algorithm actually co-ordinated prices? 

Herein lies the challenge.  Artificial neural networks are something of a ‘black box’, but in an unusual way.  As I understand it, it is theoretically possible to extract the numerical weightings of the billions of neurons in an artificial neural network to provide complete transparency.  However, such information would be entirely meaningless to mere mortals, given the internal algorithmic logic of the AI would be fundamentally unknown.
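To illustrate the point with a toy example (using the PyTorch framework and a hypothetical miniature model of my own devising), one can extract every numerical weighting of a network and still learn nothing about its internal reasoning:

    import torch.nn as nn

    # A toy network standing in for a foundation model with billions of weightings.
    model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 1))

    total = 0
    for name, tensor in model.named_parameters():
        total += tensor.numel()
        # Every weighting is fully "transparent" as a number...
        print(name, tuple(tensor.shape), tensor.flatten()[:3].tolist())

    # ...yet none of those numbers, alone or together, explains why the model
    # reached a particular decision, such as a particular price.
    print(total, "weightings extracted")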

This then points to a more general issue we face in 2024, which is one of assessing the quality and accuracy of an AI foundation model.   Other jurisdictions have so far sought to address this issue either by requiring good record-keeping regarding training data and processes or through potential certification of AI models.  The question arises: will that be enough?

Thankfully I don’t need to offer answers in this presentation.  I am merely the commentator posing a few interesting questions.  Innovation is also currently occurring in providing greater transparency into AI. However, I do suspect that this will be a matter that lawyers, regulators and the courts may well be grappling with for many years to come.

What next ?

This brings us to the final question: what next?

For some fun, I asked Microsoft Copilot to predict the future of AI. 

Going forward, Copilot assures me that AI will continue to achieve rapid progress.  OpenAI and Microsoft, for example, currently have a long-term partnership under which Microsoft provides cloud-based supercomputing capability to underpin the research, deployment and commercialisation of artificial intelligence by OpenAI.   Supercomputers speed up the development of AI by allowing AI models to train more quickly on larger, deeper and more granular datasets.

OpenAI released GPT-4 last year. GPT-4 is a multi-modal AI which can combine data sets from different sources, hence replicating the human brain’s ability to process information from multiple sources (i.e., our five human senses such as sight and hearing).  GPT-5 is currently under development and reputedly due for release later this year.   Beyond this, the next wave of innovation is likely to be multi-modal AI models that can generate a full artificial reality environment.

One insight from Copilot is that AI will be progressively deployed into our commercial and competitive landscape.   Essentially, it will change our world. However, as mentioned, we still have a long way to go before we achieve full artificial general intelligence.  

Given some of the nuanced competition issues that are likely to arise, as well as the likely difficulties in dealing with ‘black box’ algorithmic architectures, hopefully lawyers will still have jobs for many years to come notwithstanding the implementation of AI into the legal profession.

Many thanks for your time and I hope you found this interesting.
