Global Governance of Generative AI: the Best Way Forward?
Lilian Edwards
Professor of Law, Innovation & Society
Newcastle University
Lilian.edwards@ncl.ac.uk
Twitter: @lilianedwards
Flinders, November 2023
(artwork by James Stewart, Edinburgh University, using MidJourney)
Bender, Gebru, Mitchell: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ACM FAccT, 2021
What this talk isn’t about.. (6/11/23)
Summary
• What is generative AI?
• Risks and harms
• Global models for
governance
• Assessment
• The future for Australia?
A. What is generative AI?
“Generative AI”; “general purpose AI”; “foundation models”; “frontier AI”
Types of large “foundation” models
• GPT-3 (OpenAI/Microsoft) (text prompt to text output) (June 2020)
• “Large Language Model” or LLM
• ChatGPT, November 2022
• GPT-4, March 2023
• Multimodal, ChatGPT Plus and via API
• Integration into Bing
• Bard (Google)
• DALL-E 2, 3 (text to images – OpenAI)
• Copilot (prompt generates computer code – GitHub/OpenAI)
• Make-A-Video (text to video – Meta)
• Stable Diffusion (open source – text to image (T2I) – Stability AI)
• Midjourney (commercial API – text to image (T2I))
• HarmonAI – makes music from prompts
• WuDao (LLM – China)
B. What’s the problem? (a) Disinfo/misinfo/deepfakes
https://www.bbc.com/news/world-us-canada-65069316
(b) “Hallucination”
P. Hacker:
“The propensity of ChatGPT
particularly to hallucinate when it does
not find readymade answers can be
exploited to generate text devoid of any
connection to reality, but written in the
style of utter confidence”
https://www.education.sa.gov.au/parents-and-families/curriculum-and-learning/ai#
(c) Bias & stereotyping
(Images: “nurse” as rendered by Stable Diffusion; “doctor” as rendered by DALL-E 2)
(d) Labour issues
(e) Environmental harms
(f) Competition/anti-trust
…and more
Who are we regulating? Ecology of upstream providers and downstream deployers
(Image: New York Times)
C. Global AI Regulation & the rise of generative AI
Models for governance of generative AI
• Comprehensive, mandatory regimes eg EU AI Act
• Vertical new regimes of regulation – China
• No mandatory regulation but… +++
• No new regulation – UK
• Pre-existing relevant laws
• Eg Copyright
• Eg Data protection
• Voluntary guidelines – “co-regulatory” instruments
• Private ordering
Solution 1: “comprehensive” – the EU AI Act, before gen AI
AIA “risk-based” approach
• Unacceptable risk – ‘complete’ prohibition, 4 examples – Article 5
• High risk – fixed categories of risky domains, based on intended use; “essential requirements” including dataset quality, human oversight
• Limited risk – transparency obligations for a few AI systems (chatbots, deepfakes, emotion ID, biometric categorisation) – Article 52
• Minimal risk – codes of conduct – Article 69
Photo source: European Commission, Digital Strategy website
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
High-Risk AI systems (Annex III)
• Annex III – services intended for these uses..
• Biometric identification and categorisation of natural persons;
• Management and operation of critical infrastructure;
• Education and vocational training;
• Employment, workers management and access to self-employment;
• Access to and enjoyment of essential private services and public services and benefits;
• Law enforcement;
• Migration, asylum and border control management;
• Administration of justice and democratic processes
• Recommender algorithms… (EU Parl)
High-Risk AI systems (Requirements) – Arts. 8–15
• Risk management system
• Data and data governance (“data quality”)
• Technical documentation
• Transparency and provision of information to users
• Human oversight
• “Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse”
• Accuracy, robustness and cybersecurity
(see Edwards, AI Act explainer for the Ada Lovelace Institute, 2022)
ChatGPT enters the chat…
• No “intended use” !!
• Definition of gen AI conflicted - OECD? Who regulates?
• Should duties/liability lie on developers or deployers?
• Practical power & profit benefit vs unforeseeable risk & restraint of innovation
• High risk, limited risk, the future of AI?
• Copyright & creative industries special worries
• Environmental special worries
• Competition: primarily a matter for the Digital Markets Act??
• Open vs proprietary models - risks & benefits
EU Parliament proposed “Foundation model”
duties, May 2023
• Pre-audit of design, testing etc to assess,
remove or mitigate foreseeable risks
• Data quality duties – as with high risk AI
• Performance on metrics to be checked
throughout lifecycle - by independent experts
• Environmental by design
• Information sharing to downstream deployers – but while maintaining trade secrets (?)
• Be transparent as to copyright works used
• Public register of foundation models
Solution 2: New vertical rules – China
Addiction?
“prevent users' over-reliance
upon or addiction to the
AIGC”
Censorship?
“ensure AI generated content (AIGC)
is legal, consistent with the core
values of the socialist system and free
from discrimination and fake
information”
Transparency+
“guide users to scientifically understand
and rationally use the AIGC”
Providers need a licence from the security services
Solution 3(a) – no new law
• UK “pro-innovation” “world-leading”
approach
• March 2023 White Paper
• Running against EU trend towards hard
regulation of AI
• Sectoral regulators eg ICO, CMA, FCA to
coordinate on regulation and apply
crosscutting “principles”
• No new powers, no new law.. So far
• But UK industry selling into EU will still
need to meet EU AIA standards!
• ?? AI Safety Institute = global hub for [what type of?] regulation?! Law haven??
Solution 3(b): Pre-existing laws
• Discrimination/equality law
• Liability/product liability
• Product safety
• Copyright
• Andersen v. Stability AI
• Content regulation, hate speech, libel, advertising, fake news
• Privacy & data protection
• Italy Garante ChatGPT ban
• Competition law
• Labour law
• Human rights
Solution 3(c) – international voluntary principles
• OECD Principles on AI, 2019
• “Four years on” review, 2023
• G7/GPAI/OECD “Hiroshima”
Principles for “Advanced AI”
• Guiding principles
• Code of Conduct
• “risk based approach”
• Endorsed by EU, 30 October 2023
• Identify, evaluate, and mitigate risks
across the AI lifecycle
• Publicly report advanced AI systems’
capabilities, limitations and domains
of appropriate and inappropriate use
• Information sharing across countries
and reporting of incidents
• Implement robust security controls
across AI lifecycle
Solution 3(c) – national voluntary principles: US
US Draft Blueprint for AI Bill of Rights, Oct 2022 & NIST standards work
• US Biden/Harris Executive Order, Nov 2023
• Looks good, but only actually binds public agencies where the President had pre-existing powers
• US’s own AI Safety Institute
• Practical, near-term emphasis:
• generative misinformation;
• watermarking for transparency;
• AI-generated robocalls deceiving the elderly;
• content moderation
• Will operationalize NIST technical framework for reducing AI risks
• Using govt procurement tools
Private ordering: empirical work, Jan–Mar 2023
T&C of generative AI services surveyed across four issue areas: copyright (eg service); privacy & data protection; consumer protection & dispute resolution; illegal/harmful content, content moderation & misinfo.
• Text to text (language models) – egs ChatGPT 4.0, Clova (S Korea) – 6 models
• Text to image (T2I) – egs Lensa, DALL-E 2.0 – 4 models
• Text to video/audio – eg Synthesia (T2Audio) – 6 models
Findings:
• T&C routinely assign risk and liability for harmful or infringing outputs entirely to users
• “Platformisation”: positioned as intermediary services, like social media or search, even though actually publishers?
• But evade incoming regulation of “traditional” platforms
Australia?
• “Safe and Responsible AI in Australia”,
Dept of Industry, June 2023
• Domestic sectoral contributions eg
eSafety Commissioner on deepfake
CSAM
• Takeaways?
• Complex governance landscape
• Key areas: competition, govt use of AI
• “relatively small, open economy”
• → Global governance harmonisation important
• Spectrum from voluntary to mandatory – leans towards former
• “Risk management” toolkits for risk-based cases – clever pick ’n’ mix!
Assessment of global governance of genAI
• Will the EU AIA become a “gold standard” global model, like the GDPR?
• “General purpose AI” section rushed & fits badly with basic framework of AIA & EU digital acquis
• Over-regulation? Too soon? Anti-innovation?
• But we said that about social media & search – get in fast this time!
• Voluntary / self regulation models
• Do we need international harmonization on AI? If so, a race to the bottom – vague promises
eg Bletchley - or the top – a Council of Europe Convention on AI model? FRIA?
• Is debate on AI safety being fanned by the tech industry as a self-regulatory strategy – to avoid the detailed regulation of current harms the AIA envisages, in favour of unlikely future risks?
• Self and co-regulation mainly negotiated between industry and politicians, for growth, not rights
• Where is the voice of those most affected by AI harms?? UK AI Summit had zero UK civil society
at it. Almost no Global South.
• “Global” governance?
• Rule by rich white technocratic Western men rather than states?
• Tech industry as the new states?
• Transnational conflicts loom: A “Beijing” effect as well as a “Brussels” effect?
Editor's Notes

  • #5: Chatbots, journalism, idea generation, copywriting, coding assistance, summarizing, virtual meeting bots, AI friends
  • #16: Access to data, compute, fine-tuning resources. Computationally expensive and retraining slow → large tech co dominance. GPT-4 training cost >$100m; environmentally worrying
  • #18: EU – soft power over US; China – keeping control over data, info & its tech industry; US – traditionally permissionless innovation, but now AGI
  • #23: Issues: over-inclusive (no emphasis on generality in “ability, task or output”); important for classification
  • #25: Recommender algos; synthetic AI; generative AI
  • #30: Note OECD Principles on AI 2019 endorsed by 46 countries; see 2022 follow-up report
  • #34: If the key US providers (OpenAI, Anthropic) are so worried about x-risk, why don’t they stop?