Compliance in AI:
Policies and Best
Practices
Facebook Group
www.facebook.com/groups/joinvbout
Follow our TikTok
www.tiktok.com/@vboutinc
20K+ MEMBERS - 10 YEARS
Our Community
Agenda
⚬ General Intro
⚬ Guest Presentation
⚬ Current AI laws
⚬ Q&A
14+ Core Tools Working Together
01. Landing pages
02. Form builder
03. Popups and site messages
04. Powerful lead tracking & scoring
05. Visual marketing automation
06. Predictive email campaigns
07. AI content assistant
08. Social media listening
09. Social publishing
10. E-commerce integration
11. Cross-channel analytics
12. Pipeline management
13. Calendar booking
14. Multi-channel AI chatbot
Our Speakers
LinkedIn - Email
Full Product Video Tour
VBOUT in One Minute
Importance of
Compliance in AI
Importance of AI Compliance
• AI regulation is one topic that unites
governments and industries
internationally.
• Standards are being created to ensure
responsible AI adoption with a focus on
accountability, transparency, and
fairness.
• Key concerns include mitigating bias,
privacy violations, and ethical misuse of
AI.
• 39% of U.S. businesses are investing in
responsible AI practices to meet
regulatory requirements.
Source: PwC
The AI Laws and
Regulations
EU AI Act
• The EU AI Act is the most comprehensive AI
regulation, adopted in 2024.
• It introduces compliance requirements based
on the risk AI systems pose to humans and
fundamental rights.
• The Act uses a risk-based approach with
four levels of risk: minimal, limited, high,
and unacceptable.
• AI systems interacting with users must follow
transparency obligations.
• Full compliance will phase in over 36 months
to allow businesses to adapt.
Source: CEPS
Penalties Under the EU AI Act
• Severe violations (Prohibited AI
systems): Fines up to €35 million or
7% of worldwide annual turnover,
whichever is higher.
• Lesser violations (misleading or
incomplete information): Fines up to
€7.5 million or 1% global annual
turnover, whichever is higher.
• Penalties apply to providers,
deployers, importers, distributors,
and notified bodies.
The White House Blueprint for an AI Bill of Rights
• It focuses on making sure AI systems are
safe and effective.
• The main purposes are:
• Preventing algorithmic discrimination.
• Protecting data privacy.
• Offering clear notice and explanation
about their use.
• Providing human alternatives and
fallback options.
• The aim is to uphold civil rights, equal
opportunity and democratic values in
deploying and using AI technologies.
Source: Enterra Solutions
Colorado AI Laws (SB24-205 & SB21-169)
• Colorado SB24-205 (Effective 2026): First
comprehensive U.S. state law regulating AI
systems, focusing on preventing algorithmic
discrimination.
• Developers must assess and mitigate bias,
while deployers must ensure transparency
and accountability.
• SB21-169 (Insurers and AI): Insurers using AI
must prevent unfair discrimination.
• They must implement governance, conduct
testing, and report to the Division of
Insurance to protect consumers from
discriminatory AI outcomes.
AI Video Interview Act – Illinois, USA
• A state law regulating the use of AI in
video interviews for job applicants.
• Companies using AI in hiring
processes must comply with the
Illinois AI Video Interview Act.
• Illinois also enforces the Biometric
Information Privacy Act (BIPA), which
governs the collection and storage of
biometric data, including data used in
AI systems.
Canada’s AI and Data Act (AIDA) 2024
• This act mandates a rigorous
assessment and mitigation of risks for
high-impact AI systems, ensuring
adherence to safety and ethical
guidelines.
• Entities under AIDA are required to
conduct risk assessments, establish
risk mitigation measures, and ensure
continuous monitoring.
• They should publicly disclose
information about the functioning,
intended use and risk management of
high-impact
AI systems.
Source: techstrong.ai
China’s New Generative AI Measures
• These measures apply to the use of generative
AI that provides services to the “public” within
the territories of China.
• They require service providers to:
• Protect users’ input information and usage
records.
• Collect and retain personal information in
accordance with the principles of
minimization and necessity.
• Establish mechanisms for handling
complaints and requests, and promptly
reply to individuals’ requests to correct,
delete, or mask their personal information.
Source: CryptoSlate
Countries With AI Frameworks
• Brazil AI Bill of Rights
• UK AI Safety Summit
• Saudi Arabia AI Ethics Principles
(SDAIA)
• Australia AI Ethics Framework
• South Korea National Strategy for AI
Source: techstrong.ai
How Does This Affect
Us?
Potential AI Misuse
AI and Photo Privacy
• Some AI apps, like FaceApp, use AI to
enhance photos but raise concerns about
data privacy.
• Controversy over the app’s privacy
policy sparked debate over data
ownership.
• Questions arose about who controls and
owns user data when using AI-based
services.
Source: Qualcomm
Deepfakes and Security Threats
• Deepfake technology creates fake
videos and poses serious security
threats.
• Experts warn deepfakes could spread
false information, undermining public
trust.
• This technology may threaten
national security and democracy if
misused by bad actors.
Source: Adobe
AI Bias in Hiring (Amazon 2018)
• Amazon’s AI hiring tool was biased
against women because it learned from
data reflecting a male-dominated tech
industry.
• The AI system favored male
candidates, highlighting how biased
data can lead to unfair outcomes.
• Amazon discontinued the tool,
recognizing the need to address
biases to prevent workplace
discrimination.
Implementing
Compliance Measures
Within AI
Evaluate Ethical Impacts and Ensure Transparency
• Conduct ethical assessments to address
potential biases and societal impacts in AI
models.
• Keep detailed records of AI operations for
compliance.
• Clearly inform users when AI makes
decisions and how it impacts them.
• For high-risk AI applications, perform Data
Protection Impact Assessments (DPIAs) to
assess how the system could impact user
privacy and compliance.
• This is especially important under GDPR
and other global privacy regulations.
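As an illustration of the screening step that precedes a full DPIA, here is a minimal Python sketch; the criteria and field names are illustrative assumptions, not an official checklist or regulatory schema:

```python
# Minimal DPIA screening sketch: flags whether a full assessment is needed.
# The criteria below are illustrative examples, not an official checklist.
HIGH_RISK_CRITERIA = {
    "processes_personal_data": "Does the system process personal data?",
    "automated_decisions": "Does it make automated decisions affecting individuals?",
    "sensitive_categories": "Does it use sensitive data (health, biometrics)?",
    "large_scale": "Is processing large-scale or systematic?",
}

def dpia_required(answers: dict) -> bool:
    """A full DPIA is warranted if any high-risk criterion applies."""
    return any(answers.get(key, False) for key in HIGH_RISK_CRITERIA)

answers = {"processes_personal_data": True, "automated_decisions": True}
print(dpia_required(answers))  # → True
```

In practice the screening answers would feed a documented assessment owned by a named reviewer, in line with the record-keeping bullet above.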
Apply Data Privacy and Protection Measures
• Only gather the data necessary for
the AI system’s purpose.
• Manage consent by obtaining and
allowing easy withdrawal of user
consent.
• Use anonymization or
pseudonymization to protect personal
data.
• Enable data access: Provide users
with the ability to access, correct,
delete or transfer their data.
Source: Twelvesec
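The pseudonymization bullet above can be sketched in a few lines of Python; the key value and its storage are assumptions for illustration (in practice the key lives in a secrets manager, separate from the pseudonymized data):

```python
import hashlib
import hmac

# Pseudonymization sketch: replace a direct identifier with a keyed hash.
# SECRET_KEY is a placeholder; store the real key separately from the data,
# otherwise the pseudonymization can be reversed by brute force.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed hash: the same input always yields the same token,
    so records stay linkable for analytics, but the raw identifier is never
    stored and cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("user@example.com")
print(token[:16])  # stable token prefix; the raw email is never persisted
```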
Train Employees and Ensure Accountability
• Educate your staff, providing regular
training on AI compliance and ethical
practices.
• Define clear roles and responsibilities
for individuals monitoring AI system
outputs.
• Develop accountability frameworks
that include audit trails, error tracking
and bias detection mechanisms.
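An audit trail like the one described above can start as simply as an append-only log of AI decisions with a named owner; the field names below are illustrative assumptions, not a regulatory schema:

```python
import datetime
import json

# Audit-trail sketch: append-only record of AI decisions for accountability.
# All field names are illustrative, not mandated by any regulation.
def log_decision(log: list, model: str, input_id: str, outcome: str, reviewer: str):
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "input_id": input_id,
        "outcome": outcome,
        "responsible_reviewer": reviewer,  # named owner, per defined roles
    })

audit_log = []
log_decision(audit_log, "lead-scorer-v2", "lead-123", "qualified", "j.doe")
print(json.dumps(audit_log[-1], indent=2))
```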
Use Privacy-Enhancing Technologies (PETs)
• Incorporate differential privacy to
ensure that individual data are
protected while allowing AI systems
to analyze aggregate data for
insights.
• Use federated learning to train AI
models on decentralized data sources
without needing to move personal
data.
• Ensure all personal data used in AI
systems is encrypted both at rest and
in transit to protect against
unauthorized access.
Source: A-Team Insight
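A minimal sketch of the differential-privacy idea from the first bullet, assuming a simple counting query with sensitivity 1 (the Laplace mechanism):

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Laplace mechanism for a counting query (sensitivity 1):
    noise scale = 1/epsilon, so a smaller epsilon gives stronger privacy
    at the cost of a noisier answer."""
    # The difference of two Exp(epsilon) draws is a Laplace(0, 1/epsilon) sample.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# The aggregate insight survives, while any single individual's
# presence in the data is masked by the noise.
print(dp_count(1000, epsilon=0.5))
```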
Continuously Monitor and Improve AI Systems
• Deploy monitoring tools and
techniques such as real-time analytics
and performance dashboards to
identify and address deviations
promptly.
• Conduct periodic reviews and updates
of your AI compliance program to
address emerging regulatory
changes.
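The deviation checks described above can be reduced to a simple threshold rule; the 5% tolerance and the accuracy metric are illustrative assumptions:

```python
# Continuous-monitoring sketch: compare a live metric window against a
# baseline and flag deviations for review. The tolerance is illustrative.
def check_deviation(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Return True if the current metric drifted beyond the tolerance
    (relative change), signalling the system needs human review."""
    return abs(current - baseline) / baseline > tolerance

baseline_accuracy = 0.92   # accuracy recorded at deployment
weekly_accuracy = 0.85     # accuracy from this week's monitoring window
if check_deviation(baseline_accuracy, weekly_accuracy):
    print("Deviation detected: trigger compliance review")
```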
Additional Resources and Tools
• Sample DPIA Template
• AI Compliance Checklist
• OneTrust (Data Protection Tool)
• TrustArc (Data Protection Tool)
• IBM AI Fairness 360 (Bias Detection
Tool)
• Google’s TensorFlow Privacy (Privacy-
Enhancing Tool for Differential
Privacy)
• PySyft (Privacy-Enhancing Tool for
Federated Learning)
Source: ico.org.uk
BOOK YOUR PERSONALIZED DEMO TODAY!
Book Your Demo
Thank You

Editor's Notes

  • #11: In Europe, the dominant source of AI governance is the European Union's Artificial Intelligence Act, which went into effect on August 1, 2024 and applies to all 27 member countries. It is expected to be fully implemented by August 2, 2026. Three key objectives: - Ensure trustworthy and ethical AI development. - Mitigate risks like bias and discrimination. - Promote transparency, accountability, and human oversight. Unacceptable risk Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include: Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics Biometric identification and categorisation of people Real-time and remote biometric identification systems, such as facial recognition Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval. High risk AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories: 1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts. 
2) AI systems falling into specific areas that will have to be registered in an EU database: Management and operation of critical infrastructure Education and vocational training Employment, worker management and access to self-employment Access to and enjoyment of essential private services and public services and benefits Law enforcement Migration, asylum and border control management Assistance in legal interpretation and application of the law. All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities. The lowest level of risk described by the EU AI Act is minimal risk. This level includes all other AI systems that do not fall under the above-mentioned categories, such as a spam filter. AI systems under minimal risk do not have any restrictions or mandatory obligations. However, it is suggested to follow general principles such as human oversight, non-discrimination, and fairness.
  • #12: Penalties for breaking the law: The regulation sets forward a three-tier structure for fines against infringements by AI system operators in general, supplemented by additional provisions for providers of the GPAI models and for Union agencies. The heftiest fines are imposed for violations related to prohibited systems of up to €35,000,000 or 7% of worldwide annual turnover for the preceding financial year, whichever is higher. The lowest penalties for AI operators are for providing incorrect, incomplete, or misleading information, up to €7,500,000 or 1% of total worldwide annual turnover for the preceding financial year, whichever is higher. Penalties for non-compliance can be issued to providers, deployers, importers, distributors, and notified bodies.
  • #13: https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  • #14: Colorado SB24-205 & SB21-169 In May 2024, Colorado became the first U.S. state to enact a comprehensive law governing AI systems with the passing of SB24-205. All requirements under the law will become effective on February 1, 2026. The law seeks to regulate algorithmic discrimination in AI systems and requires developers and deployers of high-risk AI systems to document AI system capabilities, limitations, and potential impacts on individuals and society. Developers must conduct bias and discrimination risk assessments and implement measures to mitigate these risks over the course of AI system lifecycles. Deployers must disclose AI use to consumers and ensure transparency and accountability, including providing explanations for AI decisions that affect individuals. SB24-205 also emphasizes data privacy and mandates compliance with existing laws. SB21-169 mandates that insurers using AI systems that rely on consumer data must ensure the systems do not result in unfair discrimination. Enacted on July 6, 2021, the law requires insurers to establish a governance and risk management framework, conduct regular testing, and report their findings to the Colorado Division of Insurance. The law seeks to protect consumers by holding insurers accountable for any discriminatory outcomes produced by their AI systems and requires corrective actions if discrimination is detected​. https://www.workforcebulletin.com/colorados-historic-sb-24-205-concerning-consumer-protections-in-interactions-with-ai-signed-into-law-after-passing-state-senate-and-house#:~:text=SB%2024%2D205%20will%20become,%E2%80%94including%20employment%2Drelated%20decisions.
  • #15: https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68
  • #16: https://coxandpalmerlaw.com/publication/aida-2024/
  • #17: Service Providers are required to manage the use of the services by: - Taking measures to prevent excessive dependence on or addiction to the services; - Guiding the users to understand and use Generative AI in a scientific, rational and legal manner; Suspending or ceasing services to users, if they are found to have violated laws or regulations, commercial ethics or social morality. https://www.twobirds.com/en/insights/2023/china/what-you-need-to-know-about-china%E2%80%99s-new-generative-ai-measures
  • #18: https://coxandpalmerlaw.com/publication/aida-2024/
  • #21: What is FaceApp? FaceApp is a popular mobile app that uses AI to edit and enhance photos. For example, it can make you look older, younger, or change your hairstyle or facial expressions. These features became viral, making the app widely popular. Where was the privacy concern? The controversy arose from FaceApp's privacy policy. When people uploaded their photos to the app for editing, the policy seemed to indicate that the company could store, use, or even own the photos users uploaded. People became concerned that their personal photos could be used in ways they didn't agree to, such as being shared with third parties, used for marketing, or even sold. What was the issue with data ownership? The heart of the debate was: Who owns your data (photos) once you upload it to an AI-powered service like FaceApp? The app’s policy suggested that they could keep or use those images long after users deleted the app or finished editing their photos, which made people uneasy about losing control over their personal data. An Example of the Controversy: Let's say you upload a selfie to FaceApp to see how you might look in 30 years. After the AI enhances the photo, you’re done using the app. However, because of the privacy policy, FaceApp might still have access to that photo, even if you delete the app. The concern is that FaceApp, or any company with similar policies, could potentially use or sell that photo to advertisers or other companies without your consent. The Broader Debate: This incident led to broader discussions about how AI companies should handle user data. People started questioning whether AI apps should have access to or ownership of personal data (like photos), and how transparent companies should be about how they use that data. It also raised concerns about data security: if a company is hacked, those photos and personal information could be exposed.
  • #22: What are deepfakes? Deepfakes are AI-generated videos or images that look incredibly real. For example, the AI can take a video of one person and replace their face with another person's, or alter their voice to make it seem like they said something they didn't. These videos can be so realistic that it's difficult for people to tell they're fake.
How do deepfakes work? Deepfake technology uses deep learning, a form of AI, to analyze many images or videos of a person, learning their facial features, voice patterns, and movements. Once the AI has enough data, it can generate a new video or image in which that person appears to be doing or saying something they never actually did.
Why are they dangerous? The danger comes from how deepfakes can be used to spread misinformation or cause harm. For instance, someone could create a deepfake video of a politician saying something controversial or damaging that they never said. If this fake video is released to the public, it could cause widespread confusion, distrust, and even harm national security if people believe the false information. Deepfakes can also be used in other ways, such as creating fake evidence in legal cases or manipulating people by making it seem like someone said something they never did.
An example of a deepfake threat: imagine a deepfake video is created showing a world leader announcing a fake military attack. If this video spreads before it's proven false, it could lead to panic, confusion, or even escalate tensions between countries. This is why deepfakes are considered a significant threat to security and democracy: people may stop trusting videos or media because they can't tell what's real and what's fake.
Deepfakes and public trust: experts warn that deepfakes could be used by governments or bad actors to undermine public trust in institutions like the media, governments, or even scientific bodies. If people can't trust what they see or hear, it becomes much easier to spread false information or create doubt about legitimate events. In the wrong hands, this technology could destabilize societies by eroding the trust people have in the information they receive.
  • #25: Bias detection and mitigation: regularly audit AI systems to identify and mitigate biases, especially in marketing, hiring, or customer-service applications. Bias can lead to unfair outcomes and non-compliance with ethical AI standards. Use diverse datasets and validate AI models to prevent discrimination against particular groups or demographics.
Fairness in AI decision-making: ensure that AI models treat all users equally and do not produce biased outcomes based on sensitive attributes (e.g., gender, race, or socioeconomic status). Conduct fairness audits as part of regular AI system reviews.
A DPIA (Data Protection Impact Assessment) is a specific type of risk assessment that focuses on the data protection implications of an AI system, especially when it involves processing personal or sensitive data. Proactive AI impact assessments (AIAs) and DPIAs help CTOs and CIOs identify and mitigate an AI technology's data privacy and compliance risks and determine the appropriate measures and safeguards to implement. https://dataprotection.ie/en/organisations/know-your-obligations/data-protection-impact-assessments
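The fairness audit described above can be sketched as a simple demographic-parity check: compare positive-outcome rates across groups and flag large gaps. The field names, sample data, and 10% tolerance below are all illustrative, not part of any regulatory standard:

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key, outcome_key):
    """Return the largest gap in positive-outcome rates across groups, plus the rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[outcome_key] else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: model approval decisions per applicant group.
decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
gap, rates = demographic_parity_gap(decisions, "group", "approved")
if gap > 0.1:  # flag for human review when the gap exceeds the chosen tolerance
    print(f"Potential disparity detected: {rates}")
```

Demographic parity is only one of several fairness metrics; a real audit would combine it with others (e.g., equalized odds) chosen for the use case.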
  • #26: Use techniques like data anonymization or pseudonymization to protect personal information while still allowing the data to be processed for AI models.
Customer awareness: inform customers about how AI is being used in your services, especially for marketing purposes (e.g., personalized recommendations or automated chatbots), and provide them with clear information on how their data is being used.
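As a minimal illustration of the pseudonymization technique mentioned above, direct identifiers can be replaced with a keyed hash so records stay linkable for analytics without exposing the raw identifier. The key, field names, and token length here are illustrative; in practice the key must be stored separately from the data and managed under your data policy:

```python
import hashlib
import hmac

# Illustrative only: a real key would come from a secrets manager, not source code.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(value: str) -> str:
    """Deterministically map an identifier to an opaque token via a keyed hash."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "clicks": 12}
safe_record = {"user_token": pseudonymize(record["email"]), "clicks": record["clicks"]}
print(safe_record)  # the same email always yields the same token, so aggregation still works
```

Note that pseudonymized data is still personal data under the GDPR (the mapping is reversible by whoever holds the key), whereas true anonymization is not.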
  • #27: Staff Training on AI Compliance: Provide regular training for staff on AI regulations, ethical AI principles, and best practices for ensuring compliance. This is crucial for teams involved in marketing, data processing, and AI development. Educate employees on recognizing and mitigating biases in AI and data analytics.
  • #28: Fortify your AI systems with advanced cybersecurity measures, including secure multi-party computation, intrusion detection systems, and robust encryption protocols, to defend against unauthorized access and cyber threats. Regularly conduct security audits and vulnerability assessments to ensure the resilience of your AI infrastructure.
Explanation of key terms:
Differential privacy: a technique that adds statistical noise to data so that insights can still be derived without exposing individual data points. It protects privacy by ensuring that even someone with access to a dataset cannot identify specific individuals. In AI, this lets systems analyze trends and patterns in large datasets without exposing personal information about any individual.
Federated learning: a machine learning technique in which a model is trained across multiple decentralized devices or servers holding local data samples, without the data being exchanged or stored centrally. Raw data never leaves users' devices; only model updates (not the data) are sent to a central server, keeping personal data secure and private.
Encryption at rest: protecting stored data (not being used or transmitted) by converting it into an unreadable format that can only be accessed with the proper decryption key, so data on servers and in databases is protected from unauthorized access.
Encryption in transit: protecting data being transferred between systems (e.g., between a user and a server) by encrypting it while in transit, so it remains secure across networks and cannot be intercepted or compromised.
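As a toy illustration of the differential privacy idea above, a count query can be answered with Laplace noise scaled to the query's sensitivity, so no single person's presence can be inferred from the result. The epsilon value and dataset here are illustrative and not calibrated for production use:

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise: a random sign times an exponential draw."""
    return random.choice((-1, 1)) * random.expovariate(1 / scale)

def private_count(values, predicate, epsilon=0.5):
    """Answer a count query with noise calibrated to its sensitivity."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1  # one person joining or leaving changes a count by at most 1
    return true_count + laplace_noise(sensitivity / epsilon)

ages = [23, 35, 41, 29, 52, 47, 31]
print(private_count(ages, lambda a: a > 30))  # near the true count of 5, but noisy
```

Smaller epsilon means more noise and stronger privacy; each repeated query spends additional privacy budget, which a real deployment has to track.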
  • #29: Establish continuous monitoring frameworks to systematically track and evaluate AI system performance over time. Deploy monitoring tools and techniques such as real-time analytics, anomaly detection algorithms, and performance dashboards to identify and address deviations promptly. Conduct periodic reviews and updates of your AI compliance program to incorporate lessons learned, address emerging regulatory changes, and integrate advancements in AI ethics and governance.
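The anomaly-detection part of the monitoring framework above can be sketched as a simple statistical check: compare the latest model metric against its recent history and alert on large deviations. The window data and 3-sigma threshold are illustrative choices, not a prescribed standard:

```python
import statistics

def check_for_drift(history, latest, threshold=3.0):
    """Return True if `latest` deviates from `history` by more than `threshold` sigmas."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # avoid division by zero on flat history
    return abs(latest - mean) / stdev > threshold

# Hypothetical daily accuracy readings from a deployed model.
daily_accuracy = [0.91, 0.92, 0.90, 0.93, 0.91, 0.92, 0.90]
assert not check_for_drift(daily_accuracy, 0.91)  # normal day: no alert
assert check_for_drift(daily_accuracy, 0.72)      # sudden drop: raise an alert
```

Production monitoring would track many signals (input distribution shift, latency, fairness metrics) and feed alerts into the periodic compliance reviews described above.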
  • #30: https://ico.org.uk/media/2258461/dpia-template-v04-post-comms-review-20180308.pdf
https://docs.google.com/document/d/17FQDlOdwVC0lhWnlEGXz0b2ubCOSeFeDoonIboW4iyc/edit?usp=sharing