ChatGPT, Generative AI Data, and Security Considerations
Presented By
Frank Xavier Wukovits
FRANK XAVIER WUKOVITS
Frank Xavier Wukovits is an Associate at Chiesa
Shahinian & Giantomasi PC (CSG Law) and practices
with the firm's Tech, Privacy, Data Innovations and
Corporate & Securities Groups. He advises clients
across the corporate lifecycle, including startups
and Fortune 500 companies, on matters such as
venture capital, mergers and acquisitions, and
corporate governance. Frank has been recognized
as a "Rising Star" in Business/Corporate by New
Jersey Super Lawyers (2023-2024) and was a finalist
for the New Jersey Law Journal's Innovators Award in
2024. He also serves as an adjunct professor at
Seton Hall University's Stillman School of Business.
Associate, CSG Law
DISCLAIMER
ATTORNEY DISCLAIMER: WHILE AI OFFERS OPPORTUNITIES AND BENEFITS, I DO NOT
CONDONE, RECOMMEND, OR ENDORSE YOUR OR YOUR ORGANIZATION’S USE OF AI
WITHOUT YOUR OR YOUR ORGANIZATION’S OWN RISK ASSESSMENT. MY OPINIONS
ARE MY OWN AND DO NOT REPRESENT THOSE OF MY FIRM. AI IS IN A NASCENT
STAGE OF DEVELOPMENT AND AS SUCH MAY "HALLUCINATE" AND CAN OFTEN GIVE
WRONG, BIASED, OR IRRELEVANT CONTENT.
THE INFORMATION PRESENTED IS NOT MEANT TO CONSTITUTE LEGAL ADVICE.
CONSULT YOUR ATTORNEY FOR ADVICE ON A SPECIFIC SITUATION.
THE INFORMATION PRESENTED IS CURRENT AS OF THE DATE OF THE ORIGINAL
RECORDING OF THE PRESENTATION. GIVEN THE DYNAMIC NATURE OF THE TOPIC,
PARTICIPANTS ARE ENCOURAGED TO CHECK THE RELEVANT GOVERNMENT WEBSITES
FOR THE MOST RECENT INFORMATION.
Definitions
• Artificial Intelligence (AI)
• Machine Learning
• Deep Learning
• Large Language Models (LLMs)
• Generative AI
• Agent
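One common way to picture how these terms relate: machine learning is a subfield of AI, deep learning a subfield of machine learning, and LLMs a deep-learning technique, while "generative AI" and "agent" describe capabilities that cut across that hierarchy. The sketch below is illustrative only; the nesting is an assumption of this example, not a formal taxonomy.

```python
# Illustrative sketch: a nested view of the defined terms.
# The nesting here is a common informal framing, not a formal taxonomy.
AI_TAXONOMY = {
    "Artificial Intelligence (AI)": {
        "Machine Learning": {
            "Deep Learning": {
                "Large Language Models (LLMs)": {},
            },
        },
    },
}

# Generative AI and agents cut across the hierarchy rather than sitting
# inside it: generative models (many of them LLMs) produce new content,
# and an agent wraps a model with tools and the ability to act.
CROSS_CUTTING = ["Generative AI", "Agent"]

def depth(tree, term, level=1):
    """Return the nesting level of `term` in the taxonomy, or None if absent."""
    for key, sub in tree.items():
        if key == term:
            return level
        found = depth(sub, term, level + 1)
        if found:
            return found
    return None
```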
Benefits and Capabilities
Bigger Firms
• Document Review
• Auditing
• Information Technology
• Aggregation and Analysis
Smaller Firms
• Automation of Administrative Tasks
• Research
• Advisory
• Aggregation and Analysis
Additional Use Cases
• Gathering of supplemental material data
• Forecasting sales
• Streamlining workflows
• Entering transactions
• Double-checking data accuracy
• Integrating with existing ledger applications
• Benchmarking
Regulatory Complexity and Compliance
• Technical Enhancements
• Automated Compliance Validation
• Explainable Compliance Mechanisms
Ethical and Responsible AI Development
• Technical Enhancements
• Bias Detection and Mitigation
• Reproducibility
Non-Compliance Use Cases
AI Tools or use AI Tools that:
• Have not been approved for use by your organization
• Violate any customer or vendor agreement
• Involve uploading or inputting any content or data of any kind, including but not limited to, passwords, source code, company financials, or confidential, proprietary, or sensitive information belonging to the company or one of its partners, vendors, or clients*
• Invade individuals' privacy, such as through unauthorized data collection or surveillance, which could lead to legal repercussions
• Violate applicable anti-discrimination laws or company policies
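The prohibition on uploading passwords, source code, or confidential data can be partially operationalized with a pre-submission screen that flags risky prompts before they reach an AI tool. This is a minimal sketch; the regex patterns and category names are illustrative assumptions, and a real deployment would rely on a proper data-loss-prevention product, not a few regular expressions.

```python
import re

# Hedged sketch: flag prompts that appear to contain categories of data
# the policy prohibits uploading. Patterns are illustrative, not complete.
SENSITIVE_PATTERNS = {
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "api_key": re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in `prompt`."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def submit_allowed(prompt: str) -> bool:
    """Block submission to the AI tool when any sensitive category is flagged."""
    return not screen_prompt(prompt)
```

A screen like this catches only obvious patterns; it complements, but does not replace, training and approval workflows.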
Non-Compliance Use Cases (Continued)
AI Tools or use AI Tools that:
• Produce sexually explicit, pornographic, illegal, offensive, defamatory, or libelous content; or directly copy or plagiarize existing works from artists or creators in violation of copyright laws
• Intentionally mislead or deceive the audience
• Promote or propagate any form of discrimination, stereotypes, or offensive material that could offend or harm specific groups
• Artificially manipulate data, statistics, or information to present a false narrative or misleading insights
• Fully automate decision-making, including employment decisions
Misinformation and Manipulation
• Social engineering attacks
• Erosion of trust in AI systems
• Potential societal impacts
Source: Rob van der Veer, Software Improvement Group, https://www.linkedin.com/posts/robvanderveer_ai-aisecurity-activity-7274736168255074304-g9yq
Ownership/Confidentiality Issues
• What you input may be used to train or fine-tune that model.
• How you interact with the tool can also give clues to your use.
• You may not own the output.
• You may not be able to license an output.
Overreliance on AI-Generated Information
• Security breaches
• Legal issues
• Reputational damage
Ethical Concerns
• Bias in AI decision-making
• Fairness and discrimination
• Accountability for AI-generated content
• Transparency
Third-Party Considerations
Representations and Warranties
• Typically, a service provider will represent that it owns or otherwise
has sufficient rights to allow the customer (i.e., your organization) to
own, use, or benefit from the service provider’s deliverable or service.
• However, if such a service provider uses the AI system to produce a
deliverable, the AI may itself produce an infringing, inaccurate, or
vulnerable output when performing its functions.
Indemnification
• We need to carefully consider how to allocate liability for the AI's
functionality.
• When an AI system’s generative output results in a liability, it may be
difficult to determine whether the provider of the AI (i.e., the service
provider) or its user (i.e., your organization) caused the event giving
rise to liability.
Such liability needs to be negotiated.
Insurance
• Consider the various forms of insurance coverage, including:
• Commercial general liability (CGL) insurance
• Cyber insurance
• Errors and omissions (E&O) coverage
• Business interruption coverage
• Directors and officers (D&O) insurance
• There are many possible ways AI systems can incur liability.
• Connect with a broker to determine which coverage, if any, applies to a given situation and, if none does, whether coverage can be expanded or added, or whether coverage is even available.
Source: Tabassi, E. (2023), Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST Trustworthy and Responsible AI, National Institute of Standards and Technology, Gaithersburg, MD, [online], https://doi.org/10.6028/NIST.AI.100-1, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=936225 (Accessed February 23, 2025)
Effective Management
Board Members and Executives
AI Committee
AI Policies
AI Procedures
AI Training
Enforcement
Leadership
• Educating (what AI is and is not)
• AI via internal development, service provider, or acquisition?
• Informing of opportunities
• Cost-benefit analysis
• Leveraging existing customer data to maximize profits
• Synthesizing data to make informed decisions
• Informing of risks
• What is our organization's risk threshold?
• Compliance with applicable laws (current and imminent)
• Discrimination
• Hallucinations
Leadership
• What does the market show?
• What do our partners think?
• How are competitors pursuing these opportunities and risks?
• What about our customers?
• Expectations?
• Notices?
• Consents?
• What are regulators saying?
• EU AI Act
• FTC
• State Laws
The European Union Artificial
Intelligence Act (EU AI Act)
• Unacceptable Risk
• High Risk
• Limited Risk
• Minimal Risk
Federal Trade Commission
• The FTC is an independent agency of the United States government created by the FTC Act, which authorizes the FTC to commence a district court civil action by its own attorneys.
• The FTC enforces Section 5(a) of the FTC Act, 15 U.S.C. § 45(a), which prohibits unfair or deceptive acts or practices in or affecting commerce.
• Misrepresentations or deceptive failures to disclose a material fact constitute deceptive or unfair practices prohibited by Section 5(a) of the FTC Act.
• Acts or practices are unfair under Section 5 of the FTC Act if they cause or are likely to cause substantial injury to consumers that consumers cannot reasonably avoid themselves and that are not outweighed by countervailing benefits to consumers or competition. (15 U.S.C. § 45(n))
FEDERAL TRADE COMMISSION v. RITE AID CORPORATION & RITE AID HDQTRS. CORP.
Outcome
• Deletion and third-party deletion
• Consumer notification
• Investigation and response to consumer complaints
• Clear and conspicuous notice
• Deletion of biometric information
• Implementation of a data security program
• Independent third-party assessments
• Annual certification
FEDERAL TRADE COMMISSION v. RITE AID CORPORATION and RITE AID HDQTRS. CORP.
Statement of Commissioner Alvaro M. Bedoya:
• "It is my view that Section 5 of the FTC Act requires companies using technology to automate important decisions about people's lives – decisions that could cause them substantial injury – to take reasonable measures to identify and prevent foreseeable harms. Importantly, these protections extend beyond face surveillance."
• "Indeed, the harms uncovered in this investigation are part of a much broader trend of algorithmic unfairness – a trend in which new technologies amplify old harms."
FEDERAL TRADE COMMISSION v. AUTOMATORS LLC, et al.
Samuel Levine, Director of the FTC's Bureau of Consumer Protection:
"The defendants lured consumers into investing millions in online stores supposedly powered by artificial intelligence and made empty promises that they could coach consumers into achieving success and profitability… Today's action holds the defendants accountable for this scheme by banning them from the coaching business, barring bogus claims, and requiring redress to defrauded consumers."
Outcome
• Permanent ban on offering business opportunities or coaching for e-commerce platforms
• Prohibition on deceptive earnings claims
• Prohibition on preventing negative reviews
• Turn over possessions
• A total monetary judgment of $21,765,902.65
Committee
Key stakeholders
• Technology
• Legal
• Compliance
• Data privacy/Data protection
• Operational/Enterprise Risk
Management
• Information Security
• Marketing
• Human Resources
• DEI
Priorities
• Protection of Intellectual Property
• Data Integrity and Reliability
• Addressing Privacy Concerns
• Mitigating Emerging Threats
Policies
• AI Workplace Policy
• Acceptable Use Policy
• Information Security Policy
• (Internal) Information Privacy Policy
• (External) Consumer Privacy Policy
AI SOPs
• Change management
• Assigning roles and responsibilities
• How do we use approved AI tools?
• How do we administer training?
• How do we administer testing?
• How do we get our partners involved?
• How do we respond to complaints?
• Create manuals
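The "approved AI tools" step of an SOP can be sketched as a simple allowlist gate that also produces a record for the enforcement and documentation steps discussed later. The tool names, roles, and approval table below are hypothetical placeholders, not a recommended configuration.

```python
# Hedged sketch of an SOP's "approved AI tools" check: use is gated on
# organizational approval. Tool names and roles are hypothetical.
APPROVED_TOOLS = {
    "internal-chat-llm": {"all-staff"},
    "contract-review-assistant": {"legal", "compliance"},
}

def may_use(tool: str, role: str) -> bool:
    """True only if the tool is approved and open to this role."""
    roles = APPROVED_TOOLS.get(tool)
    if roles is None:  # tool was never approved by the organization
        return False
    return "all-staff" in roles or role in roles

def log_decision(tool: str, role: str) -> str:
    """Record the decision, supporting documentation of enforcement."""
    verdict = "ALLOW" if may_use(tool, role) else "DENY"
    return f"{verdict}: role={role} tool={tool}"
```

Keeping the approval table in one place makes change management straightforward: adding or retiring a tool is a single, reviewable edit.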
AI Training/Enforcement
• Presentations
• Participation
• Workshops
• Violations (e.g., unauthorized use or misuse of AI tool)
• Terminations
• Consistent enforcement
• Liability considerations
• Document investigations and employment decisions
Sources
• OWASP Top 10 for LLMs - LLM and Gen AI Data Security Best Practices Guide 1.0
• IBM. (2025, February 14). What are large language models (LLMs)? https://www.ibm.com/think/topics/large-language-models
• Artificial Intelligence Risk Management Framework (AI RMF 1.0)
• Wendt, D. W. (2024). The cybersecurity trinity: Artificial intelligence, automation, and active cyber defense. Apress.
• Tabassi, E. (2023), Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST Trustworthy and Responsible AI, National Institute of Standards and Technology, Gaithersburg, MD, [online], https://doi.org/10.6028/NIST.AI.100-1, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=936225 (Accessed February 23, 2025)
QUESTIONS?
Contact Information
Call or email and I'd be happy to speak with you more in-depth or answer any questions.
Frank Xavier Wukovits
csglaw.com
(973) 530-2388
fwukovits@csglaw.com