Privacy and AI #19

In this edition of Privacy and AI

SUCCESSFUL AI USE CASES IN ORGANIZATIONS

• Successful AI Use Cases in Legal and Compliance

• Successful AI Use Cases for Chief Procurement Officers and other sourcing leaders

AI COMPLIANCE

• Call for inputs - emotion recognition systems in the workplace or education

• Use of automated risk classification to control fraud in grants considered illegal

AI GOVERNANCE

• Building a Foundation for AI Success: A Leader’s Guide

• The role of Boards in AI governance

• Uses of AI by the Dutch Government

• Use of AI in Financial Services in the UK

• AI Management Essentials (UK gov)

• Historical Analogues That Can Inform AI Governance

• Large Language Models explained briefly (3Blue1Brown)



SUCCESSFUL AI USE CASES IN ORGANIZATIONS

Successful AI Use Cases in Legal and Compliance

Many organizations recognize the potential of AI and make increasing AI adoption one of their main priorities, but AI does not fall from the sky. Enterprise users of AI need to match their needs with an AI tool that satisfies them.


There are also external factors that do not help, such as AI vendors overselling the capabilities of their products, and consultants advising on AI applications from a playbook without having seen examples successfully implemented in the industry.

What people leading AI adoption need, I think, are examples of concrete, real, proven and successful AI use cases implemented by companies. This will help turn AI adoption awareness into practical outcomes. And even if what succeeded in one company may not be useful to another (companies differ in size, resources, skills, timelines, etc), this may serve as inspiration to explore within their departments.

With this in mind, I prepared summaries of successfully implemented AI use cases for business leaders, including the company name, the AI use case and, where available, the outcomes of the implementation.

In this release of Successful AI Use Cases in Organizations, I show how other companies (and one public body) have adopted AI tools in Legal and Compliance.


• Vodafone: Vertex AI for contract analysis

• Fluna: Automated analysis and drafting of contracts

• Justicia Lab: AI assistant for legal aid for asylum seekers

• IPRally: Patent document search tool

• Canton of Aargau (Switzerland): Anonymization solution at courts

Link to the post here

Download the list here


Successful AI Use Cases in Organizations for Chief Procurement Officers and other sourcing leaders


• BMW: Revamping procurement operations with genAI


• The Home Depot: Sidekick app for inventory management

• Workday: AI to accelerate document processing

• Merck: AI for life science sourcing

Link to the post here

Download the list here



AI COMPLIANCE

Call for inputs - emotion recognition systems in the workplace or education

As usual, the Dutch AP is taking the lead on AI matters. This time it released a call for comments on the prohibition of the use of emotion recognition systems (EMS) in the workplace and in education.



This is one of the potential areas where private companies will need to stop using AI systems.

- The guidance highlights that recruitment is included in the definition of "workplace", which was not entirely clear (at least for me)

There are still other questions that would need further clarification:

1) Does the prohibition include the incidental recognition of emotions from employees?

These would be cases where the main targets of the EMS are customers (eg, train passengers, amusement park visitors), but, given that the deployer's employees would also be present alongside the customers, the employees' emotions may be captured as well.

In my personal opinion, cases of incidental collection of emotions should be out of the scope of the prohibition.

Reasoning: The purpose of the system is what defines the prohibition, not the fact that some employees could be subject to the EMS. Responsible deployers should not use the data for different purposes, similar to what currently happens under privacy laws, and should provide safeguards.

The AI Act refers to developing EMS 'for this specific purpose' or using them 'to infer' emotions in the workplace. If the purpose of emotion recognition is to know the emotions or intentions of customers (eg, happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement), the system is neither built for the 'specific purpose' of inferring, nor used to infer, the emotions of individuals in the workplace.

The AI Act also recognizes that the use of EMS for other purposes, different from capturing the emotions or intentions of employees, is allowed - ie, medical or safety reasons. So even in these two exceptions (using EMS for medical or safety reasons), there might be incidental capturing of the emotions or intentions of other employees (for instance, administrative or supporting employees).

I assume this will be subject to debate when more use cases arise.

2) Does it cover persons who are not under a regular employment contract but are freelancers or contractors? I think they should be covered, given that in these situations an 'imbalance of power' (rec 44) may still exist.

3) What is the definition of "education institution"? Does it cover all institutions providing any kind of education (including non-official institutions, for instance MOOC providers like Udemy or Coursera), or only officially recognised education institutions?

Link here



Use of automated risk classification to control fraud in grants considered illegal

The case

From 2012 to 2023 the Education Executive Agency (DUO in NL) used an algorithm with risk factors (risk profile) to monitor the abuse of non-resident grants.


The risk factors were:

- Type of education: secondary vocational education resulted in a higher risk score than university education

- Distance: a shorter distance between the student's home and the parents' home increased the score

- Age: a lower age of the student resulted in a higher score

These risk factors were used to distinguish between groups of students. The higher the factors, the higher the risk code.
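
To make the mechanics concrete, here is a toy reconstruction in Python. DUO's actual weights and thresholds were not published, so every value below is invented purely for illustration.

```python
# Toy reconstruction of a DUO-style risk score; the real weights and
# thresholds were never published, so all values here are invented.
from dataclasses import dataclass

@dataclass
class Student:
    education_type: str   # "secondary_vocational" or "university"
    distance_km: float    # distance between the student's and parents' homes
    age: int

def risk_score(s: Student) -> int:
    """Higher score -> higher risk code -> more likely selected for a home visit."""
    score = 0
    # Type of education: vocational education scored higher than university
    score += 2 if s.education_type == "secondary_vocational" else 0
    # Distance: a shorter distance between the homes increased the score
    score += 2 if s.distance_km < 10 else (1 if s.distance_km < 50 else 0)
    # Age: a lower age resulted in a higher score
    score += 2 if s.age < 21 else (1 if s.age < 25 else 0)
    return score

print(risk_score(Student("secondary_vocational", 5.0, 19)))  # 6 -> highest risk code
print(risk_score(Student("university", 120.0, 26)))          # 0 -> lowest risk code
```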

Factual consequences of the use of the algorithm: groups with a non-EU migration background were in general given a higher risk factor and code. As a result, they were selected more often for a home visit to check for abuse of the non-resident grant than groups of students with a Dutch background.

AP decision

- The risk factors were based on “experience and common sense”, but there was no objective justification for the selection. Making a distinction on the basis of these three criteria without objective justification (eg, statistics or research) amounted to discrimination

- Any processing that makes a distinction that cannot be objectively justified is discriminatory and therefore unlawful. It is therefore of great importance to prevent discriminatory processing as much as possible by ensuring that the selection rules in an algorithm are based on relevant statistical and validated research.

- Even if the criteria used in an algorithm do not give rise to the assumption of discrimination, the output of the algorithm may nevertheless show an overrepresentation of a group with a specific, sensitive characteristic, which means that the automated processing does make an unlawful distinction.

- Assessments required:

Before processing: DPIA on whether the selection rules or criteria to be used in an algorithm could lead to discriminatory processing.

During processing: Algorithms should also be regularly evaluated for their functioning and outcomes. This way, discriminatory, incorrect or unfair outcomes can be detected in time and the algorithm can be revised.
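
The "during processing" evaluation the AP calls for can be pictured with a simple output check. This is a minimal sketch, not anything from the decision: the groups, the data and the 0.8 flagging threshold (borrowed from the US "four-fifths" rule of thumb) are all assumptions.

```python
# Minimal post-deployment check: compare selection rates across groups to
# spot overrepresentation in an algorithm's output. Threshold is illustrative.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

records = [("A", True), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", False)]
rates = selection_rates(records)
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"group {group}: rate={rate:.2f} ratio={ratio:.2f} {flag}")
```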

From the information (an automatic translation of the full report) it is unclear to me whether this algorithm would fall under the definition of AI system under the AI Act. Clarifications, in particular from Dutch-speaking colleagues, are welcome.

Link here



AI GOVERNANCE

Building a Foundation for AI Success: A Leader’s Guide

This resource helps you inform your own AI strategy, whether you are just beginning to consider AI, are testing and deploying, or are well along the path.

The practices are organized into 5 categories, as shown below



1. BUSINESS STRATEGY

• Define and prioritize business objectives such as customer experience, productivity, revenue growth, employee experience, and other key goals.

• Determine how you will measure the value of those objectives.

• Identify and prioritize AI use cases that support your goals.

• Build a portfolio management plan to help guide your investments.

2. TECHNOLOGY STRATEGY

• Based on your top-priority use cases, determine whether to buy, modernize, or build applications.

• Assess whether you have the infrastructure for AI applications to access data securely, quickly, and at scale.

• Consider the scalability and performance implications of hosting data and AI applications on premises or in the cloud.

• Ensure your cloud infrastructure is built to run large AI workloads and deliver reliability at scale.

• Evaluate your organization’s Zero Trust security posture.

• Explore how to use AI for improving security, in terms of deploying and protecting organizational assets, developing and maintaining policies and procedures, and monitoring and responding to incidents or emerging threats.

3. AI STRATEGY & EXPERIENCE

• Familiarize yourself with generative AI use cases and how they might support your business needs.

• Develop a systematic process to consider AI for every use case.

• Assess the number of business units and processes, length of time in production, and age of deployments in your organization to reveal patterns that may point to opportunities or blockers.

• Build intelligent apps on your data to improve the intelligence and relevance of model outputs.

• Consider using Microsoft 365 Copilot, or build your own copilot to accelerate learning and time to value.

4. ORGANIZATION & CULTURE

• Define your operating model for AI.

• Secure—or develop a plan to secure—leadership support backed by resources.

• Develop strong relationships with a diverse range of subject-matter experts in the business.

• Strengthen your organization’s ability to manage change.

• Identify and put in place the right learning and skill-building paths.

• Approach AI as a sustainable capability within your organization and culture.

5. AI GOVERNANCE

• Review and share resources on responsible use of AI to identify the models and approaches that best suit your organization.

• Consider the enablement model that best fits your needs, such as hub-and-spoke, centralized, or distributed.

• Consider the principles of secure AI and how to ensure your data is protected end to end from platform to applications and users.

• Consider the processes, controls, and accountability mechanisms that may be required to govern the use of AI, and how AI may affect data privacy and security policies.

Link here


The role of Boards in AI governance

Deloitte surveyed nearly 500 board members and C-suite executives to understand how involved boards have been in AI governance.

Findings

1. In many organizations, AI is rarely discussed at board level: 45% of boards have not included AI governance on the agenda


2. Almost half of respondents would like their boards to devote more time to AI oversight: 50% are not satisfied with the amount of time the board spends on AI topics


3. Nearly half of respondents say their organizations need to accelerate progress on AI implementation (and 40% have not started yet)


4. Nearly 80% of respondents say their Boards have limited to no knowledge or experience with AI


5. Boards are exploring various avenues to enhance AI fluency: for instance, board members seeking to enhance their own AI skills, foundational education provided to the board, bringing in external specialists, etc


6. AI is being incorporated unevenly into organizations’ business and operating plans: 1/3 focus their efforts on certain areas, 1/3 are experimenting, and 1/3 have not incorporated AI at all


7. Productivity and efficiency enhancements, customer experience improvements, and developing new products or other innovations are the main strategic goals aligned with current AI adoption


8. Management at respondents’ organizations say advancements to tech, CX and core operations are top reasons to spend more on AI


9. Customers and employees are the most important stakeholders to consider


10. Respondents point to governance and ethical use, policy and strategy development, risk management and implementation as key tenets of AI board oversight


Actions Boards should consider

1. Put AI on the board agenda: consider periodicity, strategy and scenario planning, management oversight, risk appetite, regulatory scanning, and progress measurement

2. Define governance structure: delineating and assigning AI-related responsibilities

3. Enhance AI literacy at board and management level

Link here



Uses of AI by the Dutch Government

The Dutch Court of Auditors conducted a survey on how AI systems are used in the central government.

There are many points to highlight, but one important point for those working on AI governance is the difficulty of developing an AI risk management system, which starts with a crucial question: how should we classify AI systems from a risk perspective?

Admittedly, the AI Act classification is limited. It lists prohibited uses, high-risk AI systems (subject to stringent obligations, in particular for providers), and some transparency obligations for a very limited set of AI use cases.

Yet, these are only a very tiny fraction of the AI systems developed and used by organizations (according to the survey, only around 10-15% of the systems used by the Dutch government are high-risk AI systems, or HRAIS). Given that many HRAIS relate to typical governmental functions (law enforcement, border control, asylum, justice, etc), private companies will have an even lower share of HRAIS in their AI inventories (except for AI systems related to safety).
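
The inventory arithmetic is easy to picture. A minimal sketch, with an invented inventory and invented tier assignments (only the tier names come from the AI Act):

```python
# Tally an AI inventory by AI Act risk tier and compute the high-risk share.
# Systems and tier assignments are invented for illustration.
from collections import Counter

inventory = {
    "fingerprint matching":  "high",     # classified 3 different ways in the survey
    "CV pre-screening":      "high",
    "citizen chatbot":       "limited",
    "document translation":  "minimal",
    "anti-spam filter":      "minimal",
    "meeting transcription": "minimal",
    "search ranking":        "minimal",
    "OCR for archives":      "minimal",
}

tiers = Counter(inventory.values())
share_high = tiers["high"] / len(inventory)
print(tiers)                                 # Counter({'minimal': 5, 'high': 2, 'limited': 1})
print(f"high-risk share: {share_high:.0%}")  # 25% here; ~10-15% in the survey
```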

This might lead organizations to underestimate the risks arising from AI systems.

However, "AI systems classified as minimal or limited risk can still harbour risks such as privacy violations, weak information security or negative impact on citizens and businesses, e.g. unfair biases. Furthermore, AI systems with minimal or limited risk must comply with statutory and regulatory provisions such as the General Data Protection Regulation (GDPR)" -p31- among many other regulations, in particular for global organizations.

Also, "the uncertainty of the risk classification of AI systems is reflected in the reported AI systems and their risk levels: 3 organisations that use an AI system to compare fingerprints classify it in 3 different ways: minimal, limited and high" -p32-

Fortunately, many agencies reported that the classification is a work in progress and that they expect to reclassify some systems in the future.



Link here



Use of AI in Financial Services in the UK

The Bank of England and the FCA published a report on the use of AI in financial services in the UK



Use and adoption

• 75% are using AI and a further 10% plan to use AI in the next 3 years

• Large UK and international banks have around 40 and 50 use cases, respectively, but 50% of firms have fewer than 10 use cases

• Foundation models form 17% of all AI use cases. Legal and HR are the most frequent users

Third-party exposure

• 33% of all AI use cases are third-party implementations. HR, Risk/Compliance, Operations/IT, and Legal are the areas where third-party AI solutions are mostly implemented

• It is expected that third-party exposure will continue to increase as the complexity of models increases and outsourcing costs decrease.

• The top three third-party providers account for 73%, 44%, and 33% of all reported cloud, model, and data providers respectively.

Automated decision-making

• 55% of all AI use cases have some degree of ADM with 24% of those being semi-autonomous

• Only 2% of use cases have fully autonomous decision-making

Materiality

• Low materiality: 62% of all AI use cases (common in operations and IT); 71% of all foundation model use cases are low materiality

• High materiality: 16% (common in general insurance, risk and compliance, and retail banking)

Understanding of AI systems

• 46% have only ‘partial understanding’ and 34% have ‘complete understanding’ of the AI they use. In general, firms using third-party models often lack complete understanding, while those developing models internally have a higher degree of understanding

• Gradient boosting is the most popular model

Benefits and risks of AI

• Benefits: data and analytical insights, AML and combating fraud, and cybersecurity.

• Most benefited areas: operational efficiency, productivity, and cost base.

• Risks: privacy, data quality, security, and bias and data representativeness.

• Expected risks: third-party dependencies, model complexity, and embedded or ‘hidden’ models.

• Cybersecurity is the highest perceived systemic risk

Governance and accountability

• 84% reported having an accountable person for their AI framework.

• Firms use a combination of different governance frameworks, controls and/or processes specific to AI use cases – over half of firms reported having nine or more such governance components.

• While 72% of firms said that their executive leadership were accountable for AI use cases, accountability is often split with most firms reporting three or more accountable persons or bodies.

Link here



AI Management Essentials (UK gov)

The UK government launched a consultation on the AI Management Essentials (AIME), aimed at giving organisations clarity about the practical measures needed to govern AI systems.


What is the AIME?

The AIME is a self-assessment tool designed to help businesses establish robust management practices for the development and use of AI systems

What’s it for?

The AIME's main goal is to provide organisations with a tool and a process to evaluate the organisational processes implemented to enable the responsible development and use of AI systems

Target users:

- SMEs and start-ups

- individual business divisions, business units, operational departments, etc of large organisations

What’s it not for?

The AIME is not intended to be used for the evaluation of individual AI systems (eg, as an impact assessment tool)

What I like

- it is concrete and easy to use, which facilitates adoption by the target organizations

What could be considered

- conciseness comes at a cost: sometimes it is difficult for companies to prioritize what’s important and what is not. Eg, I assume that most of the target organizations have not mapped their AI systems. Stating that you need to map “all” AI systems puts in the same risk hierarchy AI systems that have completely different risk levels (eg an anti-spam filter vs an applicant tracking system (ATS))

- I expect organizations, in particular the target orgs, may find it challenging to distinguish between the “impact assessment” in section 4 and the “risk assessment” in section 5

- the section on “data protection” could be improved by including a reference to other principles; most of the questions relate to security

- I would include questions about third-party due diligence. The few questions included relate to the use of data for training. I’d also include questions on contractual clauses suggested for inclusion, and facilitate a template for them

Link here



Historical Analogues That Can Inform AI Governance

The author explains that the need to implement guardrails around new technologies, to leverage their benefits while limiting their risks, is not new.



Several technologies went through the same process. He illustrates how this process unfolded for four technologies, and what lessons can be learnt from it.

The technologies used as analogies are:

• Nuclear Technology

• The Internet

• Encryption Products

• Genetic Engineering

Link here



Large Language Models explained briefly (3Blue1Brown)

LLMs explained in 8 minutes. Excellent.

3Blue1Brown also produced longer videos explaining more in-depth how LLMs work

https://www.youtube.com/watch?v=LPZh9BOjkQs&t=3s

EVENTS

IAPP Data Protection Congress 2024

I had the privilege to attend the IAPP DPC 2024.


The most important part of the event is the possibility to meet and talk with friends and colleagues, many of them in person for the first time. This is by far the most enriching part of the event.

On the content: AI took a prominent role. I think that around 30-40% of the sessions related to AI, directly or indirectly (the number may be skewed because of my personal interest in these sessions).

I attended mostly sessions related to AI. Some of them were great to understand where very large organizations are heading in terms of AI governance.

Three not-to-miss sessions (in no particular order)

AI Governance Alignment: Dr Philipp Raether (Allianz), Nubiaa Shabaka (Adobe), Wouter-Bas van der Vegt (Randstad). General measures implemented across these companies:

- AI governance bodies reporting to C-level on risks

- AI Trust Officer, sitting in a control function (legal-governance) acting as an orchestrator of different stakeholders

- AI Working Groups with tactical role, supporting strategy of Ethics Board

- Emphasis on the cross-functionality

- Insertion of IP indemnification clauses in contracts

- Classification and risk scoring of AI systems (red, amber, green) to determine different tracks and treatments of potential risks, where the proponent of the lowest-risk (green) AI systems ‘self-certifies’ risks and measures, while high-risk (red) systems must follow a full AI impact assessment (see the sketch after this list)

- Getting buy-in from management: “speak their language”, showing benefits and KPIs
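
For illustration, a minimal sketch of that red/amber/green triage in Python; the track descriptions are my assumptions, and only the routing idea comes from the session.

```python
# Route an AI system to an assessment track based on its risk rating.
# Track descriptions are invented; only the red/amber/green idea is from the talk.
def triage(rating: str) -> str:
    tracks = {
        "green": "self-certification: proponent records risks and measures",
        "amber": "lightweight review by the AI working group",
        "red":   "full AI impact assessment before deployment",
    }
    try:
        return tracks[rating.lower()]
    except KeyError:
        raise ValueError(f"unknown rating: {rating!r}")

print(triage("green"))
print(triage("Red"))
```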

Responsible AI by Design

Henri Kujala (Vodafone)

- Using AI to check whether AI requirements are met

- Setting strong AI governance controls enables wider AI adoption and scaling

- Need to partner with procurement functions to identify AI usage (AI added to current services)

- Include contractual clauses for AI, and evaluate 3rd and 4th parties (subprocessors)

- The problem of shadow AI

AIA in action: A(I) accountability

Jasmien César (Mastercard)

- Opportunities to leverage from privacy

- Articulate the values and principles, expanding organizational ethical privacy principles

- Translate principles into actionable practices: an AI governance policy built on privacy principles

- Mapping: AI inventory

- Impact assessments, expanding the privacy intake questionnaire

- AI literacy

Looking forward to the DPC2025!




Unsubscription

You can unsubscribe from this newsletter at any time. Follow this link to know how to do it.


ABOUT ME

I'm working as AI Governance Manager at Informa.

Previously, I worked as a senior privacy and AI governance consultant at White Label Consultancy, and before that for other data protection consulting companies.

I'm specialised in the legal and privacy challenges that AI poses to the rights of data subjects and how companies can comply with data protection regulations and use AI systems responsibly. This is also the topic of my PhD thesis.

I have an LL.M. (University of Manchester) and a PhD (Bocconi University, Milan).

I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation“ and "Privacy and AI". You can find the books here


