Filip Maertens
Founder, VP Business Development at Faction XYZ
Data Innovation Summit
March 30, 2017
#DIS2017
Can A.I. help us build a better world?
“Terminator vs. Idiocracy?”, or our indulgence in spotting the
threats when dealing with A.I.
When discussing the threats of A.I., we naively portray the advent of an
AGI as the precursor to the doom of the human race. Economically, we
envision mass unemployment and a new divide between the haves and the
have-nots. We worry about how Google invades our homes, yet our
complacency prevents us from acting on it. However, these threats are all
valid, and A.I. researchers have a moral duty.
1
New tech and approaches
HACKERS & CRIMINALS
A.I. can be weaponized as a hacking tool: vulnerability detection tools
can equally be used for uncontrolled mass surveillance, lowering the
cost of hacking and continuously detecting zero-day exploits.
ROGUE GOVERNMENTS
Profiling millions of online users and targeting them with
personalized content can sway public opinion, spread hate,
and overthrow governments. Influence systems are a weaponized
form of A.I.
Endangering democracies
2
We can create a better world!
A.I. researchers should be driven by curiosity, ethics, and morality, not by law, gain, or politics.
3
Adhere to a strong moral
code of conduct.
In a field of research where we have the ability to impact billions of
people, we have the duty to adhere to a strong moral code of conduct. We
put morals above law. The Ethics Advisory Panel (EAP) could be a good
beginning, but it needs wider adoption worldwide, much like the
Hippocratic oath for medical doctors. How will machines know what we
value if we don’t know ourselves?
4
Reducing bias from training
data
While great focus is put on the results of new learning algorithms,
computationally more efficient techniques, and more, we grossly overlook
the layman’s principle of machine learning: shit in, shit out. We need to
be vigilant about bias in training data in order to prevent racist,
sexist, or otherwise discriminatory classifications and predictions.
5
Embrace privacy & data
protection as an
opportunity to do good
While the GDPR will admittedly continue to cause concern and friction
within the A.I. community, we cannot dismiss it as a political invention
that aims to constrain our profession. Consider it a security layer
around our expertise. While we have a duty to challenge the law, we also
need to adhere to best practices such as data minimization and the right
to explanation (see the data-minimization sketch below).
6
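
The data-minimization practice mentioned above can be made concrete with a small sketch. This is a hedged illustration rather than a prescribed implementation: it assumes pandas, and the dataset and column names (customer_id, email, age, churned) are hypothetical. The idea is simply to keep only the fields a model actually needs and to pseudonymize the direct identifier before the data enters a training pipeline.

```python
# Minimal data-minimization sketch (assumes pandas; column names are hypothetical).
import hashlib
import pandas as pd

raw = pd.DataFrame({
    "customer_id": ["c-001", "c-002"],
    "email": ["a@example.com", "b@example.com"],   # not needed by the model
    "age": [34, 51],
    "churned": [0, 1],
})

NEEDED = ["customer_id", "age", "churned"]          # keep only what is needed
minimized = raw[NEEDED].copy()

# Replace the direct identifier with a salted hash so records can still be
# linked internally without exposing the raw ID.
SALT = "rotate-me-regularly"
minimized["customer_id"] = minimized["customer_id"].map(
    lambda v: hashlib.sha256((SALT + v).encode()).hexdigest()[:16]
)
print(minimized)
```

Note that pseudonymization with a salted hash is not anonymization under the GDPR; it merely limits exposure if the training data leaks.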
Embedding morality into
algorithms
Just as OpenAI has committed to programming morality into its algorithms,
morality systems should be an intrinsic point of discussion in any A.I.
debate. When dealing with autonomous decision-making systems fueled
by A.I., treating morality systems or security as an afterthought can
trigger another A.I. winter.
7
Finally. We need to
cultivate ourselves.
If algorithms learn from humans, then we’re about to give birth to the
first tax-avoiding, chain-smoking, wife-beating, cussing badass chatbot
we’ve ever seen. Oh, wait… As humans, we live in an age where
everything is recorded and in the open, and everything can be used as a
training set. Yet we still behave like brutes in our online lives. So, did
we really expect anything else from Tay?
8
9
But the opportunities to do good are everywhere!

Editor's Notes

  • #2: When we talk about Artificial Intelligence, or A.I., it seems as if we are witnessing a Cambrian explosion, as the online press continues to present A.I. as the silver bullet in the chamber to slay many, if not all, of the problems we face in our society today. The reality, however, is still far from the expectations set by Hollywood and thought leaders. While important stepping stones in the general evolution of machine learning, solving complex games is a completely different discipline from solving real-life problems such as poverty or environmental issues. I would like to use the next 15 minutes to address some of the challenges ahead in using A.I. to build a better world.
  • #3: One popular ideology holds that A.I. could very well be our final invention and might pose an existential threat. I personally don’t believe an embodied A.I., such as a Terminator, will one day knock down my door. I rather fear how algorithms will gradually and invisibly influence our lives until we evolve into a docile and complacent species, as portrayed in the movie Idiocracy, where critical thinking has been outsourced to machines. So before we stop worrying and love the A.I., let’s acknowledge some of these threats.
  • #4: Asking about the future threats of A.I. requires deep thinking about weaponized A.I. But in all fairness, we are witnessing this already. We are at the center of a perfect storm of power-hungry journalists and social networks that function as echo chambers for billions of people. Combined, they create the perfect theatre for psychological operations and information warfare. Profiled through Like buttons and billions of sensors, millions of online users are offered “personalized” news that in turn influences and amplifies the beliefs of others as their own. In an age where advertising agencies and intelligence agencies share the same interests and technologies, we see a dangerous shift of power towards those who deliberately turn profiling, targeting, or recommender systems into weapons of mass influence. We can therefore consider A.I. a dual-use good, and thus subject to export regulations such as the Wassenaar Arrangement; however, this is widely misunderstood, and A.I. as such remains ungoverned. Pressing the fast-forward button, it is not an act of clairvoyance to envision new modi operandi emerging, in which a whole new generation of criminals adds the learnings of A.I. to their growing arsenal of cyber weapons. After all, winning a game of Space Invaders and winning a race condition in software in order to exploit it are, from the viewpoint of reinforcement learning, very similar challenges.
  • #5: But without Evil, there cannot be Good. As A.I. researchers we are primarily driven by curiosity, but we should equally embrace ethics and morality as cornerstone values of our profession. It is our duty to denounce law or politics in this discussion, as they do not guide our moral compass. Law is about compliance enforcement, which is culturally and geographically bound. Law does not judge what is good and what is evil; you can sometimes do things that are entirely legal yet highly unethical. What’s more, A.I. is a global technology, capable of addressing global issues, and does not let itself be bound to one nation, race, or belief. Let’s not be mistaken: choosing ethics and morality may one day very well put us in a position of civil disobedience, in ways far more extreme than what we see happening with, for example, cryptography. You, the people currently working in the field of A.I., carry a far greater burden and responsibility when it comes to creating a better world of tomorrow!
  • #6: Action number one – Subscribing to a code of conduct. In many industries, following a code of conduct is a normal thing; the most well-known example is the Hippocratic oath taken by medical doctors. Ensuring that the impact of A.I. technology is positive doesn’t happen by default. But apart from the Ethics Advisory Panel, the Partnership on AI, and an early announcement by OpenAI, little else seems to be happening in the industry around ethical governance. Perhaps it’s a less interesting topic for journalists and bloggers to cover, or perhaps we’re simply too absorbed in shipping software. As such, I strongly support an industry-wide body of ethics that all A.I. researchers should subscribe to. Put simply, we need to take the standards by which artificial intelligences will operate just as seriously as those that govern how our political systems operate and how our children are educated. But let one thing be clear: a global push for a code of conduct will introduce the next step in the maturity of our industry. Without it, we allow future evil to take root in our work of today.
  • #7: And this already starts at the very beginning of your machine learning pipeline: the phase where you gather and clean training data and prepare test data. So, action number two – Ensure training data is free of bias. I think we can agree that machine learning follows the principle of monkey see, monkey do, so we must make sure we feed in the right data before we worry about dimensionality reduction or feature calculations. Most of today’s labeled datasets contain large amounts of cultural bias, bias that in turn might lead to racist or sexist classifications and predictions. If we want to do good for the world, we are morally obliged to zoom in on this first step and remove as much bias as possible, lest our profiling or classification solutions turn into systems that predict who has an increased chance of exhibiting criminal behavior and gets flagged for proactive surveillance. Too dramatic? Take an honest look at the state of affairs in our world, and you be the judge.
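To make that vigilance concrete, here is a minimal pre-training audit sketch, not the speaker’s own method: it assumes pandas, and the column names ("gender", "label") are hypothetical placeholders for a sensitive attribute and a target. It measures how positive labels are distributed across groups and derives simple reweighting factors that most classifiers accept as sample weights.

```python
# Minimal sketch of a pre-training bias check (assumes pandas; column names
# such as "gender" and "label" are hypothetical placeholders).
import pandas as pd

df = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "m", "f", "m"],
    "label":  [1,   1,   0,   0,   0,   1,   1,   0],
})

# 1. Audit: compare positive-label rates across the sensitive attribute.
rates = df.groupby("gender")["label"].mean()
print("positive rate per group:\n", rates)

# 2. Mitigate (one simple option): reweight samples so every
#    (group, label) combination carries equal total weight.
counts = df.groupby(["gender", "label"]).size()
df["weight"] = df.apply(
    lambda row: 1.0 / counts[(row["gender"], row["label"])], axis=1
)
# These weights can then be passed as sample_weight to most classifiers.
```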
  • #8: Action number three – Adopt a model explanation system. While I earlier called to denounce law and politics in the debate on morality, the European General Data Protection Regulation, or GDPR, takes a bold step forward to ensure companies don’t run amok with our data, and provides comfort that our human rights enjoy an extra layer of protection in this digital age of machine learning. The GDPR not only covers data minimization but, more importantly, forces companies to explain their predictions. As explainability is not a property of a model, it forces data scientists to deal with a new explainability-accuracy tradeoff. Here, we assume the paradigm where prediction accuracy is of paramount importance, but explanation is also important. Historically, this dilemma has led to two approaches: 1) the “interpretable” models approach, common in scientific discovery and bioinformatics (so-called white box), and 2) the accuracy-focused approach, common in computer vision with methods like deep learning, k-NN, and SVMs (so-called black box). This is a false tradeoff. We must separate the concerns of predictive power and explanation generation, and work towards a formal framework in which explanations can be generated for black-box classifiers without assuming anything about the internal workings of the classifier, much like you would post-rationalise someone’s moves after a game of chess. I would implore all of you to read Ryan Turner’s paper on this topic, as there is an important key here to embracing the GDPR.
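As one illustration of generating explanations for a black-box classifier without looking inside it, here is a minimal, model-agnostic sketch in the spirit of local surrogate explanations; it is not Turner’s Model Explanation System, just a toy example of the separation argued for above. It assumes numpy and scikit-learn, and the random forest merely stands in for any opaque model.

```python
# Local surrogate explanation sketch (assumes numpy and scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier   # stands in for any black box
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy data; in practice this is your real training set.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000, scale=0.3):
    """Fit a locally weighted linear surrogate around x; its coefficients
    act as a per-feature explanation of the black-box prediction."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    p = model.predict_proba(Z)[:, 1]                   # query the black box only
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)    # nearby samples weigh more
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

print("local feature weights:", explain_locally(black_box, X[0]))
```

The surrogate’s coefficients indicate which features pushed this particular prediction up or down, which is closer to the kind of post-hoc rationale a data subject could be offered.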
  • #9: Action number five – Building morality into algorithms. The rise of A.I. is forcing us to take abstract ethical dilemmas much more seriously, because we need to incorporate moral principles into algorithms. Should a self-driving car risk killing its passenger to save a pedestrian? To what extent should a drone consider the risk of collateral damage when killing a terrorist? There is simply no clear answer to these questions. Why do we have ethics and morals? What is their function? Let’s define an agent as a being who has beliefs and desires, and who chooses actions based on those beliefs and desires. Different agents often have incompatible desires, which leads to conflict. The function of ethics and morality is to resolve conflict among agents and to facilitate cooperation among them. One agent might be safely sitting in a self-driving car while another, pedestrian agent is crossing the street at the wrong time… So what happens then? A common theme in the A.I. community is to formulate a scientific approach to ethics and morality systems. An ethical system is an algorithm that an agent uses for making decisions in the context of other agents, when there is the potential for conflict or cooperation with those agents. Our ethical algorithms have biological and cultural components, which have evolved by biological and cultural evolution, and science can help us understand the evolutionary origins of our ethics. However, while OpenAI has announced it will put morality into its algorithms, little evidence or useful scientific research is available to work with at this moment.
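Purely as a toy illustration of the agent framing above, and not any published morality system, the sketch below ranks candidate actions by expected utility but lets an explicit ethical constraint veto actions first; every class name, constraint, and number in it is hypothetical.

```python
# Toy sketch: "morality as a constraint on action selection".
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    expected_utility: float
    harms_human: bool            # toy stand-in for a richer ethical model

def no_harm(action: Action) -> bool:
    # Hard constraint: never choose an action predicted to harm a human.
    return not action.harms_human

def choose(actions: List[Action],
           constraints: List[Callable[[Action], bool]]) -> Action:
    permitted = [a for a in actions if all(c(a) for c in constraints)]
    if not permitted:
        raise RuntimeError("no permissible action; defer to a human operator")
    return max(permitted, key=lambda a: a.expected_utility)

options = [
    Action("swerve_onto_sidewalk", expected_utility=0.9, harms_human=True),
    Action("brake_hard", expected_utility=0.6, harms_human=False),
]
print(choose(options, [no_harm]).name)   # -> brake_hard
```

The point of the toy is the separation of concerns: the utility ranking and the ethical constraints are distinct components, so the constraints are not an afterthought bolted onto the optimizer.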
  • #10: Quite often, artificial intelligence holds up a mirror to us. A.I. is as much about mathematics as it is about philosophy, psychology, and many other domains of science. Exploring machine learning often means exploring our own human nature, which is why I find this domain so wildly fascinating, and I think many of you do too. In many ways, training algorithms bears great similarities to raising and teaching children. Anyone who has cursed in front of a three-year-old knows it will haunt them for many years to come. So was it really that big a surprise when Microsoft’s Tay experiment turned racist within 24 hours of learning from Twitter feeds? It was both a hilarious moment and a sad one. With most data generated by humans, we should take care not to keep acting as primitive brutes, often uninhibited by a veil of anonymity, banging away and launching slurs of profanity on our keyboards. As pervasive as artificial intelligence is set to become in the near future, the responsibility rests with society as a whole, as the economic value of human traits such as empathy will increase while automation shifts the nature of society. If we want artificial intelligence to embody the proper values, we need to shape up. Because how will machines know what we value if we don’t know ourselves?
  • #11: But the opportunity to do good is everywhere. A steady stream of advances, mostly enabled by the latest machine-learning techniques, is indeed empowering computers to do ever more things, from recognizing the contents of images to holding short text or voice conversations. These advances seem destined to change the way computers are used in many industries, but it’s far from clear how the industry will go from recognizing cats to tackling poverty and climate change. Artificial intelligence will need a few more major breakthroughs before we reach levels of intelligence that start to show promise of helping us solve some of the world’s largest problems. Today, many people have the wrong idea about the current state of artificial intelligence, as they see one cool example and extrapolate from it onto other domains. Put simply, today our imagination far exceeds the practical possibilities. But isn’t that a good thing?
  • #12: Although we might shrug off the current state of artificial intelligence as merely hype, another heyday, another summer of A.I., we might benefit from cultivating this atmosphere of opportunity and imagination. Shouldn’t our industry continue to attract bright engineers and stimulate them to think beyond what’s possible today? What would happen if the brightest minds had the strongest dreams, wide-eyed, looking over the horizon, dreaming up a future that might sound crazy today but in fact fuels the innovation required to one day turn it into reality? Will A.I. be able to help doctors fight cancer? Will we be able to lace our natural brain with artificial intelligence? Will we find new ways to grow as humans when faced with an abundance of free time? And while artificial intelligence will in no way be a solution to our all-too-human stupidity, the promise of shaping a totally new future is not as crazy as it might seem. There are very few moments in history where we can play a pivotal role as a race. And so, if we believe that A.I. is indeed our final invention, then we have a strong moral and intellectual duty to ensure it is used for the greater good. This future, however, will be entirely in our hands…