AI, the “invisible hand” on our decision making process
Photo credit - Geralt on Pixabay

A few days ago, I heard SoftBank CEO Masayoshi Son state on CNBC that people should brace themselves for the proliferation of artificial intelligence, as it will change the way we live within three decades. He is absolutely right, but do we really measure the impact it will have on humanity? Stephen Hawking, Tim Berners-Lee and even Elon Musk have explicitly voiced concerns. As early as the 1950s, the AI pioneer Alan Turing, reflecting on his early chess program, anticipated that machines would “take control.”

Financial institutions, like many industries, are experiencing a rapid transfer of human output to robot output. The sector is digitizing because we are seeking low friction and immediacy. Anything that can be automated will be automated. Amid the hype and unavoidable buzz, some voices claim that humans will inevitably be replaced by our AI-enabled robot overlords in a “Skynet takeover” [1] scenario.

I believe we are reaching the tipping point where data analytics and machine learning embedded in applications will replace reports, dashboards and other people-oriented output as the primary consumers of data. Rather than simply surfacing data for people to examine and use in their decisions, software will be empowered to act on data for us, whether machine-to-machine or machine-to-consumer. [2]

Coming on little cat feet, AI has grown ubiquitous in financial services. Computers are already proficient at picking stocks, managing assets, identifying customer churn, giving clients insight into their income and expenditure, assessing credit and reading documents through OCR, and their reach has begun to extend beyond computation and taxonomy. This will have deep implications not only for technology but for the fundamental nature of how people make decisions.

But do computational systems state the truth?

AI is presented as an authority entitled to diagnose reality more reliably than we could ourselves, and to reveal dimensions hidden from our consciousness. A large part of algorithm science follows an anthropological path, borrowing the human skill of assessing a situation and drawing conclusions.

AI is good at solving specific tasks, but it has no sense or awareness of self. Consciousness is not going to emerge from a system that is narrow in its predictions. Humans have the ability to construct counterfactuals, to imagine any kind of “what if” scenario; we are able to think outside the box. AI can generalize by absorbing lots of data and, through transfer learning, could create things never seen before. Deep learning is merely pattern matching, correlations captured by neural networks. “But what about real intelligence?”, as my wise friend Nicolas Lebard, who inspired this article, put it.
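To make the pattern-matching point concrete, here is a minimal transfer-learning sketch in Python (assuming PyTorch and torchvision are installed; the two-class document task is purely hypothetical). The network reuses correlations it has already learned on one dataset as the starting point for another; nothing in it resembles counterfactual reasoning.

```python
# Minimal sketch: learned patterns reused across tasks via transfer learning.
# Assumes torch and torchvision are installed; the 2-class task is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Load a network whose weights already encode correlations learned on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers: their learned patterns are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Swap the final layer for a new, hypothetical two-class task
# (e.g. "suspicious document" vs "legitimate document").
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head is trained; everything upstream is reused pattern matching.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```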

One may argue that AI is more a technical principle than an innovation: a robotic analysis of diverse inputs, a real-time equation, generally used to trigger action, whether through human acts or autonomously by the systems themselves. Are human intelligence and machine intelligence the same? Is the human brain essentially a computer? Contrary to the Dartmouth proposal that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”, I am not convinced a machine will ever have a mind, mental states and consciousness in the same way a human being does. To be a bit provocative, and Cartesian like Larry Tesler, the primary inventor of copy/paste: human intelligence “is whatever machines haven’t done yet.” The scientific approach to this debate depends on the definitions of “intelligence” and “consciousness”, and on exactly which “machines” we are referring to.

For more than a century, IT has enabled data storage and management. With the digital era there is a clear change: less informing, more orienting of human action. Digital technology dictates the tempo of our lives. We see emerging “an automated invisible hand”, a world sorted through a feedback regime, a data-driven society. The so-called “AI Spring” is now in full bloom, ignited by the migration of people’s data into the digital universe. These aletheia mechanisms (after the Greek word for truth), relentlessly growing more complex, are steering to enforce their law, influencing human affairs at different levels: incentivizing, prescriptive, coercive. [3]

A number of AI scientists make assertions about veracious intuition and values, and they usually seem to assume there is a “truth” or “right answer” to every debate. That is not the case once we engage ethical considerations. Social scientists could help mitigate bias, enhance fairness and bolster accountability, with the intent of strengthening ethics. A credible and effective governance structure should be set up as a framework. Failing to do so allows AI developers to push innovation inside a black box, with potentially severe consequences for humanity.

It is important to identify those biases at the outset, because if we do not get it right at the foundation, you can extrapolate and see the catastrophic mess it could beget. There is so much to cheer about AI technology that a balanced way of looking at it is to celebrate what is special about it while remaining mindful of the shortcomings. The two elephants in the room are privacy and explainability.

Privacy should be the bone-chilling concern. Keen as we are to enjoy AI’s benefits, we have to overcome the stumbling blocks linked to it. Promising techniques such as deep learning only work when you can use all of the data and make no presumptions about what data is relevant, because the algorithm can actually surface the relevant variables and the covariance that matters. We need to define some sort of backstop to preserve individual sanctity. I have not yet made up my mind whether privacy rules such as GDPR in Europe or the PDPA in Singapore effectively impede the use of data for beneficial AI purposes.
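A minimal sketch of that point, using scikit-learn on synthetic data (a stand-in, not deep learning itself; all names are illustrative): feed the model every column without presuming relevance, and let it surface which variables actually matter.

```python
# Sketch: give the model every variable with no presumption of relevance,
# and let it surface which ones matter. Synthetic data via scikit-learn;
# only 3 of the 20 features are actually informative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=3, n_redundant=0, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The learned importances concentrate on the genuinely relevant columns,
# found by the algorithm rather than declared up front.
ranked = np.argsort(model.feature_importances_)[::-1]
print("Most relevant features:", ranked[:5])
```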

Regarding explainability, how do we understand a decision made by deep learning? These systems lack transparency. They can quickly lose humans in the complexity inherent to the algorithm, and many of them carry an imprint of the unconscious biases of the scientists who helped develop them. So, do we accept a lower-performance or less-than-full AI so that we can have explainability, or do we take high-performance AI with limited explainability? My bet is that we will predominantly choose high performance, but I do not see that choice as sustainable in the long run, since there is a genuine risk of losing control. It reminds me of my years when we were outsourcing IT and processes at full blast, and unintentionally the knowledge with them: the day you need to amend a process for effectiveness, policy or regulation, you are in trouble. Likewise, with the introduction of AI you run the risk of not being able to explain outcomes (relying solely on interpretability), and you will struggle to amend processes accordingly. If a financial institution starts to leverage AI to detect sophisticated financial cyber-crime, for instance by detecting anomalies in real time and reducing false positives, it needs to be able to explain how the filtering is done and how the hits are managed. So you do need causal inference and cannot just rely on correlations.
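As a rough sketch of that trade-off (hypothetical transaction features, scikit-learn; this gives interpretability via a surrogate, not causal inference): a black-box anomaly detector produces the hits, and a shallow decision tree is fitted to approximate and articulate its filtering rules.

```python
# Sketch of the performance-vs-explainability trade-off: an opaque anomaly
# detector (as in transaction monitoring) approximated by an interpretable
# surrogate. Features and names are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical transaction features: amount, hour of day, country risk score.
X = rng.normal(size=(5000, 3))

# The black-box detector flags anomalies: these are the "hits".
detector = IsolationForest(random_state=0).fit(X)
hits = (detector.predict(X) == -1).astype(int)

# A shallow decision tree mimics the detector so the filtering can at least
# be articulated -- interpretability only, not causal inference.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, hits)
print(export_text(surrogate,
                  feature_names=["amount", "hour", "country_risk"]))
```

The surrogate only describes what the detector does on this data; it does not establish why a flagged transaction is truly suspicious, which is exactly the gap between correlation and causal inference noted above.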

AI can only bring value if we make sure it serves a wider purpose for our clients (i.e., providing a seamless, affordable and continuous experience and service that empowers them) and for our own people (i.e., supporting the development of human intelligence and vision). Though there are a number of things that robots will do better than humans thanks to AI, we should stop opposing humans and artificial intelligence. With ethics and social responsibility, the relationship between humans and robots should be seen as synergistic.



[1] The Terminator, directed by James Cameron, 1984.

[2] Karthik Ramasamy, “2019 Data Predictions: Demise Of Big Data And Rise Of Intelligent Apps”, Forbes, February 22, 2019.

[3] Eric Sadin, L'intelligence artificielle ou l'enjeu du siècle : Anatomie d’un antihumanisme radical, 2018.

[4] “Superior Intelligence”, The New Yorker, May 14, 2018.


Muneeb Sikander

Economist | Management/Strategy Consultant | Startup Mentor

4y

People have forgotten what mathematical reasoning is and are now persistently chasing data, analytics or analytical reasoning instead. Alan Turing said: “Mathematical reasoning may be regarded rather schematically as the exercise of a combination of two facilities, which we may call intuition and ingenuity.” Companies need to promote unorthodoxy and divergence from legacy belief systems if they hope to build more competitive strategies where real competitive advantage exists. Firms spend the majority of their resources applying ML to known or simple problems. Avoiding irrelevance requires a convergence of in-depth technical understanding, systems thinking and embedded intellectual value creation, a rare thing among the managerial elite in legacy organizations. Chandler

Arnaud Berrier

CBTW Partner | Sales leader Europe, Software Engineering

6y

Today, AI and human thinking are compatible; a good combination is the key to success. Each has a complementary mission: out-of-the-box thinking mixed with process automation.

Marie-Agathe de Place

Strategy and Project Management

6y

Nice article Guilhem! Yes, AI will change our daily lives in the next decade, but we are still far from a general AI, and even further from singularity. Rather than fearing this hypothetical future, we should address the current and growing challenges, and you named them: ethics and bias, data privacy, and transparency. Deep neural networks are black boxes by nature, and this is a completely new situation we need to deal with. However, I agree that AI will only take the space that humans leave to it. AI is an efficient support for making decisions, great! But let’s not give up our ability to take the final decision, let’s not transfer our accountability. Accountability and moral sense: that’s the gap AI will never close.

Prashant Verma

Co-Founder | Industry 4.0 | AI & IIoT | Product

6y

AI is like a child, learning from its parents, the humans. It is our responsibility to provide it with an ethical environment, to teach it what is wrong and what is right. If an AI app is charged with some crime, its parents, the owners, should be punished. Great power should come with great responsibility now.
