Reasoning and reflex
I didn’t learn to drive a car until I was in my mid-twenties, and I found it difficult. It took me three attempts to pass my test, and a lot longer before I was a confident driver. But, as well as teaching me how to drive, the experience taught me the difference between reasoning and reflex: the difference between knowing what you should do, and doing it automatically due to muscle memory.
Later, I learnt the management lesson that, when we acquire new skills, especially those with a physical component, we pass through phases of unconscious incompetence (we don’t know what to do), conscious incompetence (we know what to do but we can’t do it), conscious competence (we know what to do, and we can do it when we think about it), and unconscious competence (we know what to do, and we do it automatically).
I think that this model of learning can help us think about AI and when to use it. One of the temptations with AI is to let it become a golden hammer: the answer to every problem. Need to write an email? AI. Need to plan a meeting? AI. Need to build some code? AI. Need to check eligibility for a mortgage? AI. And so on.
Since AI models can respond to natural language, and natural language is our medium of thought and discourse, it is hard to think of any problem that we could not describe in a form to which AI could give us a response. It can begin to seem that all our work could become the work of describing tasks in a form which we can feed into AI models.
And yet, I think we should remember the difference between reasoning and reflex.
Computing has been a discipline of compromise since its earliest days. We have had to choose between accuracy and speed, between ease and performance, between fidelity and storage. Every generation of architects, designers and operators has had to accept that they cannot have everything, everywhere, all at once – they have to make choices, and those are usually choices about resources.
Even though many AI models are hidden behind layers of abstraction, they still consume resources: power, money and time. Their inaccuracies also consume the resources required to detect, correct and repair them. Making a call to an AI model is significantly more expensive, in all these dimensions, than running some lines of standard procedural code.
That means that, as we learn to design and build systems which incorporate AI, we will need to figure out how to distinguish between the times when it is worth invoking a model and the times when it is best to just write some code. For example, in the world of banking, when processing a mortgage application, it is probably better to engage with a customer in natural language, using a model, than to force them to squeeze their needs and hopes into a structured form. But when calculating interest payments, it is better to simply perform the calculation using a traditional program than to ask a model to work out how to do arithmetic.
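To make the distinction concrete, here is a minimal sketch in Python of what that routing decision might look like. The `call_model` helper is a hypothetical stand-in for whichever model API you use, and the routing rule is deliberately simplistic:

```python
from decimal import Decimal, ROUND_HALF_UP

def monthly_interest(balance: Decimal, annual_rate: Decimal) -> Decimal:
    """Reflex: a deterministic calculation -- cheap, exact, no model needed."""
    return (balance * annual_rate / Decimal("12")).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP
    )

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call (deliberately stubbed)."""
    raise NotImplementedError("wire this up to your model provider of choice")

def handle(request: dict) -> str:
    """Route each request to reflex (plain code) or reasoning (a model)."""
    if request["task"] == "interest":
        # A fixed, well-understood calculation: never worth a model call.
        return str(monthly_interest(request["balance"], request["rate"]))
    # Open-ended natural language -- needs, hopes, edge cases -- justifies
    # the cost of invoking a model.
    return call_model(f"You are a mortgage adviser. Reply to: {request['text']}")
```

The point is not the code itself but the shape of the decision: the deterministic calculation never needs a model, while the open-ended conversation is worth the cost of one.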
This optimisation won’t just apply to the use of AI models: we will also have to decide where human reasoning adds the most value across the system lifecycle. Just as calling an AI model is more expensive than calling some procedural code, human thought is more expensive – and more valuable – than automated processing. It will become part of our jobs to determine where it is best applied.
Whenever we invoke a model, whenever we ask a human to do some thinking, we are choosing to incur the cost and get the benefits of reasoning – or, at least, something that looks like reasoning from the outside. Sometimes this is the right thing to do: when we are solving problems, or when we need to connect with another person. But sometimes it is better to rely on reflex – physical or mental muscle memory which we have learnt, internalised or made automatic – or classical procedural instructions encoded into binary and executed at the speed of electrical impulses.
We are still learning how to use AI. Part of the journey will be deciding between conscious competence and unconscious competence, and choosing the techniques we use to achieve each – all while attempting to consign every form of incompetence to the development and testing cycle.
(Views in this article are my own.)