What's Missing From AI Engines? Tacit Knowledge
LinkedIn has recently started posting AI-generated articles on topics like Product Development, Project Management and probably many others I'm not following. They then invite some of us to add "insights" from real people like me in an attempt to flesh them out.
I've been pretty critical of these articles. From the start, they struck me as being . . . off. The information seemed more than just formulaic. It seemed to have some major holes. I could see immediately that they were wrong.
I knew they were wrong but I didn't know why.
As a person with many successful product launches under my belt, I could tell that the advice in the articles was cringe-worthy, but I had a hard time explaining why.
I've been either running product development teams or coaching them since a few months after graduating from college. That's when my boss learned that I had worked my way through college at a construction company, managing small projects of my own by the time I was a senior. "OK! You're our new project manager!" But managing a complex software project was nothing like running a bathroom remodel.
At first, I believed in some of the practices the articles have been recommending — but that was only at first. As a person working on early Internet application development, I quickly ran into the limits of Project Management 101 in the face of all the uncertainty and complexity that's inherent to innovation.
I've spent the rest of my career trying to develop better ways of working, and then sharing what I've learned with others. It's not just that I disagree with most of the conventional wisdom about how to run projects in product development — I feel, viscerally, how WRONG most of it is. I've internalized my experiences to build a wealth of tacit knowledge.
AI Engines Lack Access to Tacit Knowledge
After reading more of these articles than I care to admit this week, it seems to me that the AI engine lacks tacit knowledge, and therefore doesn't know, at a deep level, whether something is right or wrong. It's only making predictions based on explicit knowledge.
Michael Polanyi first defined explicit and tacit knowledge in his 1966 book, The Tacit Dimension. His work was extended by Hirotaka Takeuchi and Ikujiro Nonaka in a number of books and articles that characterized the types of knowledge.
Tacit knowledge is all the knowledge that gets embedded in our human neural networks in ways that are difficult to put into words: the feel of properly-kneaded bread, the sound of a well-running diesel engine, the way a seasoned technician can tell that something just looks wrong, and the way an artist knows that something is finally just right.
Explicit knowledge is the knowledge that can be written down — or represented in some other way, such as a visual model. For bread, the explicit knowledge is in the recipe and kneading instructions, a video of the kneading process, a photo of the freshly-baked loaf cut in half to expose the crumb. It's also the data: the weight of the bread loaf, the density of the crumb, the gas retention, the water absorption.
Some researchers claim that as much as 80% of human knowledge is tacit knowledge, which must be either externalized or shared via a common lived experience.
It's Hard to Convert Tacit Knowledge to Explicit Knowledge
We can convert tacit knowledge to explicit knowledge by writing it down or expressing it in some other way — we externalize it. This might also mean trying to define measures to know when bread dough is ready, like elasticity or color. But it's not easy to do that, since tacit knowledge is hard to put into words.
We build tacit knowledge through lived experience, sometimes from applying explicit knowledge and then observing the results, a process called internalization. A master baker uses a standard recipe, but the motions of preparing the dough have become automatic.
Of course, we can easily share externalized knowledge through a variety of media. We can also share tacit knowledge directly through shared experiences: the master baker placing their hands over the hands of their apprentice, patiently adjusting the kneading motion until it's just right, or the father holding on to the back of his kid's bicycle and then letting go.
Tacit Knowledge Helps Us Manage Complexity
In 1995, Hirotaka Takeuchi and Ikujiro Nonaka wrote The Knowledge-Creating Company, which described the importance of tacit knowledge in performing complex activities like innovation and product development.
This has also long been recognized by quality programs, like Lean and Six Sigma. They talk about "going to the gemba" — the actual place — and observing what happens with one's own eyes rather than relying on secondhand information. They see the machine operators as the real experts on the production process, because the operators are the ones who execute the process day after day, building a wealth of tacit knowledge.
We often say that industry gurus have forgotten more about their fields than anyone else will ever learn. But that knowledge hasn't been forgotten — it's just gone unconscious.
Experts Have Unconscious Competence
Driving instructors have long known that drivers' ed students pass through stages of learning, from thinking driving is easy (unconscious incompetence) to realizing that it's hard (conscious incompetence) to driving OK as long as there are no distractions (conscious competence) to driving as most of us do, without thinking about it consciously much at all (unconscious competence).
Our minds automatically know how to adjust the steering wheel and when to turn on the turn signals without our conscious awareness. We've built up enough tacit knowledge to become unconsciously competent. We don't know how we know to do those things, and in fact, if we think about that too much, we can get stuck.
That master baker just knows a lot about bread: how much water to add to get the right consistency, how to knead, how to shape loaves, how to recognize when a loaf is ready for the oven. When they try to put their knowledge into words, it comes out like this: "The dough should be smooth and elastic, and slightly stick to a clean finger, but not be overly sticky."
Or the master baker could just have the apprentice knead bread until they say it's right, over and over, until the apprentice knows when it's right on their own without thinking. The master's tacit knowledge has become the apprentice's.
AI Engines Use Our Explicit Knowledge to Build Tacit Knowledge
I'm far from an expert on how artificial intelligence engines learn. I know they get "trained" on large datasets: text for ChatGPT and images for DALL-E 2. I know that in the process of this training, they recognize patterns in these large datasets that are hard for humans to see, and then use these patterns to produce text or images in response to prompts.
I don't need to know the details of how the training works to know that all that data, in whatever form, is explicit knowledge by definition.
It also seems to me, although I'm less sure, that the engine is essentially internalizing this explicit knowledge to create a form of artificial tacit knowledge. And this makes these engines seem like "black boxes" to us, because we can't see inside, just as the apprentice can't see inside the head of their master.
AI Engines Improve with Externalized Feedback
Our prompts then drive the engine to externalize that knowledge in the form of its text and images, and our explicit feedback helps the AI engine learn. But the AI engine can only see our feedback — not the visceral "NO" that comes from our tacit knowledge — and our externalization is often clumsy: "that hand looks . . . weird." "Hey, LinkedIn, these AI-written articles are just . . . wrong."
AI engines can overcome this limitation if the goal is clear enough, as it is in the games of Go and Chess. But when the evaluation of "rightness" is itself tacit — even visceral — it's hard to see how we can depend upon an AI engine to produce results better than a master's.
AI Engines Lack Access to Our Tacit Knowledge
The AI engine doesn't know why Vincent Van Gogh made the artistic choices he did, and Van Gogh couldn't have told anyone why. He just knew when a painting was right, and despaired when it wasn't. Still, an AI engine can train on a database of paintings and other images to produce an image of the Brooklyn Bridge in the style of Vincent Van Gogh.
But I'd be willing to bet that if Van Gogh had been alive to paint the Brooklyn Bridge, it wouldn't look anything like the AI-generated image. He'd take one look at that image and mutter, "DAT klopt niet!!" ("THAT's not right!!") under his breath, then paint something amazing, something surprising, pulled out of the parts of his artistry that the AI engines couldn't access.
Yes, but Van Gogh was a genius.
I write nonfiction about product development, mostly Agile Hardware Development. So you would think that the articles I write would be easy to replicate. An AI engine could read all of my books, articles, etc., and spit out an article in my style, probably getting most things right.
But the engine only has access to the knowledge I have externalized. And that represents only about 10-20% of the knowledge I actually have, at best. So chances are, I'd read it, mutter "Well, THAT'S not right!!" under my breath, and go on to write an article of my own.
And while I've written heaps about tacit knowledge, explicit knowledge and the importance of knowledge capitalization in product development over the years, no AI engine could have predicted that THIS article is the one I would write. Because neither could I.
Founder & CEO at Rapid Learning Cycles | Accelerated Technology & Product Development | Empowering Innovators to Shape the Future by Getting the Right Products Out Faster
Thanks for responding. I'd based this more on the extension of the theory from Nonaka and Takeuchi than on Polanyi's earlier work, so I'm using their definitions, not his. They researched how tacit knowledge worked in companies to produce superior results, partly by looking at the limitations of explicit knowledge in transferring skills on a manufacturing floor. There's a tension between having a standard process and all the white space in between the steps that has to be learned through experience. The more complex a task, the more white space there is, which is why those articles about product development, project management, consulting, etc. are so superficial. How could they be anything else? The white space isn't written down. Because a lot of what Takeuchi and Nonaka are talking about when they refer to tacit knowledge isn't tradition - it's lived experience, subjective evaluation, kinesthetic memory and the flashes of inspiration that come from putting things together in surprising new ways. An AI engine can surprise us by recognizing patterns we couldn't see, but everything must necessarily be derivative. I doubt AI will produce its own equivalent to Picasso or Mahler or Hemingway.
Engineering Manager
Thank you for your thinking. Polanyi defines tacit knowledge as knowledge of tradition. For him, the concept of personal knowledge is a combination of subjective experience and collective rules for action embedded in various traditions. Tradition is passed down through the concepts of language, so these implicit rules can, in principle and if necessary, become explicit. Nothing seems to prevent AI from 'learning' these rules through 'apprenticeship' . . . and becoming the 'master' in its turn. Although knowing how to perform a task is not the same as knowing how to explain how the task is performed. (Only in recent years has Polanyi's original concept of tacit knowledge been reinterpreted as something inexpressible.) Note that tacit knowledge is not merely a species of explicit knowledge: explicit knowledge plausibly entails belief, while tacit knowledge does not (see also the Gettier Problem).