It's More Complicated Than It Looks

The LLM marketing scheme is the most brilliant bit of marketing I've ever seen. It's fear based. It creates doubt. It focuses attention on uncertainty. Fear, Uncertainty, and Doubt (FUD) is the tried-and-true Silicon Valley marketing strategy. This is its ultimate form.

It's simple:

  • For worker bees: "You are going to lose your job unless you learn how to use AI. That will be $20/month, please."
  • For CEOs and owners: "You are going to lose your company unless you learn how to deploy AI. That will be $20,000/month, please."

And it's all focused on an AI parlor trick: autocomplete on steroids.

Large Language Models (like ChatGPT, Claude, Grok, and Perplexity) are a subset of one family of AI techniques. AI has many branches, and each can be classified as either deterministic or probabilistic. Deterministic tools always return the same answer for the same input. Probabilistic models may return a different answer each time.

The difference between the two is the degree to which they can eliminate uncertainty.
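To make the distinction concrete, here is a minimal sketch (illustrative only, not drawn from any particular product): a deterministic function that always returns the same answer for the same input, next to a probabilistic one that samples its answer from a weighted distribution.

```python
import random

def deterministic_add(a, b):
    # Deterministic tool: the same inputs always yield the same output.
    return a + b

def probabilistic_answer(options, weights, rng=None):
    # Probabilistic model: the same input can yield different outputs,
    # sampled according to a probability distribution over candidates.
    rng = rng or random.Random()
    return rng.choices(options, weights=weights, k=1)[0]

# deterministic_add(2, 3) is 5 every time it is called.
# probabilistic_answer(["Paris", "paris"], [0.7, 0.3]) may differ run to run.
```

Eliminating uncertainty in the probabilistic case means shifting all the weight onto one option; short of that, some variability always remains.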

One way of thinking about LLMs is that they are an interface layer for processes that use the other forms of AI. One of the hallmarks of an LLM is that there will always be some uncertainty about the quality of its answer.
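That interface-layer idea can be sketched as follows. This is a toy dispatcher, not any real API: the function and parameter names are hypothetical, and the routing step stands in for the (probabilistic) tool-selection a real LLM would perform before handing off to a deterministic back-end calculation.

```python
def calculate_tax(income: float, rate: float) -> float:
    # Deterministic back-end tool: same inputs, same answer, every time.
    return round(income * rate, 2)

def llm_route(request: str):
    # Stand-in for an LLM's tool-selection step. In a real system the model
    # probabilistically chooses which tool to call and with what arguments;
    # here a keyword check stands in for that choice.
    if "tax" in request.lower():
        return calculate_tax(50000.0, 0.22)
    return None  # the uncertainty: the model may fail to pick the right tool

print(llm_route("What tax do I owe?"))  # 11000.0
```

The answer's arithmetic is exact once the tool runs; the residual uncertainty lives entirely in whether the language layer routed the request correctly.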

Here are the major branches of AI, ordered from most probabilistic to most deterministic.

  • Reinforcement Learning: Highly stochastic as agents explore random actions and learn through trial-and-error in uncertain environments with delayed rewards. The system learns optimal behavior by receiving rewards or penalties for actions taken in an environment, gradually improving its decision-making through experience.
  • Deep Learning: Neural networks with random weight initialization, stochastic gradient descent, and techniques like dropout create inherent randomness in training and sometimes inference. It uses multi-layered artificial neural networks to automatically learn complex patterns and representations from large amounts of data.
  • Machine Learning (General): Particularly unsupervised learning methods that discover hidden patterns, plus the inherent randomness in many ML algorithms and data sampling. Systems automatically improve their performance on specific tasks through experience by finding patterns in data without being explicitly programmed for every scenario.
  • Natural Language Processing (NLP): Deals with the inherent ambiguity and context-dependency of human language, often using probabilistic models. It enables computers to understand, interpret, and generate human language by analyzing text structure, meaning, and context. Large Language Models are a type of NLP.
  • Computer Vision: Processes noisy, variable real-world visual data with lighting changes, occlusions, and environmental factors creating uncertainty. It enables machines to interpret and understand visual information from images and videos, identifying objects, faces, scenes, and activities.
  • Multi-agent Systems: Multiple AI agents interacting can produce emergent and unpredictable collective behaviors even when individual agents are deterministic. It studies how multiple autonomous AI systems can coordinate, communicate, and collaborate to solve problems that are beyond individual agent capabilities.
  • Robotics: Operates in uncertain physical environments with sensor noise, mechanical variations, and unpredictable external factors. It combines AI with mechanical engineering to create physical machines that can perceive their environment and perform tasks in the real world.
  • Planning and Search: While some algorithms are deterministic, many use heuristics, approximations, and deal with incomplete information about problem states. These algorithms systematically explore possible sequences of actions to find optimal solutions to complex problems with defined goals.
  • Knowledge Representation and Reasoning: Can involve probabilistic reasoning and uncertainty handling, though formal logic components are deterministic. It focuses on how to structure and store information about the world so computers can use logical rules to derive new conclusions and solve problems.
  • Expert Systems: Primarily rule-based with deterministic logic, though may incorporate uncertainty factors and probabilistic reasoning. They capture the knowledge and decision-making processes of human experts in specific domains using if-then rules to solve problems and provide recommendations.
  • Robotic Process Automation (RPA): Follows strictly defined rules and workflows to automate repetitive tasks, operating deterministically according to programmed procedures. It uses software robots to mimic human actions in digital systems, automatically performing routine tasks like data entry, form processing, and system integration.


