AI terms beyond the buzzwords

By Ian Thorp 

We’re a couple of years into the mainstream adoption of Artificial Intelligence. The speed of adoption shows no sign of slowing down and neither does the number of new terms and acronyms. At Trideca, we understand that keeping up with these new concepts can be challenging. That's why we've created this handy cheat sheet to help you understand some of the key terms in the world of AI. 

Whether you're unsure about the difference between a Vector Database and a Model Context Protocol, or you're simply looking to brush up on your AI vocabulary, our guide is here to help. 

Warning: May contain inaccuracies, personal opinion, and questionable analogies. 

Foundational Concepts 

AI (Artificial Intelligence): Software that mimics human intelligence to perform tasks like reasoning, learning, or decision-making. If you believe the hype, it’ll solve all your problems for free in an instant and replace you by the end of the week. In reality, AI is a whole new set of tools to help you be more creative and effective. When someone buys an electric screwdriver, it doesn't replace them; it simply makes them quicker at assembling Ikea bookcases. 

Artificial General Intelligence (AGI): Refers to a theoretical AI system that could match human-level intelligence across a wide range of tasks. It's often used in headlines and sci-fi stories, but it doesn't actually exist. Or does it? 

Neural Network: A system modelled after the human brain that processes information using interconnected nodes (neurons). 

Machine Learning (ML): A subset of AI where systems improve performance by learning from data, instead of being explicitly programmed. It’s only as good as the data it’s trained on: a model trained to recognise fruit will struggle to create a business strategy, even if the strategy involves a lot of fruit. 

Deep Learning (DL): A branch of machine learning that uses artificial neural networks with many layers to automatically learn complex patterns from data. For example, first learning to tell a small dog from a big dog, and eventually learning to tell a Spoodle from a Groodle. 

Model: An AI system trained to perform a specific task. There are thousands, if not millions, of them, with new ones popping up all the time from both major IT organisations and emerging companies. 

General AI Models: Have broad capabilities designed to tackle any task and act as a general AI assistant. The disadvantage of these models is that they have broad, rather than deep, knowledge, a bit like an Enterprise Architect. The key ones to be aware of include Llama from Meta, Gemini from Google, GPT from OpenAI, Claude from Anthropic and Grok from xAI. Things move very fast in the world of AI, with updates being released almost weekly under questionable naming conventions. Expect frequent conversations about the likes of ‘Llama 4 Behemoth’ and ‘OpenAI GPT-4o mini.’ 

Transformer: A deep learning architecture especially good at understanding language by capturing context and relationships. It’s like that annoying well-read friend (you know who you are), who can remember the content of every book they’ve read. 

Natural Language Processing (NLP): The ability of a computer to understand, interpret, and generate human language. For financial advisors, NLP can enhance customer interaction through chatbots or automated report generation. 

Language Models & Generation 

Large Language Model (LLM): An AI trained on very large quantities of text to understand and generate ‘human-like’ language. It works by predicting the most likely next word in a sequence, based on patterns learned during training. Because they generate whatever looks most plausible, LLMs are prone to delivering wrong information or appearing to make things up by conflating data. Put simply, it’s the overachieving grandchild of Autocomplete. 
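
To make the ‘predicting the next word’ idea concrete, here’s a toy sketch in Python. It isn’t a real LLM (those use neural networks trained on billions of tokens); it just counts which word tends to follow which in a tiny bit of text and picks the most common one.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the most likely next word".
# Real LLMs learn patterns with neural networks; this just counts word pairs.
training_text = "the cat sat on the mat and the cat slept on the sofa"
words = training_text.split()

next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' - the most frequent follower of 'the'
print(predict_next("sat"))  # 'on'
```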

Small Language Model (SLM): A compact AI model that can run on local devices like laptops and phones without consuming an entire data centre’s worth of compute power. 

Hallucinations: When AI confidently generates false or fabricated information, with the intent of providing an answer rather than saying ‘I can’t answer that.’ For example: ‘The first functioning AI was secretly developed by a group of rogue hamsters running a network of quantum computers hidden inside an abandoned underground bunker.’ 

Foundation Model: All foundation models are AI models, but not all AI models are foundation models. Many AI models are designed for specific tasks from the beginning (like image classification or fraud detection) and trained on more limited, specialised datasets with full supervision. Foundation models: 

  • Are trained on massive, broad datasets rather than narrow, task-specific data. 

  • Develop general capabilities that can be adapted to many different downstream tasks. 

  • Use self-supervised or semi-supervised learning on unlabelled or partially labelled data. 

  • Serve as a base that can be fine-tuned for specialised applications. 

Specialised AI Models: Designed to perform specific tasks, with deep knowledge or capability in a particular area. Examples include DALL-E from OpenAI, which specialises in image generation; Nova Sonic from Amazon, which specialises in speech; and MedLM from Google, which is focused on healthcare. 

Reasoning Model: A system that, rather than immediately producing an answer, processes information methodically and step by step, deliberately working through the problem, considering multiple angles and checking its own logic. This approach helps reduce errors and improves performance on complex tasks. 

Retrieval-Augmented Generation (RAG): A technique in natural language processing (NLP) that combines two key components: finding relevant information from external sources, and generating responses using a language model. This improves accuracy by grounding answers in current knowledge. It's commonly used in applications like chatbots, search engines, and knowledge assistants. 
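
Here’s a rough sketch of those two steps in Python: find relevant snippets, then wrap them in a prompt for the model. The keyword-overlap search is a deliberately naive stand-in for a real retriever, and the final call to an LLM is left as a comment.

```python
# Illustrative RAG flow: retrieve relevant snippets, then hand them to a language model.
documents = [
    "Melbourne is famous for its coffee culture.",
    "RAG combines retrieval of external knowledge with text generation.",
    "Token costs scale with the length of prompts and responses.",
]

def retrieve(query, docs, top_k=2):
    """Naive keyword-overlap retrieval; real systems usually use a vector database."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(d.lower().split())), d) for d in docs]
    scored.sort(reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

def build_prompt(query):
    """Step 1: find relevant context. Step 2: wrap it in a prompt for the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Using only this context:\n{context}\n\nAnswer the question: {query}"

# The resulting prompt would then be sent to an LLM, which generates the final answer.
print(build_prompt("What does RAG combine?"))
```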

Chunking: The process of breaking down large pieces of information into smaller, manageable sections. It helps organise data for better retrieval and understanding, improving efficiency. It's especially useful in frameworks like RAG, where structured data improves the quality of AI-generated responses. 
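
As a minimal sketch, here’s one common approach in Python: splitting a long document into overlapping word-based chunks. (Real systems often chunk by tokens, sentences or headings instead.)

```python
def chunk_text(text, chunk_size=200, overlap=20):
    """Split text into overlapping chunks of roughly `chunk_size` words.

    The overlap keeps a little shared context between neighbouring chunks,
    so a sentence cut at a boundary still appears intact somewhere.
    """
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
    return chunks

# Example: a 500-word document would become three overlapping chunks.
# chunks = chunk_text(long_document, chunk_size=200, overlap=20)
```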

Prompt Engineering: The skill of writing clear instructions for AI systems to get the desired response. It involves strategically designing inputs with appropriate background, examples, and limits to guide the AI toward producing more accurate, useful, or creative outputs. The better the prompt, the better the output from the AI. Until AI becomes easier to use, it’s also the latest career choice for the technically savvy trendsetter. 
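
To illustrate the background-examples-limits idea, here’s one way a structured prompt might be assembled in Python. The wording and the scenario are just examples, not a recipe.

```python
# A structured prompt: role and background, an example of the desired output, and limits.
prompt = """You are a financial services copywriter.

Background: We are launching a budgeting app for first-home buyers in Australia.

Example of the tone we want: "Saving for a deposit shouldn't feel like a second job."

Task: Write three taglines for the app.

Limits: Each tagline under 10 words. No jargon. No promises about returns.
"""
# The prompt string would then be sent to whichever model you are using.
print(prompt)
```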

Tokens & Token Costs: The basic units of text that AI models process. Each token represents a character, a word or a fragment of a word. AI systems have a limit to the number of tokens that can be processed at once, and the computing power and financial expense associated with processing these units is referred to as the token cost. As more tokens are processed (for longer texts or conversations), costs increase proportionally, which is why AI services often set pricing based on token count rather than word count. This is something to be aware of when working with very large documents or creating complex prompts. 
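
Here’s a back-of-the-envelope illustration in Python. The per-token prices and the tokens-per-word ratio below are made up for the example; check your provider’s current pricing.

```python
# Rough token-cost estimate. The rates here are illustrative placeholders only.
PRICE_PER_1K_INPUT_TOKENS = 0.01    # assumed example rate, in dollars
PRICE_PER_1K_OUTPUT_TOKENS = 0.03   # assumed example rate, in dollars

def estimate_cost(input_words, output_words, tokens_per_word=1.3):
    """Estimate cost assuming roughly 1.3 tokens per English word."""
    input_tokens = input_words * tokens_per_word
    output_tokens = output_words * tokens_per_word
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)

# A 10,000-word document summarised into 500 words:
print(f"${estimate_cost(10_000, 500):.2f}")  # roughly $0.15 at these example rates
```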

Agents & Autonomy 

AI Agent / Agentic AI: Unlike traditional AI systems that simply respond to inputs, agentic AI can take initiative, make decisions, and perform sequences of actions independently with minimal human intervention. 
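
As a very rough sketch of what ‘a sequence of actions with minimal human intervention’ can look like, here’s a toy agent loop in Python. The tools are fake and the plan is hard-coded; in a real agent an LLM would choose the next step itself.

```python
# Toy agentic loop: pick a tool, act, feed the result into the next step.
def search_flights(destination):
    return f"Cheapest flight to {destination}: $420"  # pretend tool

def book_flight(option):
    return f"Booked: {option}"  # pretend tool

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def run_agent(goal):
    plan = ["search_flights", "book_flight"]   # a real agent would plan this itself
    context = goal
    for tool_name in plan:
        context = TOOLS[tool_name](context)    # act, then carry the result forward
        print(f"{tool_name} -> {context}")
    return context

run_agent("Tokyo")
```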

Swarm: Multiple AI agents working in coordination to solve problems collectively. 

Agent to Agent: AI agents interact and collaborate with each other to achieve a common goal or individual objectives. Agents can communicate, negotiate, and coordinate their actions without direct human intervention, showing new behaviours and problem-solving capabilities through their interactions. ‘I’ll have my AI agent contact yours…’ 

Agent-to-Agent (A2A): A framework for standardising communication between independent AI agents, built on existing web standards like HTTP. 

Model Context Protocol (MCP): A standard interface for AI models to access external data and tools.

Agent Experience Design (AXD): Creating effective and intuitive user experiences when interacting with AI agents. The more intuitive the experience, the more accessible the agent and the less the need for specialists – see Prompt Engineering. 

Data & Infrastructure 

Graph Database: Different to a normal database with columns and rows, a graph database is used to find connections and patterns in data. It stores things (like people, places and events) as nodes and the relationships between them (like ‘knows’, ‘works with’, ‘is located in’) as edges. 
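
Here’s the nodes-and-edges idea as a tiny sketch in plain Python. Real graph databases (Neo4j, for example) use query languages built for this, but the shape of the data is the same.

```python
# Nodes are things; edges are named relationships between them.
nodes = {"Alice", "Bob", "Trideca", "Melbourne"}
edges = [
    ("Alice", "works_with", "Bob"),
    ("Alice", "works_at", "Trideca"),
    ("Trideca", "is_located_in", "Melbourne"),
]

def related(node, relationship):
    """Find everything connected to `node` by a given relationship."""
    return [target for source, rel, target in edges
            if source == node and rel == relationship]

print(related("Alice", "works_with"))       # ['Bob']
print(related("Trideca", "is_located_in"))  # ['Melbourne']
```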

Vector Database: A way of storing and searching data based on the meaning of the data rather than just its value. It uses mathematical representations, called vectors, to track everything; data that is similar in meaning will have vectors that sit close together. (You might need to ask a mathematician or data scientist.) This makes vector databases very good for recommendation engines and for inferring what’s related. 
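
Here’s the ‘close together in meaning’ idea as a minimal Python sketch. The three-number vectors are made up for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import math

# Made-up 3-dimensional "embeddings"; real ones are far larger.
vectors = {
    "flat white": [0.9, 0.1, 0.2],
    "latte":      [0.7, 0.3, 0.3],
    "snow boots": [0.1, 0.9, 0.7],
}

def cosine_similarity(a, b):
    """Close to 1.0 means pointing the same way (similar meaning); near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

query = [0.85, 0.15, 0.2]   # pretend this is the embedding of "coffee order"
best = max(vectors, key=lambda name: cosine_similarity(query, vectors[name]))
print(best)                  # 'flat white' - the closest vector to the query
```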

Knowledge Graph: A structured representation of how things are connected, used to make inferences and answer complex questions, a bit like a mind map. It would take a statement like ‘People in Melbourne like coffee’ and understand that Melbourne is a city in a country, that coffee is a hot drink made from beans, and so on. 
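
Sticking with the coffee example, here’s the same idea sketched as subject-relationship-object ‘triples’ in Python, with one toy inference chained across them.

```python
# Facts stored as (subject, relationship, object) triples.
triples = [
    ("Melbourne", "is_a", "city"),
    ("Melbourne", "is_in", "Australia"),
    ("people_in_Melbourne", "like", "coffee"),
    ("coffee", "is_a", "hot drink"),
    ("coffee", "made_from", "beans"),
]

def lookup(subject, relationship):
    return [obj for s, r, obj in triples if s == subject and r == relationship]

# A toy inference: what kind of thing do people in Melbourne like?
liked = lookup("people_in_Melbourne", "like")[0]  # 'coffee'
print(liked, "is a", lookup(liked, "is_a")[0])    # coffee is a hot drink
```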

Composability / Composable AI: Think microservices for AI – modular AI components that can be assembled into workflows. 

Perception AI: AI systems that interpret sensory inputs like vision and audio. 

Ontology: A structured framework representing knowledge about a domain for AI to understand and use. It helps AI understand the meaning behind data by organising it into categories, attributes, and connections, making reasoning and decision-making more effective. 

Governance, Ethics & Safety 

Human-in-the-Loop (HITL): Incorporating human oversight and responsibility into AI processes for quality control and governance. 

Explainable AI (XAI): Techniques that make AI decision processes transparent to humans. 

Bias in AI: Unfair, unrepresentative or discriminatory results from AI, often the result of training data or algorithm design that is not sufficiently broad or diverse. Consider: if you only ever visited a ski resort in summer, you’d have no reason to believe you need skis or a coat. 

AI Red Teaming: The process of deliberately attempting to break or exploit AI models. As with any other application, it’s an important step in the development and deployment of AI services. 

As AI continues to evolve fast, keeping up with these terms isn't just for tech geeks anymore. Whether you're just curious, or want to be the next AI prompt engineering guru, understanding these terms is becoming as essential as knowing how to update your smartphone. So, keep this guide handy, you never know when you'll need to casually drop "retrieval-augmented generation" into your next coffee shop conversation.

Welcome to the brave new world of AI where the future is now, and the jargon is just part of the adventure! 
