AI Glossary

35 key terms explained in plain English — from tokens to transformers.

Core Concepts

Large Language Model (LLM)

A neural network trained on massive text corpora to predict and generate language. The technology behind ChatGPT, Claude, Gemini and most modern AI tools.

Transformer

The neural network architecture (introduced 2017) that powers virtually all modern LLMs. Uses attention mechanisms to understand context across long sequences.

Token

The basic unit LLMs process — roughly ¾ of a word in English. ‘ChatGPT’ is typically 2 tokens, though exact splits vary by tokenizer. Context windows and API pricing are measured in tokens.
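The ¾-of-a-word figure gives a quick back-of-envelope estimator. This is a rough heuristic only — the 4-characters-per-token ratio is an assumption that holds loosely for English; exact counts need the model's own tokenizer (e.g. OpenAI's tiktoken).

```python
def estimate_tokens(text: str) -> int:
    # Rule of thumb: ~4 characters per token for English text.
    # Only a rough estimate; use the model's real tokenizer
    # (e.g. tiktoken) when exact counts matter.
    return max(1, round(len(text) / 4))
```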

Context Window

The maximum amount of text an LLM can ‘see’ at once. GPT-4o: 128K tokens. Claude: 200K. Gemini 1.5: 1M.
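When a conversation outgrows the window, clients typically drop the oldest turns first. A minimal sketch of that trimming, reusing the crude 4-characters-per-token estimate (an assumption, not a real tokenizer):

```python
def fit_to_window(messages, max_tokens):
    # Keep the most recent messages whose estimated token cost
    # fits the budget; the oldest turns are dropped first.
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg) // 4 + 1  # crude token estimate
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```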

Parameters

The learnable weights inside a neural network. GPT-4 is rumoured to have ~1.8T parameters (never officially confirmed). More parameters ≠ always better.

Inference

Running a trained model to generate outputs. Separate from training — much cheaper and faster.

Fine-tuning

Further training a pretrained model on a specific dataset to specialise it for a domain or task.

Embedding

A numerical vector representing text, images or data. Similar meanings produce similar vectors — the basis of semantic search.
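"Similar meanings produce similar vectors" is usually measured with cosine similarity — the metric behind semantic search. A minimal sketch (real embeddings have hundreds or thousands of dimensions, not two):

```python
import math

def cosine_similarity(a, b):
    # 1.0 = same direction (similar meaning), 0.0 = unrelated,
    # -1.0 = opposite. Works for vectors of any dimension.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```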

Multimodal

A model that can process multiple types of input — text, images, audio, video (e.g. GPT-4o, Gemini).

Open Weights

Model weights that are publicly released. Anyone can download, run and fine-tune them (Llama, Mistral, etc.).

Prompting

Prompt Engineering

The practice of crafting effective inputs to guide AI model outputs — system prompts, examples, constraints.

System Prompt

Instructions given to an AI at the start of a conversation to set its persona, capabilities and constraints.

Few-Shot Prompting

Providing 2–5 examples of the desired output format inside your prompt so the model learns the pattern.
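The pattern can be assembled mechanically. A sketch — the `Input:`/`Output:` labels here are an arbitrary convention of this example, not a requirement:

```python
def build_few_shot_prompt(instruction, examples, query):
    # examples: list of (input, output) pairs the model should imitate.
    parts = [instruction]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")  # model completes this
    return "\n\n".join(parts)
```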

Chain-of-Thought

Prompting technique that asks the model to ‘think step by step’ — significantly improves accuracy on reasoning tasks.

Hallucination

When an LLM confidently generates false information. Not lying — it’s pattern-matching that produces plausible but incorrect text.

Temperature

A sampling parameter (0–2 in most APIs) controlling output randomness. 0 = near-deterministic, 1 = default, 2 = very random.
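Under the hood, temperature divides the model's logits before the softmax, so low values sharpen the distribution toward the top token and high values flatten it:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Lower temperature -> probability concentrates on the top logit;
    # higher temperature -> distribution flattens toward uniform.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)                       # subtract for stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```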

System Grounding

Anchoring AI responses to specific, verified source documents to reduce hallucination — the basis of RAG.

Agentic AI

AI systems that autonomously take sequences of actions (browsing, coding, calling APIs) to complete a goal.

Technical

RAG

Retrieval-Augmented Generation — retrieves relevant documents from a knowledge base before generating a response.
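A RAG pipeline is retrieve-then-prompt. In this sketch a toy keyword-overlap scorer stands in for the embedding search a real system would use:

```python
def retrieve(query, docs, top_k=2):
    # Toy relevance score: words shared between query and document.
    # Real RAG systems rank by embedding similarity instead.
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:top_k]

def build_rag_prompt(query, docs):
    # Retrieved passages are pasted into the prompt as grounding context.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
```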

Vector Database

A database optimised for storing and searching embeddings by semantic similarity (Pinecone, Chroma, Weaviate).
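At its core, a vector database stores (id, vector) pairs and returns nearest neighbours by similarity. A brute-force in-memory sketch — real systems like Pinecone or Chroma use approximate indexes to scale to millions of vectors:

```python
import math

class TinyVectorStore:
    # In-memory stand-in for a vector database: exact brute-force
    # cosine search, no index. Fine for small data only.
    def __init__(self):
        self.items = []  # list of (id, vector) pairs

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def search(self, query, top_k=1):
        def cos(v):
            dot = sum(x * y for x, y in zip(query, v))
            return dot / (math.sqrt(sum(x * x for x in query)) *
                          math.sqrt(sum(x * x for x in v)))
        ranked = sorted(self.items, key=lambda it: cos(it[1]), reverse=True)
        return [item_id for item_id, _ in ranked[:top_k]]
```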

Semantic Search

Search that finds conceptually similar content rather than keyword matches — powered by embeddings.

RLHF

Reinforcement Learning from Human Feedback — the training technique used to align LLMs with human preferences.

Constitutional AI

Anthropic’s technique for training Claude to be helpful, harmless and honest — using a written set of principles plus AI-generated feedback in place of purely human labels.

Quantization

Compressing model weights (e.g. 16-bit → 4-bit) to reduce memory usage and speed up inference on consumer hardware.
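The idea in miniature: map floating-point weights onto a small integer grid and keep one scale factor to map back. A symmetric-linear sketch — real schemes (e.g. GPTQ, AWQ) are considerably more sophisticated:

```python
def quantize(weights, bits=4):
    # Symmetric linear quantization: scale so the largest weight
    # maps to the top of the signed integer range.
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    # Reconstruction is lossy: each weight carries rounding error.
    return [q * scale for q in quants]
```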

LoRA

Low-Rank Adaptation — efficient fine-tuning technique that trains only small adapter layers, not the full model.
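The trick in equation form: the effective weight is W + BA, where B (d×r) and A (r×d) have rank r far smaller than d, so only a small fraction of parameters is trained. A toy sketch with plain lists:

```python
def matmul(X, Y):
    # Naive matrix multiply, enough for a toy example.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, B, A):
    # LoRA: frozen weights W plus a learned low-rank update B @ A.
    # Only B and A are trained; W never changes.
    delta = matmul(B, A)
    return [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
```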

MCP

Model Context Protocol — Anthropic’s open standard for connecting AI models to external data sources and tools.

API

Application Programming Interface — lets one piece of software talk to another. AI APIs let you send prompts to OpenAI/Anthropic servers and receive responses.
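"Send prompts, receive responses" happens over HTTP with a JSON body. The shape below follows the common chat-completions style; exact field names vary by provider, so check the relevant API reference:

```python
import json

def build_chat_request(prompt, model="gpt-4o"):
    # Typical chat-style request body: a model name plus a list of
    # role-tagged messages. Field names vary across providers.
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)
```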

CLI

Command Line Interface — text-based way to interact with your computer by typing commands instead of clicking.

Often Misunderstood

AGI

Artificial General Intelligence — hypothetical AI matching or exceeding human cognitive ability. Not achieved yet.

AI vs Machine Learning

ML is a subset of AI using statistical patterns. Deep learning is a subset of ML using neural networks. LLMs are deep learning models.

Sentience / Consciousness

LLMs are not conscious or sentient. They process patterns statistically — they have no feelings or self-awareness.

‘The AI will take my job’

AI automates tasks, not whole jobs. Most roles evolve; the people most at risk are those who don’t learn to work with AI.

More parameters = smarter

Not always true. Phi-4 (14B) outperforms many far larger models on benchmarks. Data quality, training efficiency and post-training (e.g. RLHF) can matter more than raw size.

AI knows everything up-to-date

LLMs have a training cutoff date and don’t know about recent events unless given tools like web search.

Open Source = Free

Many ‘open’ models have restrictive commercial licenses. Check the specific license before use.