▸ DICT // PLAIN ENGLISH

Glossary.

Every buzzword in agentic AI translated into plain English. 26 terms and growing.

26 terms

A

Agent

Software that can plan, call tools, and act on a goal — not just respond. The keyword is "loop": observe → plan → act → observe.

Agentic

Software that does things on your behalf, often via tool calls and multi-step loops. Marketing uses it loosely; engineers mean: LLM + tools + loop.

Apps SDK

OpenAI's framework for building apps inside ChatGPT. Uses MCP-style servers as the tool/app interface and adds ChatGPT-specific UI/widget rendering.

A2A

Agent-to-Agent protocol from Google. Lets agents discover and delegate to each other. Useful for multi-agent systems.

C

Chain-of-thought

Asking the model to "show its working" before answering. Improves reasoning on complex tasks at the cost of tokens.

Context window

How much text the model can "see" at once. Bigger ≠ always better — there's a "lost in the middle" problem.

Copilot Studio

Microsoft's low-code platform for building agents inside M365 / Power Platform. Tightly integrated into corporate IT estate.

D

Declarative agent

An agent defined by a JSON manifest (no code). Microsoft's pattern for M365 Copilot extensibility.

E

Embedding

A vector representation of text. Used for semantic search, similarity, and RAG. "Closer in vector space" ≈ "more similar in meaning".

F

Function calling

When an LLM produces structured JSON describing which function to invoke. The mechanism behind tool calling.

H

Hallucination

When a model confidently produces wrong information. Most-cited LLM failure mode. Reduced — not eliminated — by RAG, grounding, and tool calls.

J

JSON Mode

Constrained-generation mode where the model only produces valid JSON. Useful when you need a parseable response.

L

Long context

100K+ tokens, sometimes millions. Doesn't mean the model uses it well — see "lost in the middle".

Lost in the middle

Phenomenon where models pay less attention to information in the middle of long contexts. Strong performance at start + end.

M

MCP

Model Context Protocol — Anthropic's open spec for connecting AI agents to tools, data, and apps. The most-likely-to-stick tool protocol of 2026.

MoE

Mixture of Experts — model architecture where only some "expert" sub-networks fire per token. Powers Mixtral, GPT-4, etc.

Multi-agent

Two or more agents collaborating, often with role specialisation (planner / worker / reviewer). Useful when, but not always.

O

Orchestration

The coordination layer that decides which agent runs when, with what input. Examples: LangGraph, CrewAI, AutoGen.

P

Prompt injection

When attacker text overrides the system prompt. Direct (in user input) or indirect (via fetched content). The #1 agent security concern.

R

RAG

Retrieval-Augmented Generation — fetch relevant docs before answering. Reduces hallucinations on factual questions.

ReAct

Reasoning + Acting — agent loop pattern: thought → action → observation → thought... Most common modern agent loop.

S

Skill

Anthropic's name for a packaged set of agent capabilities (instructions + files + tools). Marketplace launched 2025.

System prompt

The "operating instructions" given to the model, hidden from end users, sets behaviour and constraints.

T

Tool calling

When an LLM decides which tool to use, generates arguments, the runtime executes it, and the result is fed back. Foundation of agentic behaviour.

Token

Unit of text the model sees. Roughly 0.75 of a word in English. Pricing and context limits are measured in tokens.

V

Vector DB

Database optimised for similarity search over embeddings. Examples: Pinecone, Qdrant, Weaviate, pgvector.