Glossary
Terms you’ll encounter in the lessons. Click any underlined term in the text to jump here.
Actions
API calls that a Custom GPT can make to external services — configured via an OpenAPI specification. Enables real tool integration, e.g., fetching data or triggering operations.
Appears in: L3 — Custom GPTs
Agentic AI
AI systems that can independently execute multi-step tasks — they plan steps, use tools, and make decisions without the human directing every move.
Appears in: L4 — From Chat to Delegation
Automation Bias
The tendency to trust automated systems more than your own judgment — even when there are signs the system is wrong. The EU AI Act recognizes automation bias as an explicit risk.
Appears in: L4 — Calibrating Trust
Chain-of-Thought
A technique that asks the AI to show its reasoning step by step before giving an answer. Improves accuracy on complex tasks.
Appears in: L2 — Prompting Techniques
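As a sketch, one common way to phrase the instruction (the helper and wording here are invented for illustration, not a fixed standard):

```python
# Illustrative only: how a chain-of-thought instruction reshapes a prompt.
# This wording is one common pattern; many variations work.

def with_chain_of_thought(task: str) -> str:
    """Wrap a task so the model reasons step by step before answering."""
    return (
        f"{task}\n\n"
        "Think through this step by step. Show your reasoning first, "
        "then give your final answer on a separate line starting with 'Answer:'."
    )

plain = "A train travels 180 km in 2.5 hours. What is its average speed?"
print(with_chain_of_thought(plain))
```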
ChatGPT Agent Mode
OpenAI’s agentic feature in ChatGPT that combines a visual browser, code execution, file editing, and app integrations. The agent works in the cloud and can independently execute multi-step tasks on the web.
Appears in: L4 — ChatGPT Agent Mode
Claude Cowork
Anthropic’s desktop agent feature in the Claude Desktop app. Cowork can directly access local files, independently execute multi-step tasks, create documents, and interact with external services through connectors.
Appears in: L4 — Claude Cowork
Claude Projects
Persistent workspaces in Claude.ai that bundle a knowledge base (uploaded documents), custom instructions, and chat history in one place. Every chat within a project automatically has access to the full project context.
Appears in: L3 — Claude Projects
Connectors
Integrations that connect Claude to external services through the Model Context Protocol (MCP) — Gmail, Google Drive, Slack, and others. Claude can search and retrieve data without leaving the Cowork interface.
Appears in: L4 — Claude Cowork
Context Window
The maximum amount of text an AI model can process at once — its short-term memory. Everything within this window influences the response; anything that doesn’t fit is invisible to the model.
Appears in: L1 — What Is AI?, L2, L3
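A hypothetical sketch of what the limit means in practice: chat tools must trim old history to stay inside the window. Real systems count tokens; character counts stand in here for simplicity.

```python
# Simplified: keep only as much recent chat history as fits a fixed budget.
# Real context windows are measured in tokens, not characters.

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the total length fits the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):   # walk from newest to oldest
        if used + len(msg) > budget:
            break                    # everything older is cut off
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))      # restore chronological order

history = ["old greeting", "long earlier discussion...", "current question?"]
print(trim_history(history, budget=40))  # the oldest messages fall out first
```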
Custom GPT
A customizable version of ChatGPT with your own instructions and knowledge files — no coding required.
Appears in: L1 — Tools Overview
Custom GPTs
Specialized versions of ChatGPT configured with custom instructions, knowledge files, and optional API actions. They can be used privately, shared via link, or published in the GPT Store.
Appears in: L3 — Custom GPTs
EU AI Act
The world’s first comprehensive regulation for artificial intelligence (Regulation (EU) 2024/1689). Classifies AI systems by risk and mandates transparency, human control, and accountability.
Appears in: L4 — Compliance Basics
Few-Shot
A technique where you show the AI 2–5 examples with input and output before presenting the actual task. Especially effective for controlling format and style.
Appears in: L2 — Prompting Techniques
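As a sketch of the structure (the example texts and labels are invented; the Input/Output framing is one common convention):

```python
# Illustrative few-shot prompt builder: worked examples first, real task last.

def few_shot_prompt(examples: list[tuple[str, str]], task: str) -> str:
    """Format 2-5 (input, output) pairs, then leave the final output blank."""
    parts = [f"Input: {text}\nOutput: {label}" for text, label in examples]
    parts.append(f"Input: {task}\nOutput:")   # the model completes this line
    return "\n\n".join(parts)

examples = [
    ("The delivery arrived two days late.", "negative"),
    ("Setup took five minutes and just worked.", "positive"),
]
print(few_shot_prompt(examples, "The manual is confusing but support was helpful."))
```

Because the examples fix the Input/Output pattern, the model tends to answer in exactly that format — which is why few-shot works well for style and format control.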
Generative AI
AI systems that can create new content — text, images, code, music — rather than just searching or classifying existing information.
Appears in: L1 — What Is AI?, L1 — Do’s and Don’ts
Hallucination
An AI response that is false or fabricated but sounds confident and convincing. Not a rare glitch but a systemic feature — AI optimizes for plausibility, not truth.
Appears in: L1 — First Conversations, L1 — Understanding Limits
Human-in-the-Loop
The principle that a human remains involved in an AI system’s decision process — whether as approver, supervisor, or final decision-maker. The EU AI Act mandates this for high-risk AI systems.
Appears in: L4 — Compliance Basics
Knowledge Cutoff
The date at which an AI model’s training data ends. From its own knowledge, the model knows nothing about events after this date.
Appears in: L1 — Understanding Limits
LLM
Large Language Model — a neural network trained on massive amounts of text that can generate new text. The foundation behind ChatGPT, Claude, Gemini, and other AI chatbots.
Appears in: L1 — What Is AI?, L1 — Tools Overview
Microsoft 365 Copilot
An AI assistant integrated directly into Microsoft 365 apps (Word, Excel, PowerPoint, Outlook, Teams) that accesses organizational data through the Microsoft Graph.
Appears in: L3 — M365 Copilot
Microsoft Graph
Microsoft’s data layer that connects information across all Microsoft 365 services — emails, calendars, files, Teams chats, SharePoint documents. Copilot uses the Graph to ground responses in your work context.
Appears in: L3 — M365 Copilot
Persistent Context
Information that persists beyond a single conversation — roles, rules, knowledge, preferences — and automatically flows into every new interaction.
Appears in: L3 — Persistent Context
Prompt
The input you send to an AI model — your question, instruction, or task. The more precise the prompt, the better the result.
RAG
Retrieval Augmented Generation — a technique where the AI model retrieves relevant information from a knowledge base before responding. This extends capacity beyond the context window.
Appears in: L3 — Claude Projects
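A minimal sketch of the idea: retrieve first, then build the prompt from what was found. Naive keyword overlap stands in here for the vector search real RAG systems use; the documents and question are invented.

```python
# Toy RAG pipeline: rank documents by word overlap with the question,
# then ground the prompt in the best match. Real systems use embeddings.

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Prepend the retrieved context so the model answers from it."""
    context = "\n".join(retrieve(question, documents))
    return f"Use only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Vacation requests must be filed two weeks in advance.",
    "The cafeteria is open from 11:30 to 14:00.",
]
print(build_prompt("When is the cafeteria open?", docs))
```

The payoff: only the retrieved snippet enters the prompt, so the knowledge base can be far larger than the context window.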
Role Prompting
A technique where you assign the AI a specific role or expertise to shape the quality and perspective of its response.
Appears in: L2 — Prompt Anatomy
System Prompt
A hidden instruction that sets the AI’s behavior for the entire conversation — it’s set before the actual conversation starts and is typically invisible to the user.
Appears in: L2 — Prompt Anatomy, L3 — System Prompt Design
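As a sketch of how chat APIs commonly separate the two prompt types (the `system`/`user` roles follow the widespread chat-message convention; the instruction text is invented):

```python
# Illustrative chat-message list: the system prompt is set once, up front;
# user prompts are the visible turns appended afterwards.

messages = [
    {
        "role": "system",  # hidden from end users in most products
        "content": (
            "You are a concise assistant. Answer in plain English "
            "and say 'I don't know' rather than guessing."
        ),
    },
    {
        "role": "user",    # the visible user prompt
        "content": "Summarize this meeting transcript in three bullets.",
    },
]

# Later turns are appended; the system prompt keeps steering the whole chat.
messages.append({"role": "user", "content": "Now draft a follow-up email."})
```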
Tokens
The smallest text units an AI model processes — usually a word or part of a word. A typical English sentence has 10–20 tokens.
Appears in: L1 — What Is AI?
User Prompt
The visible input you send directly to the AI — your question, task, or instruction in the chat. Unlike the system prompt, the user prompt is visible to you and under your control.
Appears in: L2 — Prompt Anatomy
Zero-Shot
An instruction to the AI without any examples — the AI relies entirely on its pre-existing knowledge to complete the task. The simplest form of prompting.
Appears in: L2 — Prompting Techniques