#105 🧠 AI Agent Terminology Cheatsheet: 100 Essential Terms Every Product Manager Should Master
AI agents are dominating tech headlines and industry conversations, yet we're only scratching the surface of understanding and implementing these powerful systems. As product managers, we find ourselves at the frontier of a technology that promises to fundamentally change how users interact with digital products.
Despite all the buzz, many of us are still grappling with what agents actually are, how they differ from traditional AI systems, and what capabilities they truly offer. The terminology around agents can be particularly challenging, with concepts that blend technical capabilities, cognitive science, and system design in novel ways.
This comprehensive cheatsheet equips you with 100 must-know AI agent terms to help you navigate this rapidly evolving landscape. Whether you're evaluating agent platforms or communicating with technical teams, these terms will provide the foundation you need to lead with confidence and clarity.
Foundation Models & Core Concepts
Agent - An AI system that perceives environments, makes decisions, and takes actions toward goals
Agentic AI - AI systems capable of autonomous decision-making and action without direct human oversight
LLM (Large Language Model) - Foundation model trained on text data that powers many agent systems
MoE (Mixture of Experts) - Neural network architecture that routes inputs to specialized sub-models
Foundation Model - Pre-trained AI system serving as the base for agent capabilities
Fine-tuning - Process of adapting pre-trained models for specific tasks or domains
Multi-modal - Systems that can process multiple types of inputs (text, images, audio, etc.)
Cognitive Architecture - Framework defining how an AI agent processes information and makes decisions
Agency - The ability to act independently toward objectives
Embodied AI - Agents that exist within and interact with physical or virtual environments
Agent Reasoning & Decision-Making
LRM (Large Reasoning Model) - Models optimized for advanced logical reasoning capabilities
Deliberative Reasoning - Systematic thinking through of consequences before action
CoT (Chain-of-Thought) - Breaking complex reasoning into explicit step-by-step processes
ToT (Tree-of-Thought) - Exploring multiple reasoning pathways simultaneously
ReAct - Framework combining reasoning and action in alternating steps
Planning - Determining sequences of actions to achieve goals
Strategic Reasoning - Long-term decision making considering multiple future scenarios
Tactical Reasoning - Short-term decision making focused on immediate obstacles
Bounded Rationality - Making decisions under constraints of limited information and computing resources
Meta-cognition - Agent's awareness and regulation of its own thinking processes
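Several of the terms above (CoT, ReAct, planning) describe a loop you can picture in a few lines of code. Here is a minimal, illustrative sketch of a ReAct-style reason-act cycle; the `think` function and the toy calculator tool are stand-ins for a real LLM call and real tools, not any specific framework's API:

```python
# Minimal ReAct-style loop: alternate a reasoning step with a tool action
# until the agent decides it has an answer. The "model" here is a toy
# rule-based function standing in for a real LLM call.

TOOLS = {
    # Toy tool for illustration only; never eval untrusted input in practice.
    "calculator": lambda expr: str(eval(expr)),
}

def think(question, observations):
    """Stand-in for an LLM: decide the next action or the final answer."""
    if not observations:
        return ("act", "calculator", "2 + 2")  # reasoning: we need arithmetic
    return ("answer", f"The result is {observations[-1]}.")

def react_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = think(question, observations)   # Reason
        if step[0] == "answer":
            return step[1]
        _, tool, tool_input = step
        observations.append(TOOLS[tool](tool_input))  # Act, then observe
    return "Gave up after max_steps."

print(react_agent("What is 2 + 2?"))
```

The key idea for PMs: the agent's "intelligence" lives in the `think` step, but its usefulness comes from the observe-act loop around it.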
Memory & Knowledge Systems
Agent Memory - Systems allowing agents to store and retrieve information over time
Working Memory - Short-term storage for immediate task processing
Episodic Memory - Storage of specific experiences or interactions
Semantic Memory - Storage of factual knowledge and general information
Procedural Memory - Storage of action sequences and operational knowledge
Vector Database - Storage system for embedding representations used in agent memory
Knowledge Graph - Structured representation of relationships between entities
Embedding - Dense vector representation of information for efficient processing
Context Window - Amount of information an agent can consider at once
Retrieval - Process of accessing stored information when needed
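Embedding and retrieval, the last two terms above, are easier to grasp with a toy example. The sketch below embeds "memories" as simple word-count vectors and retrieves the one closest to a query by cosine similarity; production systems use learned embeddings and a vector database, but the mechanics are the same:

```python
# Toy vector retrieval: embed texts as word-count vectors, then return the
# stored memory with the highest cosine similarity to the query.
from collections import Counter
import math

def embed(text):
    # Bag-of-words counts standing in for a learned embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

memory = [
    "the user prefers dark mode",
    "quarterly revenue grew twelve percent",
    "the deploy pipeline runs nightly",
]

def retrieve(query):
    return max(memory, key=lambda m: cosine(embed(query), embed(m)))

print(retrieve("which display mode does the user prefer"))
```

Swap the word counts for dense vectors from an embedding model and the list for a vector database, and you have the retrieval backbone of most agent memory systems.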
Tool Use & Interaction
Tool Use - Agent capability to leverage external systems to extend functionality
Function Calling - Mechanism for agents to execute code or API calls
Tool Selection - Process of choosing appropriate tools for specific tasks
API Integration - Connecting agents to external services via application programming interfaces
Structured Output - Generating responses in specific formats (JSON, XML, etc.)
MCP (Model Context Protocol) - Open standard for connecting AI models with external tools and data sources
Sandbox - Controlled environment for testing agent actions safely
Effectuation - Process by which agents convert decisions into actions
Action Space - Complete set of possible actions available to an agent
Tool Library - Collection of functions and APIs available for agent use
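Function calling and structured output work together: the model emits a machine-readable call, and the application executes it. A minimal sketch, with the model's JSON output hard-coded (in practice it would come from an LLM API) and `get_weather` as a hypothetical stub:

```python
# Function calling in miniature: the model emits a structured JSON call,
# the application looks up the named tool in a registry and executes it.
import json

def get_weather(city):
    # Stub standing in for a real API call
    return {"city": city, "forecast": "sunny"}

TOOL_REGISTRY = {"get_weather": get_weather}

# Hard-coded stand-in for structured output from an LLM
model_output = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

def dispatch(raw):
    call = json.loads(raw)             # structured output: parse the JSON
    fn = TOOL_REGISTRY[call["tool"]]   # tool selection by name
    return fn(**call["arguments"])     # execute with the supplied arguments

print(dispatch(model_output))
```

Note that the model never runs code itself; it only names a tool and its arguments, and your application stays in control of what actually executes.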
Multi-Agent Systems
Multi-Agent System - Environment where multiple AI agents interact and collaborate
Agent Communication - Protocols for information exchange between agents
Collaborative Agents - Agents designed to work together toward shared goals
Competitive Agents - Agents pursuing individual objectives in shared environments
Swarm Intelligence - Collective behavior emerging from decentralized, self-organized agents
Orchestration - Coordination of multiple agents in complex workflows
Agent Specialization - Optimization of agents for specific tasks or domains
Role Assignment - Process of determining which agent handles which tasks
Consensus Mechanisms - Methods for agents to reach agreement on decisions
Emergent Behavior - Complex patterns arising from interactions of simpler agents
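Orchestration and role assignment can be sketched in a few lines. Here the agents are plain functions standing in for LLM-backed workers, and the coordinator simply routes each task to the specialist whose role matches:

```python
# Orchestration in miniature: a coordinator assigns each task to the
# specialist agent for its role, then collects the results in order.

def research_agent(task):
    return f"notes on {task}"

def writer_agent(task):
    return f"draft about {task}"

AGENTS = {"research": research_agent, "write": writer_agent}

def orchestrate(tasks):
    results = []
    for role, task in tasks:           # role assignment
        results.append(AGENTS[role](task))
    return results

print(orchestrate([("research", "agent memory"), ("write", "agent memory")]))
```

Real orchestrators add queuing, retries, and inter-agent messaging, but the core pattern is this routing loop.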
Agent Architecture & Design
Workflow - Predefined sequence of operations for agents to follow
Agent Pipeline - Connected sequence of processing steps in agent systems
Modular Architecture - Design approach using interchangeable components
Hierarchical Agents - Multi-level systems with different layers of abstraction
Reactive Agents - Simple agents that respond directly to environmental stimuli
BDI (Belief-Desire-Intention) - Framework modeling agents based on mental attitudes
Actor Model - Computational model treating agents as concurrent computational entities
Black Box - Components whose internal workings are not transparent
White Box - Components with transparent internal operations
Agent Topology - Structural arrangement of agent components and connections
Learning & Adaptation
Reinforcement Learning - Learning through trial and error with reward signals
RLHF (Reinforcement Learning from Human Feedback) - Using human evaluations to guide learning
DPO (Direct Preference Optimization) - Learning directly from preference comparisons
Constitutional AI - Systems with built-in constraints based on principles
Self-Improvement - Agent capability to enhance its own performance
Transfer Learning - Applying knowledge from one domain to another
Curriculum Learning - Training agents with progressively more difficult tasks
Imitation Learning - Learning by observing and mimicking example behaviors
Meta-Learning - Learning how to learn more efficiently
Adaptive Agents - Systems that modify strategies based on environmental changes
Evaluation & Alignment
Alignment - Ensuring agent behavior matches human values and intentions
HITL (Human-in-the-Loop) - Systems incorporating human oversight and intervention
Reward Function - Specification of what constitutes success for an agent
Oversight - Monitoring and verification of agent actions
Benchmarking - Standardized testing of agent capabilities
Evaluation Harness - Framework for assessing agent performance
Interpretability - How understandable agent decisions are to humans
Explainability - Agent ability to articulate reasoning behind decisions
Safety Layer - Protective mechanisms preventing harmful actions
Value Alignment - Matching agent objectives with human ethical principles
Specialized Capabilities
Vertical Agents - Agents optimized for specific industries or domains
RAG (Retrieval-Augmented Generation) - Enhancing outputs with retrieved information
Agentic RAG - Systems where agents dynamically decide when and how to retrieve information
Self-Healing - Ability to detect and correct errors autonomously
Reflection - Agent evaluation of its own performance and decision making
Overthinking - Inefficient over-analysis of simple problems
Self-Critique - Agent evaluation of its own outputs for quality
Hallucination Mitigation - Techniques to reduce false or unfounded outputs
Uncertainty Quantification - Agent assessment of confidence in its conclusions
Planning Horizon - How far into the future an agent considers consequences
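The difference between RAG and agentic RAG, defined above, comes down to who decides when to retrieve. In this toy sketch the "retriever" is a keyword match, the "generator" is a stub, and the agent's retrieval decision is a deliberately crude heuristic; all names and logic are illustrative, not any real library's API:

```python
# RAG in miniature: fetch the most relevant document for a query, then hand
# it to the generator as grounding context. In agentic RAG, the agent also
# decides *whether* retrieval is needed at all.

DOCS = {
    "pricing": "Pro plan costs $20 per seat per month.",
    "security": "All data is encrypted at rest with AES-256.",
}

def retrieve(query):
    # Naive keyword match standing in for vector search
    for topic, doc in DOCS.items():
        if topic in query.lower():
            return doc
    return None

def generate(query, context):
    # Stub for an LLM call: echo the grounded answer
    return f"Based on our docs: {context}" if context else "I don't know."

def agentic_rag(query):
    # Toy heuristic standing in for the agent's decision to retrieve
    context = retrieve(query) if "?" in query else None
    return generate(query, context)

print(agentic_rag("What does the pricing look like?"))
```

The grounding step is what makes RAG a standard hallucination-mitigation technique: the generator answers from retrieved documents rather than from memory alone.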
Advanced Concepts & Future Directions
AGI (Artificial General Intelligence) - Hypothetical AI with human-like general intelligence
Recursive Self-Improvement - Agents enhancing their own capabilities repeatedly
Agentic Workflows - Dynamic process management by autonomous systems
Cognitive Simulation - Modeling human-like thinking processes in agents
Homeostatic Agents - Systems maintaining internal stability despite external changes
Counterfactual Reasoning - Considering alternative scenarios that didn't occur
Bounded Agency - Operating with defined constraints on autonomy
Value Handshake - Negotiation between human and agent values and preferences
Corrigibility - Agent willingness to be corrected or modified by humans
Digital Mind - Philosophical concept of consciousness in advanced agent systems