AI Glossary
Key Terms Every Executive Should Know
Understanding AI requires familiarity with key technical terms. This glossary provides clear, business-friendly definitions of the most important concepts mentioned in the AI Learning Series.
Artificial Intelligence (AI)
A broad field of computer science focused on building systems that can simulate human intelligence by learning from data, recognizing patterns, and making decisions.
Large Language Model (LLM)
A type of AI trained on vast amounts of text data to generate human-like responses. LLMs power AI tools like ChatGPT, Claude, and Gemini.
Tokens
The smallest units of text an AI model processes: words, parts of words, punctuation, and spaces. Models generate a response by predicting it one token at a time.
Example: The phrase “AI is powerful” is roughly three tokens ([“AI”, “is”, “powerful”]); exact counts vary by tokenizer.
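A toy sketch of the idea: real tokenizers (such as OpenAI’s tiktoken) split text into subword pieces, so the whitespace split below is only a rough stand-in for illustration.

```python
# Toy illustration only: real tokenizers split text into subword pieces,
# so their counts differ slightly from a simple whitespace split.
def toy_tokenize(text: str) -> list[str]:
    """Split text on whitespace -- a rough stand-in for a real tokenizer."""
    return text.split()

tokens = toy_tokenize("AI is powerful")
print(tokens)       # ['AI', 'is', 'powerful']
print(len(tokens))  # 3
```

Pricing and context limits are counted in these tokens, which is why “how many tokens is this document?” is a common practical question.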
Context Window
The amount of text (measured in tokens) an AI model can “see” at once before earlier parts are forgotten.
- GPT-3.5: 4,096 tokens (~3,000 words).
- Claude 2.1: 200,000 tokens (~150,000 words).
🚨 When a conversation exceeds the context window, AI forgets earlier parts.
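This “forgetting” is usually implemented by the application trimming the oldest messages to fit the budget. A minimal sketch, assuming token counts can be approximated by word counts (a real system would use the model’s actual tokenizer):

```python
# Sketch: trim a chat history so it fits within a context-window budget.
# Token counts are approximated here by whitespace word counts.
def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    kept, used = [], 0
    for msg in reversed(messages):      # walk from the most recent message
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                       # older messages are "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["Hello there", "Tell me about Q3 revenue", "And the risks?"]
print(trim_history(history, max_tokens=7))  # ['And the risks?']
```

Note that the oldest messages are dropped first, which is exactly why long conversations lose their beginning.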
Inference
The process of an AI model generating a response based on a prompt.
Example: When you ask ChatGPT a question, it runs an inference to generate an answer.
🚨 Every inference consumes computing power, which is why running AI models, whether in the cloud or on your own hardware, can be expensive.
Training (AI Model Training)
The process of teaching an AI model by feeding it large datasets so it can learn patterns, relationships, and context.
- Pretraining: AI is trained on massive datasets (books, articles, websites).
- Fine-tuning: AI is further trained on smaller, domain-specific datasets (e.g., company data) to customize responses.
🚨 Training requires powerful GPUs and weeks (or months) of computing time.
Embeddings
Numerical representations of words, sentences, or documents that capture their meaning, so that similar content maps to similar numbers. Embeddings are how AI systems store and retrieve knowledge by meaning rather than exact wording.
Use case: AI-powered search tools retrieve relevant business documents using embeddings instead of keyword matching.
🚨 Embeddings allow AI to “recall” company-specific data when needed.
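The core trick is measuring similarity between vectors. A toy sketch with made-up 3-dimensional embeddings (real models use hundreds or thousands of dimensions):

```python
import math

# Toy "embeddings" invented for illustration; real vectors come from a model.
EMBEDDINGS = {
    "invoice":  [0.9, 0.1, 0.0],
    "billing":  [0.8, 0.2, 0.1],
    "vacation": [0.1, 0.9, 0.3],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two vectors: 1.0 means same direction (same meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related terms score far higher than unrelated ones:
print(cosine_similarity(EMBEDDINGS["invoice"], EMBEDDINGS["billing"]))   # ~0.98
print(cosine_similarity(EMBEDDINGS["invoice"], EMBEDDINGS["vacation"]))  # ~0.21
```

This is why an embedding-based search can match “invoice” to a document about “billing” even though the keywords differ.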
APIs (Application Programming Interfaces)
A method for connecting AI models to external data, software, or tools.
Example: A chatbot connected via API to a CRM can retrieve customer history in real-time.
🚨 APIs allow AI to interact with enterprise systems instead of relying only on pre-trained data.
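The pattern is simple: the AI decides it needs data, and application code fetches it and feeds it back into the prompt. A hypothetical sketch; the customer IDs, fields, and function names below are invented for illustration and do not correspond to any real CRM product:

```python
# Hypothetical mock CRM standing in for a real HTTP API call
# (e.g. something like GET /api/customers/<id>).
MOCK_CRM = {
    "cust-042": {"name": "Acme Corp", "last_order": "2024-01-15"},
}

def get_customer_history(customer_id: str) -> dict:
    """Stand-in for an API call an AI tool-call handler might make."""
    record = MOCK_CRM.get(customer_id)
    if record is None:
        raise KeyError(f"unknown customer: {customer_id}")
    return record

# The chatbot's backend would insert this result into the model's context:
print(get_customer_history("cust-042")["name"])  # Acme Corp
```

The model itself never talks to the CRM; the surrounding application does, which keeps credentials and access control out of the model.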
Plugins
Software extensions that allow AI to perform actions beyond text generation, like:
✔️ Scheduling meetings.
✔️ Running database queries.
✔️ Accessing company reports.
🚨 AI itself doesn’t “do” things—plugins allow it to trigger actions.
Fine-Tuning
The process of retraining an AI model on domain-specific data to improve its accuracy for a business.
Example: Fine-tuning an LLM with legal documents to create an AI-powered contract review tool.
🚨 Fine-tuning requires additional training but improves AI performance for niche use cases.
Prompt Engineering
The skill of writing effective instructions to control AI’s responses.
Example:
- Bad prompt: “Summarize this report.”
- Good prompt: “Summarize this report in 5 key points, focusing on risk and market trends.”
🚨 Well-crafted prompts improve AI accuracy and reduce hallucinations.
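In practice, teams bake good prompts into reusable templates rather than retyping them. A minimal sketch, reusing the specifics (5 points, risk and market trends) from the example above:

```python
# Minimal prompt template; defaults mirror the "good prompt" example.
def summarize_prompt(report: str,
                     points: int = 5,
                     focus: str = "risk and market trends") -> str:
    return (f"Summarize this report in {points} key points, "
            f"focusing on {focus}.\n\n{report}")

print(summarize_prompt("Q3 revenue rose 4% while churn increased..."))
```

Templating makes the “good prompt” the default for everyone, not just the person who discovered it.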
Hallucinations (AI Hallucinations)
When AI generates information that sounds convincing but is false or fabricated.
Example: AI inventing fake statistics or non-existent research papers.
🚨 AI doesn’t fact-check—it predicts responses based on patterns, not truth.
Structured vs. Unstructured Data
- Structured Data → Organized in a database, easy for AI to process (e.g., spreadsheets, SQL tables).
- Unstructured Data → Free-form content, harder for AI to interpret (e.g., emails, PDFs, video transcripts).
🚨 AI performs better when data is well-structured and labeled.
Vector Databases
A specialized database for storing and searching AI embeddings (numerical representations of data).
Use case: AI-powered search systems retrieve semantically relevant results instead of just keyword matches.
🚨 Vector databases improve AI memory and recall efficiency.
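Conceptually, a vector database ranks stored embeddings by similarity to a query vector and returns the top matches. A toy in-memory sketch with invented 2-dimensional vectors; real systems (e.g. FAISS, pgvector) use approximate indexes to do this at scale:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(store: dict, query: list[float], k: int = 2) -> list[str]:
    """Return the k document IDs whose vectors are closest to the query."""
    ranked = sorted(store, key=lambda doc: cosine(store[doc], query),
                    reverse=True)
    return ranked[:k]

STORE = {                      # made-up embeddings for illustration
    "contract.pdf": [0.9, 0.1],
    "menu.txt":     [0.1, 0.9],
    "nda.docx":     [0.8, 0.3],
}
print(top_k(STORE, query=[1.0, 0.2], k=2))  # ['contract.pdf', 'nda.docx']
```

The legal documents rank above the lunch menu because the query vector points in their direction, not because any keyword matched.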
Observability (AI Observability)
The practice of monitoring AI system performance, ensuring it runs efficiently and produces reliable outputs.
🚨 AI must be monitored for bias, drift, and unexpected behaviors over time.
Bias in AI
When AI models produce unfair or inaccurate outputs due to biases in the training data.
Example: An AI hiring model favoring one demographic over another due to biased training data.
🚨 Bias is an ongoing challenge—AI must be tested and audited for fairness.
GPUs (Graphics Processing Units)
Specialized processors designed for the highly parallel math AI models rely on.
✔️ Used for both training and inference.
✔️ Far faster than CPUs at large matrix operations.
✔️ Expensive, but essential for running AI models locally.
🚨 AI models require significant GPU power to function efficiently.
Cloud AI vs. On-Premises AI
- Cloud AI → Hosted by providers like OpenAI, Google, AWS (easy to start, but usage costs grow over time).
- On-Premises AI → Runs on a company’s private infrastructure (more control and data privacy, but requires buying and operating GPUs).
🚨 Enterprises must choose between cloud convenience and full control over AI deployment.
Explainability (AI Explainability)
The ability to understand and interpret how AI makes decisions.
Why it matters:
✔️ Ensures compliance with regulations (GDPR, AI Act).
✔️ Builds trust in AI-driven decision-making.
✔️ Reduces the risk of unexpected AI errors or biases.
🚨 Black-box AI models (where decisions are unclear) pose compliance and ethical risks.
AI Governance
A framework for ensuring AI is used responsibly, ethically, and legally in an organization.
✔️ Establishes internal AI policies.
✔️ Ensures compliance with global regulations.
✔️ Prevents AI risks such as bias, security flaws, or misuse.
🚨 AI governance is critical for risk management and long-term business sustainability.
This glossary provides executives with a high-level understanding of AI terminology to guide discussions with their teams, vendors, and stakeholders.