Prompt Engineering vs Context Engineering
A practical guide to crafting effective prompts and assembling rich context windows for LLM applications
Table of Contents
1. Setup & Installation
2. What is Prompt Engineering?
3. What is Context Engineering?
4. Comparison Table
5. RAG-powered Context
6. Conversation History Management
7. Tool / Function Definitions
8. Dynamic System Prompts
9. Few-shot Example Selection
10. When to Use What
1. Setup & Installation
Install the required packages for working with LLM APIs.
!pip install -q openai
import os
from openai import OpenAI
# Set your API key
# os.environ["OPENAI_API_KEY"] = "your-api-key-here"
client = OpenAI()
print("OpenAI client initialized")
2. What is Prompt Engineering?
Prompt engineering is the art of crafting the text input to an LLM to get the desired output. It focuses on what you say to the model.
Core Techniques
| Technique | Description | Example |
|---|---|---|
| Zero-shot | Direct instruction, no examples | “Translate this to French: …” |
| Few-shot | Provide examples before the task | “Q: … A: … Q: … A: … Q: [new]” |
| Chain-of-thought | Ask the model to reason step by step | “Think step by step before answering” |
| Role prompting | Assign a persona or expertise | “You are an expert data scientist…” |
| Output formatting | Specify the output structure | “Respond in JSON with keys: …” |
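The techniques in the table above can be sketched as plain string builders. The helper names (`zero_shot`, `few_shot`, and so on) are illustrative, not from any library; no API calls are involved:

```python
# Sketch: core prompting techniques as plain string builders.

def zero_shot(task):
    """Direct instruction, no examples."""
    return task

def few_shot(examples, query):
    """Prepend worked (question, answer) examples before the new query."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {query}\nA:"

def chain_of_thought(task):
    """Ask the model to reason before answering."""
    return f"{task}\nThink step by step before answering."

def role_prompt(persona, task):
    """Assign a persona or expertise."""
    return f"You are {persona}. {task}"

print(few_shot([("2+2?", "4")], "3+3?"))
```

Each builder returns a string you would send as the user message; in practice the techniques are often combined (for example, a role prompt followed by few-shot examples).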
3. What is Context Engineering?
Context engineering is the discipline of assembling and managing the entire context window — not just the prompt, but all the information the model sees.
The 7 Components of Context
| # | Component | Description |
|---|---|---|
| 1 | Instructions | System prompt defining behavior, rules, and persona |
| 2 | User Prompt | The current user query or request |
| 3 | State / History | Conversation history and session state |
| 4 | Long-term Memory | Persistent facts about the user or domain |
| 5 | Retrieved Info (RAG) | Documents fetched from a vector store or search |
| 6 | Available Tools | Function/tool definitions the model can invoke |
| 7 | Structured Output | Schema or format constraints for the response |
Context engineering treats the context window as a curated information package rather than a simple instruction string.
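As a sketch, the seven components can be folded into a single `messages` list plus a `tools` list. The `assemble_context` helper and its parameter names are hypothetical, shown only to make the packaging concrete:

```python
def assemble_context(instructions, user_prompt, history=None, memory=None,
                     retrieved=None, tools=None, output_schema=None):
    """Combine the 7 context components into one messages list (sketch)."""
    system = instructions                          # 1. instructions
    if memory:                                     # 4. long-term memory
        system += "\n\nLong-term memory:\n" + "\n".join(f"- {m}" for m in memory)
    if retrieved:                                  # 5. retrieved info (RAG)
        system += "\n\nRetrieved context:\n" + "\n".join(f"- {d}" for d in retrieved)
    if output_schema:                              # 7. structured output
        system += f"\n\nRespond as JSON matching: {output_schema}"
    messages = [{"role": "system", "content": system}]
    messages.extend(history or [])                 # 3. state / history
    messages.append({"role": "user", "content": user_prompt})  # 2. user prompt
    return messages, tools or []                   # 6. tools ride alongside messages

messages, tools = assemble_context(
    "You are a support agent.",
    "Where is my order?",
    memory=["Customer is a premium member."],
    retrieved=["Orders ship within 2 business days."],
)
```

The point is that the model never sees "a prompt"; it sees this whole assembled package.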
4. Comparison Table
| Dimension | Prompt Engineering | Context Engineering |
|---|---|---|
| Focus | Crafting the instruction text | Assembling the full context window |
| Nature | Static text optimization | Dynamic information orchestration |
| Scope | Single message or template | System prompt + history + RAG + tools + memory |
| Information | What you write | What the model sees |
| Tools | Not considered | Integrated as callable functions |
| State | Stateless | Manages conversation and session state |
| Complexity | Low to medium | Medium to high |
| Best for | Simple tasks, prototyping | Production agents, complex workflows |
5. RAG-powered Context
Retrieval-Augmented Generation fills the context window with relevant documents from a vector store, giving the model access to up-to-date or domain-specific knowledge.
# Pseudocode: RAG-powered context assembly
# In production, replace with your actual vector store (Chroma, Pinecone, FAISS, etc.)
class MockVectorStore:
"""Simulates a vector store for demonstration."""
def __init__(self):
self.documents = [
"Our return policy allows returns within 30 days of purchase.",
"Shipping is free for orders over $50 within the continental US.",
"Premium members get 20% discount on all products.",
]
    def similarity_search(self, query, k=3):
        """Return top-k documents (mock: ignores the query and returns the first k)."""
        return self.documents[:k]
vector_store = MockVectorStore()
def build_rag_messages(user_query):
"""Assemble messages with RAG context."""
# Retrieve relevant documents
retrieved_docs = vector_store.similarity_search(user_query, k=3)
context = "\n".join(f"- {doc}" for doc in retrieved_docs)
messages = [
{
"role": "system",
"content": (
"You are a helpful customer support agent. "
"Answer based ONLY on the provided context. "
"If the answer is not in the context, say so.\n\n"
f"Context:\n{context}"
),
},
{"role": "user", "content": user_query},
]
return messages
# Example
messages = build_rag_messages("What is your return policy?")
for msg in messages:
    print(f"[{msg['role']}]\n{msg['content']}\n")
6. Conversation History Management
Effective context engineering manages conversation history to stay within token limits, using truncation or summarization.
def manage_conversation_history(history, max_messages=10, summarize_after=8):
"""Manage conversation history with summarization."""
if len(history) <= max_messages:
return history
# Keep system message + summarize old messages + keep recent
system_msg = history[0] if history[0]["role"] == "system" else None
old_messages = history[1:summarize_after] if system_msg else history[:summarize_after]
recent_messages = history[summarize_after:]
    # Crude summary: truncate each old message (in production, use an LLM to summarize)
old_text = "\n".join(f"{m['role']}: {m['content'][:100]}" for m in old_messages)
summary = {
"role": "system",
"content": f"Summary of earlier conversation:\n{old_text}",
}
managed = []
if system_msg:
managed.append(system_msg)
managed.append(summary)
managed.extend(recent_messages)
return managed
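Summarization can be paired with a hard token budget. A minimal sketch follows, using a rough 4-characters-per-token estimate; a real system should count tokens with an actual tokenizer such as tiktoken:

```python
def estimate_tokens(text):
    """Very rough estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def trim_to_budget(history, max_tokens=500):
    """Drop the oldest non-system messages until the history fits the budget."""
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    while rest and sum(estimate_tokens(m["content"]) for m in system + rest) > max_tokens:
        rest.pop(0)  # oldest message goes first
    return system + rest
```

This keeps the system prompt pinned while the conversational tail slides, which is the usual trade-off when summarization is too slow or expensive to run on every turn.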
# Example
history = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is Python?"},
{"role": "assistant", "content": "Python is a programming language."},
{"role": "user", "content": "What are its key features?"},
{"role": "assistant", "content": "Key features include readability and versatility."},
{"role": "user", "content": "How about performance?"},
{"role": "assistant", "content": "Python prioritizes readability over raw speed."},
{"role": "user", "content": "What about type hints?"},
{"role": "assistant", "content": "Python supports optional type hints since 3.5."},
{"role": "user", "content": "And async programming?"},
{"role": "assistant", "content": "Python has async/await since 3.5."},
{"role": "user", "content": "Tell me about decorators."},
]
managed = manage_conversation_history(history, max_messages=8, summarize_after=5)
print(f"Original: {len(history)} messages -> Managed: {len(managed)} messages\n")
for msg in managed:
    print(f"[{msg['role']}] {msg['content'][:80]}...")
7. Tool / Function Definitions
Tools let the model take actions — search, query databases, call APIs. They are a core part of the context window in agentic applications.
import json
# Define tools that the model can call
tools = [
{
"type": "function",
"function": {
"name": "search_docs",
"description": "Search the knowledge base for relevant documents.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query to find relevant documents",
},
"top_k": {
"type": "integer",
"description": "Number of results to return (default: 5)",
"default": 5,
},
},
"required": ["query"],
},
},
},
]
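Once the model emits a tool call, the application must execute it and feed the result back. A minimal dispatch sketch; the `search_docs` implementation and `TOOL_REGISTRY` here are stand-ins, not part of any SDK:

```python
import json

def search_docs(query, top_k=5):
    """Stand-in implementation; replace with a real knowledge-base search."""
    return [f"doc about {query}"][:top_k]

# Map tool names (as declared to the model) to local callables
TOOL_REGISTRY = {"search_docs": search_docs}

def execute_tool_call(name, arguments_json):
    """Look up the tool by name and call it with the model-supplied JSON arguments."""
    func = TOOL_REGISTRY[name]
    args = json.loads(arguments_json)
    return func(**args)

result = execute_tool_call("search_docs", '{"query": "return policy", "top_k": 1}')
```

The result is then appended to the conversation as a tool message so the model can compose its final answer.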
print("Tool definitions:")
print(json.dumps(tools, indent=2))
8. Dynamic System Prompts
Instead of one-size-fits-all system prompts, context engineering builds task-specific and user-aware prompts dynamically.
def build_system_prompt(task_type, user_profile=None):
"""Build a dynamic system prompt based on task and user context."""
base = "You are a helpful AI assistant."
# Task-specific instructions
task_instructions = {
"code_review": (
"You are a senior software engineer doing code review. "
"Focus on bugs, security issues, and performance. "
"Be constructive and provide specific suggestions."
),
"customer_support": (
"You are a friendly customer support agent. "
"Be empathetic, solution-oriented, and concise. "
"Always offer to escalate if you cannot resolve the issue."
),
"data_analysis": (
"You are a data analyst. "
"Provide insights backed by data. "
"Use tables and bullet points for clarity."
),
}
prompt = task_instructions.get(task_type, base)
# Add user-specific context
if user_profile:
        prompt += "\n\nUser context:\n"
prompt += f"- Name: {user_profile.get('name', 'Unknown')}\n"
prompt += f"- Role: {user_profile.get('role', 'User')}\n"
prompt += f"- Expertise: {user_profile.get('expertise', 'General')}\n"
return prompt
# Examples
for task in ["code_review", "customer_support", "data_analysis"]:
prompt = build_system_prompt(
task, user_profile={"name": "Alice", "role": "Developer", "expertise": "Python"}
)
print(f"--- {task} ---")
print(prompt)
    print()
9. Few-shot Example Selection
Instead of hard-coding few-shot examples, dynamically select the most relevant examples based on the current query.
# Dynamic few-shot example selection
EXAMPLE_BANK = [
{
"category": "sentiment",
"input": "This product is amazing! Best purchase ever.",
"output": "Positive",
},
{
"category": "sentiment",
"input": "Terrible quality, broke after one day.",
"output": "Negative",
},
{
"category": "classification",
"input": "How do I reset my password?",
"output": "Account Management",
},
{
"category": "classification",
"input": "My order hasn't arrived yet.",
"output": "Shipping & Delivery",
},
{
"category": "extraction",
"input": "John Smith works at Acme Corp as a Senior Engineer.",
"output": '{"name": "John Smith", "company": "Acme Corp", "title": "Senior Engineer"}',
},
]
def select_examples(query, category, k=2):
    """Select relevant few-shot examples.

    In production, rank by embedding similarity to the query instead of exact category matching.
    """
matching = [ex for ex in EXAMPLE_BANK if ex["category"] == category]
return matching[:k]
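The docstring above defers to similarity search; as a dependency-free stand-in, word overlap can rank examples (a real system would compare embeddings). The helper name is illustrative:

```python
def select_examples_by_overlap(examples, query, k=2):
    """Rank examples by shared-word count with the query (crude similarity stand-in)."""
    q_words = set(query.lower().split())
    return sorted(
        examples,
        key=lambda ex: len(q_words & set(ex["input"].lower().split())),
        reverse=True,
    )[:k]
```

Swapping this scoring function for cosine similarity over embeddings changes nothing else in the pipeline, which is why example selection is usually built behind an interface like this.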
def build_few_shot_messages(query, category):
"""Build messages with dynamically selected few-shot examples."""
examples = select_examples(query, category)
messages = [{"role": "system", "content": f"You are a {category} specialist."}]
# Add examples as conversation history
for ex in examples:
messages.append({"role": "user", "content": ex["input"]})
messages.append({"role": "assistant", "content": ex["output"]})
# Add the actual query
messages.append({"role": "user", "content": query})
return messages
# Example
messages = build_few_shot_messages("The movie was okay, nothing special.", "sentiment")
for msg in messages:
    print(f"[{msg['role']}] {msg['content']}")
10. When to Use What
Decision Guide
| Scenario | Approach | Why |
|---|---|---|
| Simple Q&A chatbot | Prompt Engineering | A well-crafted system prompt is sufficient |
| Customer support with knowledge base | Context Engineering | Needs RAG, history management, tools |
| Code generation from spec | Prompt Engineering | Focus on clear instructions and examples |
| Multi-step research agent | Context Engineering | Requires tools, memory, dynamic context |
| Sentiment analysis | Prompt Engineering | Few-shot examples and output format |
| Production chatbot with personalization | Context Engineering | User profiles, memory, dynamic prompts |
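The decision guide boils down to a rule of thumb, sketched here with hypothetical flags: any need for retrieval, tools, memory, or multi-step state pushes toward context engineering.

```python
def recommended_approach(needs_rag=False, needs_tools=False,
                         needs_memory=False, multi_step=False):
    """Sketch of the decision guide above: dynamic-context needs imply context engineering."""
    if needs_rag or needs_tools or needs_memory or multi_step:
        return "Context Engineering"
    return "Prompt Engineering"

recommended_approach(needs_rag=True)  # customer support with a knowledge base
recommended_approach()                # simple Q&A chatbot
```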
Key Takeaways
- Prompt engineering is a subset of context engineering — start here
- Context engineering becomes essential when building agents and production systems
- The best LLM applications combine both: well-crafted prompts within a well-engineered context
- Think of the context window as an information architecture problem, not just a text input