
Why Context Matters

Context is the foundation of agent intelligence. The quality and structure of the information you provide directly impact:
  • Decision accuracy - Better context = better decisions
  • Token efficiency - Optimized context = lower costs
  • Response speed - Focused context = faster processing

Context Feeding Strategies

Progressive Disclosure

Start with minimal context and expand only when needed:
# Level 1: Task Context (always include)
- Current objective
- Immediate constraints
- Expected output format

# Level 2: Domain Context (include when relevant)
- Project conventions
- Team standards
- Historical decisions

# Level 3: Deep Context (include on-demand)
- Full codebase structure
- Complete documentation
- External references
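
A minimal TypeScript sketch of this pattern; the ContextRequest shape and the entry names are hypothetical, and the point is simply that Levels 2 and 3 are added only when the task calls for them:

// Hypothetical sketch: assemble context progressively; only Level 1 is unconditional.
interface ContextRequest {
  needsDomainContext: boolean;  // task touches project conventions or standards
  needsDeepContext: boolean;    // task needs whole-codebase awareness
}

function buildContext(req: ContextRequest): string[] {
  // Level 1: task context is always included.
  const context = ["objective", "constraints", "output format"];

  // Level 2: domain context only when the task is relevant to it.
  if (req.needsDomainContext) {
    context.push("project conventions", "team standards", "historical decisions");
  }

  // Level 3: deep context strictly on demand.
  if (req.needsDeepContext) {
    context.push("codebase structure", "full documentation", "external references");
  }

  return context;
}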

Structured Context Files

Use project instruction files for persistent context:
project/
├── CLAUDE.md           # Project-level context
├── src/
│   └── CLAUDE.md       # Module-specific context
└── .agents/
    └── memory/         # Dynamic agent memory
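
For illustration, a project-level CLAUDE.md might look something like this; the sections and commands are hypothetical, not a required format:

# CLAUDE.md (project root)

## Conventions
- TypeScript strict mode; avoid `any`
- Tests live beside source files as *.test.ts

## Commands
- Build: npm run build
- Test: npm run test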

Context Hierarchy

  1. System context - Model capabilities, safety guidelines
  2. User context - Personal preferences (global CLAUDE.md)
  3. Project context - Codebase rules (project CLAUDE.md)
  4. Task context - Current objective, constraints
  5. Dynamic context - Retrieved knowledge, memory queries
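
One way to picture the hierarchy is as an ordered set of layers joined into a single prompt, with the more specific layers placed last so they can refine the broader ones. A rough TypeScript sketch, where each layer's content is only a placeholder string:

// Sketch only: placeholder contents stand in for the real layers.
const layers = [
  { name: "system",  content: "Model capabilities and safety guidelines" },
  { name: "user",    content: "Personal preferences from the global CLAUDE.md" },
  { name: "project", content: "Codebase rules from the project CLAUDE.md" },
  { name: "task",    content: "Current objective and constraints" },
  { name: "dynamic", content: "Retrieved knowledge and memory query results" },
];

// General layers first, specific layers last, so later context refines earlier context.
const prompt = layers.map((l) => `## ${l.name}\n${l.content}`).join("\n\n");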

Optimization Techniques

Minimize Redundant Reads

# Bad: Reading the same file multiple times
Read file.ts
# ... do something ...
Read file.ts  # Redundant!

# Good: Read once, reference in context
Read file.ts
# Continue working with the content already in context

Use Targeted Queries

# Bad: Read entire codebase
Read **/*.ts

# Good: Query specific patterns
Grep "export function" --type ts
Glob "src/api/**/*.ts"

Leverage Memory

# Check existing knowledge before researching
squads memory query "authentication patterns"

# Update memory after discoveries
squads memory update engineering

Context Window Management

Monitor Usage

The context window has limits. Track usage:
Model           Context Window   Recommended Max
Claude Sonnet   200K tokens      ~150K tokens
Claude Opus     200K tokens      ~150K tokens
Claude Haiku    200K tokens      ~150K tokens
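
A small sketch of tracking usage against the recommended ceiling; the four-characters-per-token ratio is only a rough heuristic, not an exact tokenizer:

// Rough heuristic: ~4 characters per token. Real usage should come from the model's own counts.
const CONTEXT_WINDOW = 200_000;
const RECOMMENDED_MAX = 150_000;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function checkBudget(contextText: string): void {
  const used = estimateTokens(contextText);
  if (used > RECOMMENDED_MAX) {
    console.warn(`~${used} tokens in use: summarize, prune, or externalize.`);
  } else {
    console.log(`~${used} of ${CONTEXT_WINDOW} tokens in use.`);
  }
}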

When Context Gets Large

  1. Summarize - Compress verbose content
  2. Prune - Remove irrelevant sections
  3. Externalize - Store in memory, reference by key (see the sketch after this list)
  4. Delegate - Spawn sub-agents with focused context
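
For step 3, a rough sketch of externalizing verbose content: store the full text under a key and keep only a short summary plus that key in the active context. The Map here is a stand-in for whatever memory store the agent actually uses:

// Hypothetical sketch: the Map stands in for the agent's memory store.
const externalStore = new Map<string, string>();

function externalize(key: string, fullContent: string, summary: string): string {
  externalStore.set(key, fullContent);  // verbose content leaves the window
  return `${summary} (details under memory key "${key}")`;  // short reference stays
}

// Usage: replace pages of notes with a single line.
const notes = "…long research notes gathered while exploring the auth flow…";
console.log(externalize("auth-flow-notes", notes, "Auth flow documented."));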

Anti-Patterns

Avoid these common mistakes:
  • Loading entire repositories into context
  • Repeating instructions already in CLAUDE.md
  • Including raw API responses without filtering (see the sketch after this list)
  • Keeping stale context from previous tasks
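
On the third point, a small sketch of projecting a raw API response down to the fields the task actually needs before it enters context; the response shape and field names are illustrative only:

// Illustrative only: keep the few fields the task reasons about, drop the rest.
interface RawUserResponse {
  id: string;
  name: string;
  email: string;
  metadata: Record<string, unknown>;  // large blob that rarely matters
}

function toContextSummary(raw: RawUserResponse): string {
  return `user ${raw.id}: ${raw.name} <${raw.email}>`;
}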

Best Practices Checklist

  • Use CLAUDE.md for persistent project context
  • Query memory before starting research
  • Read files once, not repeatedly
  • Use Glob/Grep for targeted searches
  • Prune context when switching tasks
  • Update memory after significant discoveries