
Claude Certified Architect — Foundations

Exam Prep Dashboard  ·  301-Level  ·  Launched March 12, 2026  ·  Official Exam Guide v0.1 + Architect's Playbook

60 MCQ Questions  ·  720 Passing Score  ·  5 Domains  ·  6 Scenarios  ·  28 Task Statements  ·  $99 Exam Fee

About the Exam

Format & Scoring

  • 301-level exam — 60 MCQ, 4 options each, 120 minutes
  • Scaled score 100–1000  ·  Passing: 720
  • 4 of 6 scenarios randomly selected per sitting
  • All questions grounded in real-world production scenario contexts
  • Launched March 12, 2026  ·  Exam Guide v0.1  ·  $99 USD
  • Strictly proctored — no Claude, no docs, no external tools allowed

Target Candidate & Prerequisites

Architects and senior engineers who design, implement, and deploy production AI systems using the Claude API, Claude Code, and MCP integrations. Expected: 6+ months hands-on Claude API & Claude Code experience, Python familiarity, and real project work.

  • Experienced AI developers: 2–4 weeks to prepare
  • Developers new to Claude: 2–4 months (build real projects first)

🔥 What Makes This Exam Unlike Any Other

This is not a prompting literacy badge. It doesn't care how many prompts you've written. Every question drops you into a real production system and asks you to make the right architectural call.

  • Scenario-anchored: you answer as the architect of a specific production system, not in the abstract
  • Anti-patterns as distractors: wrong answers are architectural mistakes engineers commonly reach for. Knowing what NOT to do is half the exam.
  • Community verdict: "The depth on agentic architecture, MCP tool integration, and multi-agent orchestration is no joke. This isn't a watch-a-tutorial-and-pass certification."

The Core Exam Principle

"Code guarantees. Prompts suggest."

Programmatic enforcement — app-layer intercepts, schema-validated tool_use, structured error handling — always trumps prompt-based guidance for reliability and compliance. This principle underlies the answer to most hard questions.

Domain Weightings

  • D1 · Agentic Architecture & Orchestration: 27%
  • D3 · Claude Code Configuration & Workflows: 20%
  • D4 · Prompt Engineering & Structured Output: 20%
  • D2 · Tool Design & MCP Integration: 18%
  • D5 · Context Management & Reliability: 15%

Official Prep Steps

  1. Review the official exam guide thoroughly
  2. Build a working agentic loop from scratch
  3. Create an MCP server with all three primitives
  4. Configure Claude Code with CLAUDE.md & rules
  5. Practice structured output with tool_use schemas
  6. Study all Architect's Playbook patterns
  7. Complete all 4 official preparation exercises
  8. Review the 12 sample questions & explanations

⚡ Critical Implementation Rules (high-probability exam traps)

Tool Count Rule

Give each agent a maximum of 4–5 tools. With 18+ tools, Claude's tool selection reliability degrades noticeably. Scope tools tightly to each agent's specific role.

Hub-and-Spoke Architecture

The canonical multi-agent pattern: a central coordinator + specialized subagents. The hub routes tasks, subagents return structured results. Never let subagents talk to each other directly.

Token Economics

Isolate context for each subagent — pass only what's needed for that task. Shared context leakage across subagents is the #1 cause of token bloat in multi-agent systems.

Confidence Calibration

Assign a confidence score to outputs and define threshold triggers for human escalation. Don't rely on Claude self-reporting uncertainty — measure it structurally.

Validation Retry Loop

When structured output fails schema validation, feed the error back to Claude with the validation message as a new user turn. One retry with explicit error context succeeds far more often than a blank retry.
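The retry loop above can be sketched with the model call abstracted behind a `generate` callable; the function name and the simple key-presence validation are illustrative assumptions, not the exam's canonical implementation:

```python
import json

def extract_with_retry(generate, required_keys, max_retries=1):
    """Expect JSON from `generate(messages)`; on validation failure,
    feed the error back to the model as a new user turn and retry."""
    messages = [{"role": "user", "content": "Extract the fields as JSON."}]
    for attempt in range(max_retries + 1):
        raw = generate(messages)
        try:
            data = json.loads(raw)
            missing = [k for k in required_keys if k not in data]
            if missing:
                raise ValueError(f"missing required fields: {missing}")
            return data
        except (json.JSONDecodeError, ValueError) as err:
            # The explicit error context is what makes the retry effective
            messages.append({"role": "assistant", "content": raw})
            messages.append({"role": "user",
                             "content": f"Your output failed validation: {err}. "
                                        "Return corrected JSON only."})
    raise RuntimeError("validation failed after retry")
```

A stub generator that fails once and then returns valid JSON exercises exactly the one-retry-with-context path the rule describes.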

Exam Is Strictly Proctored

No Claude, no docs, no external tools. Everything from memory. Build real implementations — rote reading alone won't cut it.

Subagent Context Isolation

Subagents have isolated context — they do NOT inherit the coordinator's conversation history automatically. Pass only what's needed for that specific goal. This is one of the most commonly missed facts on the exam.

Same-Session Self-Review = Anti-Pattern

Never use the same Claude session to review its own work. Use a separate session (fork_session) for independent review. The same session has anchoring bias toward its own output.

PostToolUse Normalizes Formats

PostToolUse hooks intercept tool results before the model processes them. Use to normalize heterogeneous formats (timestamps, status codes, currency) from different MCP tools into a consistent schema.

The 6 Exam Scenarios

Four of these six are randomly selected per sitting. Every exam question is anchored to one of these real-world contexts.

01 · Customer Support Resolution Agent

Agentic loop, tool use, escalation decisions, and human handoff patterns.

02 · Code Generation with Claude Code

Claude Code CLI, CLAUDE.md config, custom commands, CI integration.

03 · Multi-Agent Research System

Orchestrator + specialist subagents, context handoff, shared memory.

04 · Developer Productivity with Claude

MCP server design, tool interfaces, and developer workflow optimization.

05 · Claude Code for CI

CI/CD integration, headless mode (--print), JSON output, schema validation.

06 · Structured Data Extraction

tool_use for schema enforcement, batch processing, resilient schemas.

In-Scope Topics

  • Agentic loop implementation & stop_reason handling
  • Multi-agent orchestration patterns
  • Subagent context management & handoff
  • Tool interface design & JSON schema authoring
  • MCP tools, resources & prompts primitives
  • MCP server config (.mcp.json / ~/.claude.json)
  • Error handling, isError flag, retry strategies
  • Escalation & human-in-the-loop decision making
  • CLAUDE.md configuration hierarchy
  • Custom commands & skills in Claude Code
  • Plan mode vs direct execution
  • Structured output via tool_use
  • Few-shot prompting & prompt engineering
  • Batch processing (Message Batches API)
  • Context window optimization
  • Information provenance & human review workflows

Out-of-Scope Topics

  • Fine-tuning or model training
  • API authentication & billing details
  • Infrastructure deployment & cloud provider configs
  • Constitutional AI internals
  • Embedding models
  • Computer use & vision APIs
  • Streaming & SSE implementation details
  • Rate limiting strategies
  • OAuth flows
  • Prompt caching implementation details

Domain 1 · Agentic Architecture & Orchestration 27%

1.1 Design and implement agentic loops using the Claude API

Knowledge of
  • stop_reason: "tool_use" → execute tools and continue; "end_turn" → exit loop
  • Agentic loop structure: send → check stop_reason → execute tool → append result → repeat
  • Message array construction for multi-turn tool conversations
  • tool_use_id must exactly match between the tool_use block and the tool_result response
Skills in
  • Implementing a while-loop that runs until stop_reason == "end_turn"
  • Parsing tool_use blocks from Claude's response and routing to handlers
  • Appending tool_result messages with the correct tool_use_id
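The loop structure above can be sketched with the transport injected as a `send` callable, so the send → check stop_reason → execute tool → append result → repeat shape is visible without API plumbing. The dict shapes mirror the Messages API response format; `send` and `handlers` are assumptions of this sketch:

```python
def run_agentic_loop(send, handlers, messages):
    """Run until stop_reason == "end_turn"; on "tool_use", execute each
    tool block and append a tool_result with the matching tool_use_id."""
    while True:
        response = send(messages)
        if response["stop_reason"] == "end_turn":
            return response
        # stop_reason == "tool_use": echo the assistant turn, then run tools
        messages.append({"role": "assistant", "content": response["content"]})
        results = []
        for block in response["content"]:
            if block["type"] == "tool_use":
                output = handlers[block["name"]](block["input"])
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block["id"],  # must match the tool_use block
                    "content": output,
                })
        messages.append({"role": "user", "content": results})
```

Note that tool results go back as a user-role message, and the tool_use_id is copied verbatim from the corresponding tool_use block.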

1.2 Implement multi-agent orchestration patterns

Knowledge of
  • Orchestrator–subagent: parent delegates goals, children execute and return structured results
  • Task tool in Claude Agent SDK for spawning subagents
  • fork_session for parallel subagent execution
  • AgentDefinition for reusable agent configurations
Skills in
  • Designing orchestration hierarchies with clear responsibility boundaries
  • Passing structured context payloads from orchestrator to subagents
  • Preventing context explosion via Goal-Oriented Delegation (goals, not implementation details)

1.3 Design lifecycle hooks for agentic workflows

Knowledge of
  • PostToolUse hook: fires after every tool execution — use for logging and audit
  • PreToolUse hook: intercepts before execution — use for compliance enforcement
Skills in
  • Implementing PostToolUse for audit trails and observability
  • Using PreToolUse to block prohibited actions at the application layer

1.4 Implement escalation and human-in-the-loop decision patterns

Knowledge of
  • Escalation triggers: irreversible actions, ambiguous intent, compliance boundaries, low confidence
  • Structured escalation: include context, reason, and recommended next action
  • Human review workflows: approval queues, audit logs, override mechanisms
Skills in
  • Designing escalation triggers based on action type and risk level
  • Surfacing actionable context to human reviewers
  • Calibrating automation vs human oversight for different scenarios

1.5 Design error handling and retry strategies

Knowledge of
  • Error categories: transient (retryable) vs permanent (non-retryable)
  • Exponential backoff with jitter for transient errors — prevents retry storms
  • Maximum retry limits to prevent infinite loops
  • Limits of automated retry — when to escalate to humans
Skills in
  • Implementing retry logic with exponential backoff and max bounds
  • Classifying errors and propagating structured error context through agent chains
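One way to sketch the transient/permanent split with full-jitter exponential backoff; the `classify` predicate that decides retryability is an assumption of this sketch:

```python
import random
import time

def retry_with_backoff(operation, classify, max_retries=4, base_delay=0.5):
    """Retry transient failures with exponential backoff plus jitter;
    re-raise immediately on permanent errors or an exhausted budget."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception as err:
            if not classify(err) or attempt == max_retries:
                raise  # permanent error, or retry budget exhausted
            # Full jitter: sleep uniformly in [0, base * 2^attempt]
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

The jitter spreads simultaneous retries apart, which is what prevents the retry storms the knowledge item warns about.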

1.6 Manage subagent context and information handoff

Knowledge of
  • Context window limits and their impact on multi-agent systems
  • Information provenance: tracking where facts originated
Skills in
  • Designing minimal, structured context payloads for subagents
  • Summarizing completed subtask results before injecting into parent context

1.7 Apply allowedTools and AgentDefinition

Knowledge of
  • allowedTools: array of tool names the agent is permitted to call (principle of least privilege)
  • AgentDefinition: reusable config object — system prompt, tools, model
Skills in
  • Scoping agent capabilities via AgentDefinition
  • Restricting subagent tool access to only what's necessary

1.8 Design compliance enforcement in agentic systems

Knowledge of
  • Zero-Tolerance Compliance Pattern: application-layer intercepts over prompt instructions
  • "Code guarantees. Prompts suggest." — programmatic vs prompt-based enforcement
Skills in
  • Implementing pre/post execution intercepts for deterministic compliance
  • Designing allow/deny lists enforced at the application layer
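A minimal app-layer deny-list intercept in the spirit of "Code guarantees. Prompts suggest."; the tool names are hypothetical:

```python
DENY_TOOLS = {"delete_account", "wire_transfer"}  # hypothetical deny list

def pre_tool_use_check(tool_name, tool_input):
    """Runs before every tool execution. A deny decision here is
    deterministic -- it cannot be forgotten or bypassed like a prompt rule."""
    if tool_name in DENY_TOOLS:
        return False, f"blocked by policy: '{tool_name}' is deny-listed"
    return True, None
```

The application calls this before dispatching any tool_use block and substitutes a structured refusal for blocked calls.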

1.9 Implement asynchronous and resumable sessions

Knowledge of
  • --resume flag in Claude Code for continuing paused sessions
  • Filtering stale tool_results when resuming async sessions
Skills in
  • Designing sessions that can be safely paused and resumed
  • Identifying and removing stale tool results on resume

Domain 2 · Tool Design & MCP Integration 18%

2.1 Design MCP tool interfaces with JSON schemas

Knowledge of
  • MCP primitives: Tools (actions), Resources (data), Prompts (templates)
  • Tool definition: name, description, inputSchema (JSON Schema)
  • isError flag — signals error while keeping the agentic loop alive
Skills in
  • Writing precise JSON Schema for tool input parameters
  • Crafting tool descriptions that guide Claude's selection
  • Implementing isError: true responses for recoverable failures
  • Knowing when to use Tools vs Resources vs Prompts

2.2 Configure MCP server connections

Knowledge of
  • .mcp.json: project-level MCP server config (committed to repo)
  • ~/.claude.json: user-level MCP server config (personal tools)
Skills in
  • Writing .mcp.json with correct server definitions and env var references
  • Choosing project-level vs user-level config based on audience
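A sketch of a project-level .mcp.json; the server name, command, and env var are illustrative, and the ${VAR} reference shows environment-variable indirection rather than hard-coding secrets into a committed file:

```json
{
  "mcpServers": {
    "ticket-db": {
      "command": "python",
      "args": ["servers/ticket_db.py"],
      "env": { "DB_URL": "${TICKET_DB_URL}" }
    }
  }
}
```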

2.3 Implement structured error handling in MCP tools

Knowledge of
  • isError: true keeps the loop alive; the exception/throw pattern terminates it
  • errorCategory for classifying error types; isRetryable for retry guidance
Skills in
  • Returning structured error responses with classification metadata Claude can act on
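A sketch of a loop-preserving structured error result; the errorCategory and isRetryable field names follow the knowledge items above, while the helper function itself is illustrative:

```python
import json

def tool_error(message, category, retryable):
    """Build an MCP-style error result: isError keeps the agentic loop
    alive, and the classification metadata lets Claude decide what to do."""
    return {
        "isError": True,
        "content": [{"type": "text", "text": json.dumps({
            "error": message,
            "errorCategory": category,   # e.g. "transient", "not_found"
            "isRetryable": retryable,
        })}],
    }
```

Contrast with raising an exception, which terminates the loop instead of letting the model recover.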

2.4 Control tool invocation with tool_choice

Knowledge of
  • "auto" — Claude decides; "any" — must use some tool; {type:"tool",name:"X"} — forces specific tool
Skills in
  • Forcing tool execution order via sequential tool_choice per step
  • Using "any" to guarantee structured output via tool_use
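The forced-tool mode can be sketched as a plain request payload for the guaranteed-structured-output pattern; the tool name `record_invoice` and the model id are illustrative:

```python
# Request payload (plain dict) that forces Claude to respond via a single
# extraction tool, guaranteeing output that matches the tool's schema.
extraction_request = {
    "model": "claude-sonnet-4-5",  # illustrative model id
    "max_tokens": 1024,
    "tools": [{
        "name": "record_invoice",
        "description": "Record the extracted invoice fields.",
        "input_schema": {
            "type": "object",
            "properties": {
                "vendor": {"type": "string"},
                "total": {"type": "number"},
            },
            "required": ["vendor", "total"],
        },
    }],
    # "auto" lets Claude decide, "any" requires some tool;
    # forcing by name guarantees this exact schema is used
    "tool_choice": {"type": "tool", "name": "record_invoice"},
    "messages": [{"role": "user", "content": "Extract: ACME invoice, total $42.50"}],
}
```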

2.5 & 2.6 Design MCP Resources and Prompts

Knowledge of
  • Resources: read-only URI-based data sources (knowledge bases, files, structured data)
  • Prompts: parameterized reusable templates for standardized workflows
Skills in
  • Choosing resource vs tool for different data access patterns
  • Designing reusable prompt templates with parameterization

Domain 3 · Claude Code Configuration & Workflows 20%

3.1 Configure CLAUDE.md hierarchy

Knowledge of
  • User-level: ~/.claude/CLAUDE.md — applies to all projects
  • Project-level: .claude/CLAUDE.md — applies to the project (committed to repo)
  • Directory-level: subdirectory CLAUDE.md — overrides everything above
  • Precedence: more specific (directory) overrides less specific (user)
Skills in
  • Writing effective CLAUDE.md files with architecture context and conventions
  • Using /memory to view and manage active context

3.2 Create custom commands in Claude Code

Knowledge of
  • Project commands: .claude/commands/*.md; User commands: ~/.claude/commands/*.md
  • Commands are Markdown files; use $ARGUMENTS placeholder for parameters
Skills in
  • Writing command files for repetitive workflows (e.g., /review, /deploy)
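A hypothetical project command file showing the $ARGUMENTS placeholder; saved as .claude/commands/review.md, it would be invoked as /review followed by its arguments:

```markdown
Review $ARGUMENTS for correctness, security issues, and adherence to the
conventions in CLAUDE.md. Return findings as a prioritized list with
file and line references.
```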

3.3 Configure .claude/rules/ for context-aware automation

Knowledge of
  • Rules: YAML frontmatter with glob patterns — activate automatically when matching files are in context
  • More targeted than CLAUDE.md for directory- or file-type-specific behaviors
Skills in
  • Writing rules with glob patterns for specific file types or directories

3.4 Implement Claude Code skills with SKILL.md

Knowledge of
  • Skills: .claude/skills/ directory with SKILL.md files
  • Frontmatter fields: context: fork (isolated session) or context: current (shared session)
  • allowed-tools and argument-hint frontmatter fields
Skills in
  • Choosing fork vs current context based on isolation needs

3.5 Use Claude Code CLI flags for CI/CD

Knowledge of
  • -p / --print: headless mode — single response, no interactive UI
  • --output-format json: structured JSON output for scripting
  • --json-schema: validate output against a JSON schema
Skills in
  • Scripting Claude Code invocations inside CI pipelines

3.6 Apply plan mode and iterative refinement

Knowledge of
  • Plan mode: Claude proposes plan, waits for approval before executing — use for risky ops
  • /compact: compresses conversation history to extend effective session length
Skills in
  • Choosing plan mode for complex or irreversible operations
  • Using /compact strategically at milestone boundaries

Domain 4 · Prompt Engineering & Structured Output 20%

4.1 Design prompts that produce reliable structured output

Knowledge of
  • tool_use for schema-enforced structured output — the strongest guarantee
  • JSON Schema for defining output structure in tool definitions
Skills in
  • Defining tool schemas that map exactly to the desired output structure
  • Using the tool_use extraction pattern to guarantee schema compliance

4.2 Apply advanced prompting techniques

Knowledge of
  • Chain-of-thought: explicit step-by-step reasoning before final answer
  • Scratchpad Pattern: internal reasoning separate from final output (<thinking> tags)
  • Structured Intermediate Representations: typed objects between pipeline stages
Skills in
  • Separating internal reasoning from output to keep responses clean and structured

4.3 Design resilient output schemas

Knowledge of
  • Resilient schema: always add "other" enum value + detail field for unclassified inputs
  • Mathematical consistency checks: calculated_total vs stated_total
  • Explicit null vs absent vs empty string handling and normalization rules
Skills in
  • Designing schemas that handle unexpected input without failing
  • Adding consistency validation fields to catch extraction errors
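An illustrative input_schema combining the "other" escape hatch with the consistency-check fields described above; the category values are assumptions for the sake of example:

```python
# Strict enums are brittle: any novel category breaks validation. Adding
# "other" plus a detail field keeps the schema total and makes new types
# observable instead of silently failing.
category_schema = {
    "type": "object",
    "properties": {
        "category": {
            "type": "string",
            "enum": ["invoice", "receipt", "statement", "other"],
        },
        "category_detail": {
            "type": "string",
            "description": "Required when category is 'other': what was seen.",
        },
        "stated_total": {"type": ["number", "null"]},
        "calculated_total": {
            "type": ["number", "null"],
            "description": "Sum of line items, for consistency checking.",
        },
    },
    "required": ["category"],
}
```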

4.4 Implement batch processing with Message Batches API

Knowledge of
  • Message Batches API: 50% cost savings for large-scale non-interactive workloads
  • 24-hour processing window  ·  custom_id field for matching results to inputs
  • Limitation: no multi-turn tool calling in batch mode
Skills in
  • Constructing batch payloads with custom_id and polling for results
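Constructing a batch payload can be sketched as plain data; the request shape (custom_id plus a params object per request) follows the Message Batches API, while the document ids and model id are illustrative:

```python
# Each request carries a custom_id that is echoed back in its result,
# which is how outputs are matched to inputs after the async batch runs.
documents = {
    "doc-001": "Invoice from ACME Corp...",
    "doc-002": "Receipt from Widget Inc...",
}

batch_requests = [
    {
        "custom_id": doc_id,  # echoed back in the result for matching
        "params": {
            "model": "claude-sonnet-4-5",  # illustrative model id
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": f"Extract fields:\n{text}"}],
        },
    }
    for doc_id, text in documents.items()
]
```

This list would be submitted via the batches endpoint and polled until processing completes; remember that batch mode does not support multi-turn tool calling.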

Domain 5 · Context Management & Reliability 15%

5.1 Manage context window constraints

Knowledge of
  • Tool Context Pruning: replace verbose tool outputs with compact summaries
  • Compressing Long Sessions: summarize at natural context boundaries
  • /compact command for mid-session compression
Skills in
  • Identifying when context compression is needed and applying it proactively

5.2 Design shared memory for multi-agent systems

Knowledge of
  • Shared Memory Architecture: vector store or KV store accessible to all agents
  • Memory types: episodic (events), semantic (facts), procedural (workflows)
Skills in
  • Designing vector store schemas and coordinating read/write across concurrent agents

5.3 Implement parallelization and caching

Knowledge of
  • Parallelization: run independent subtasks concurrently to reduce latency
  • Caching: store deterministic tool results to avoid redundant API calls
Skills in
  • Identifying independent subtasks for parallel execution
  • Implementing result caching with appropriate TTL and invalidation strategy

Architect's Playbook Patterns

20+ architectural patterns from The Architect's Playbook. Each shows the anti-pattern and the correct solution. The wrong answers on the exam ARE these anti-patterns — recognizing them lets you eliminate 2–3 options immediately.

Core Principle: "Code guarantees. Prompts suggest." — Always prefer programmatic enforcement over prompt-based guidance for compliance, reliability, and consistency.
Exam Strategy: Don't just memorize the correct patterns — study each wrong approach deeply. Ask "what production failure mode does this anti-pattern cause?" That reasoning is exactly what the exam tests.

Compliance & Safety

Zero-Tolerance Compliance Pattern

Application-layer intercepts that programmatically enforce content and behavioral rules, rather than relying on prompt instructions.

❌ Anti-Pattern

Adding "never output prohibited terms" to the system prompt. Prompts can be forgotten, misunderstood, or bypassed.

✅ Correct Pattern

PreToolUse hook + post-generation filter that intercepts output and blocks prohibited content before it reaches the user. Programmatic guarantee.

Calibrating Human-in-the-Loop

Designing escalation triggers based on action reversibility, confidence, and business risk — not just uncertainty.

❌ Anti-Pattern

Always escalating uncertain actions OR never escalating (full automation). Both extremes fail differently.

✅ Correct Pattern

Escalate irreversible actions, high-value decisions, and compliance boundaries. Automate reversible, low-risk, high-confidence operations.

Context Management

Tool Context Pruning

Replace verbose tool outputs in message history with compact summaries to prevent context explosion in long sessions.

❌ Anti-Pattern

Accumulating raw tool outputs — full API responses, database dumps — in the message array. Context fills up rapidly.

✅ Correct Pattern

After processing each tool result, replace it with a structured summary: {"result":"success","key_findings":[...]}. Preserve signal, discard noise.

Compressing Long Sessions

Summarize conversation history at natural boundaries to extend effective session length.

❌ Anti-Pattern

Letting the conversation grow unbounded until hitting context limits, causing degraded performance or silent truncation.

✅ Correct Pattern

At milestone points — task completion, phase transitions — compress history into a structured summary. Use /compact or implement programmatic compression.

Resuming Async Sessions

Safely resume paused agentic sessions by filtering stale tool_results from prior execution.

❌ Anti-Pattern

Replaying the full message history on resume, including old tool_results that are no longer valid or relevant.

✅ Correct Pattern

Filter out tool_results older than the session pause time. Re-establish current state with fresh context. Use --resume flag in Claude Code.

Multi-Agent Orchestration

Goal-Oriented Delegation

The orchestrator tracks goals and outcomes; subagents handle implementation details. Prevents orchestrator context collapse.

❌ Anti-Pattern

Orchestrator receives full implementation details from every subagent, quickly filling its context window with irrelevant noise.

✅ Correct Pattern

Orchestrator sends goal + success criteria. Subagent returns {goal_achieved:true, summary:"...", artifacts:[...]}. Orchestrator never sees raw implementation.

Shared Memory Architecture

A vector store or key-value store accessible to all agents for persistent, cross-agent knowledge sharing.

❌ Anti-Pattern

Passing all shared state through message payloads between agents. Creates tight coupling and context bloat.

✅ Correct Pattern

Agents read/write to shared vector store. Semantic search for relevant memories. Each agent maintains minimal local context + shared memory reference.

Parallelization & Caching

Run independent subtasks concurrently; cache deterministic tool results to avoid redundant calls.

❌ Anti-Pattern

Sequential execution of independent tasks. Making identical API calls multiple times. No result reuse.

✅ Correct Pattern

Identify independent subtasks → execute in parallel. Hash tool inputs as cache keys. Serve cached results for identical calls within TTL window.
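The hash-keyed cache half of this pattern can be sketched in a few lines; the TTL value and the shape of the `execute` callable are assumptions of this sketch:

```python
import hashlib
import json
import time

_cache = {}  # key -> (stored_at, result)

def cached_tool_call(name, args, execute, ttl_s=300):
    """Serve a cached result for an identical (name, args) pair within the
    TTL window; otherwise execute the tool and store the fresh result."""
    key = hashlib.sha256(
        json.dumps([name, args], sort_keys=True).encode()
    ).hexdigest()
    hit = _cache.get(key)
    if hit is not None and time.time() - hit[0] < ttl_s:
        return hit[1]                 # cache hit: no redundant call
    result = execute(name, args)      # miss or expired: run the tool
    _cache[key] = (time.time(), result)
    return result
```

Sorting the JSON keys before hashing makes the cache key stable under argument reordering; this only makes sense for deterministic tools.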

Tool Design & Execution

Forcing Execution Order (tool_choice)

Use tool_choice to guarantee Claude calls tools in the required sequence, especially when data dependencies exist.

❌ Anti-Pattern

Instructing Claude to "always call authenticate before query" via prompt. Claude may skip this if context implies authentication already happened.

✅ Correct Pattern

Step 1: tool_choice:{name:"authenticate"} → Step 2: after result, tool_choice:{name:"query"}. Programmatic sequencing guarantee.

Scratchpad Pattern

Internal reasoning space separate from final output — Claude thinks through the problem without contaminating the response.

❌ Anti-Pattern

Asking Claude to "think step by step" and including all reasoning in the final response. Output becomes verbose and unstructured.

✅ Correct Pattern

Use <thinking> tags or a dedicated reasoning field in tool output. Extract only the final structured result. Keep reasoning internal.

Structured Output

Structured Intermediate Representations

Type-safe intermediate objects between pipeline stages prevent errors from silently propagating through multi-step workflows.

❌ Anti-Pattern

Passing raw text between pipeline stages. Each stage re-parses and re-interprets, and can silently drift from the original intent.

✅ Correct Pattern

Define typed schemas for each stage's output. Stage N output is Stage N+1's validated input. Use tool_use for schema enforcement.

Designing Resilient Schemas

Schemas that gracefully handle unexpected values instead of failing silently or crashing downstream systems.

❌ Anti-Pattern

Strict enum schema: {type:"string", enum:["A","B","C"]}. Any novel value causes validation failure or a null that's hard to debug.

✅ Correct Pattern

Add "other" + detail field: {category:"other", category_detail:"new_type_X"}. Schema captures all inputs; new types become observable and trackable.

Mathematical Consistency Validation

Include calculated values alongside stated values to detect extraction errors in financial and numerical data.

❌ Anti-Pattern

Extracting only stated_total from a document. If the document itself has a calculation error, the extraction silently propagates it.

✅ Correct Pattern

Schema includes both stated_total AND calculated_total (sum of line items). If they differ → flag for human review. Catches both document errors and extraction errors.
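The review-flagging check described above can be sketched as a small validator; the field names match the pattern, and the rounding tolerance is an assumption:

```python
def check_totals(extraction, tolerance=0.01):
    """Flag an extraction for human review when the stated and calculated
    totals disagree beyond a rounding tolerance, or when either is missing."""
    stated = extraction.get("stated_total")
    calculated = extraction.get("calculated_total")
    if stated is None or calculated is None:
        return {"ok": False, "reason": "missing total field"}
    if abs(stated - calculated) > tolerance:
        return {"ok": False,
                "reason": f"mismatch: stated {stated} vs calculated {calculated}"}
    return {"ok": True, "reason": None}
```

A mismatch here cannot distinguish a document error from an extraction error on its own; that ambiguity is exactly why the pattern routes it to a human.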

Normalization & Null Handling

Explicit rules for representing missing, null, and equivalent values consistently across all extractions.

❌ Anti-Pattern

Allowing null, "", "N/A", "none", and absent fields to all represent "missing". Downstream systems break on the inconsistency.

✅ Correct Pattern

Define in schema: absent = not mentioned; null = explicitly absent; "" = invalid. Normalize equivalents ("USD", "$", "dollars" → "USD").
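Rules like these can be sketched as a normalization function; the alias table is illustrative:

```python
CURRENCY_ALIASES = {"USD": "USD", "$": "USD", "dollars": "USD"}  # illustrative

def normalize_field(value):
    """Apply explicit missing-value rules: None stays None (explicitly
    absent), empty string is rejected as invalid, and known aliases
    collapse to one canonical form. Unknown values pass through unchanged."""
    if value is None:
        return None
    if value == "":
        raise ValueError("empty string is invalid; use null or omit the field")
    return CURRENCY_ALIASES.get(value, value)
```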

Architect's Reference Matrix

| Pattern | Problem Solved | Key Mechanism | When to Use |
|---|---|---|---|
| Zero-Tolerance Compliance | Prompt instructions can be bypassed | App-layer PreToolUse/PostGen intercept | Compliance-critical systems |
| Tool Context Pruning | Context fills with raw tool outputs | Replace outputs with summaries | Long agentic sessions |
| Compressing Long Sessions | Context exhaustion in long runs | /compact or programmatic summary | Sessions >50% context used |
| Resuming Async Sessions | Stale state on resume | Filter old tool_results + --resume | Long-running async agents |
| Scratchpad Pattern | Verbose reasoning in output | <thinking> tags / reasoning field | Complex reasoning tasks |
| Shared Memory Architecture | State sharing between agents | Vector store with semantic search | Multi-agent coordination |
| Forcing Execution Order | Non-deterministic tool sequence | tool_choice: {name: "specific_tool"} | Required tool sequencing |
| Structured Intermediate Reps | Error propagation between stages | Typed schemas at pipeline boundaries | Multi-stage pipelines |
| Parallelization & Caching | Sequential + redundant API calls | Concurrent subtasks + result cache | Performance-critical workloads |
| Goal-Oriented Delegation | Orchestrator context explosion | Return goals, not implementation details | Complex multi-agent systems |
| Resilient Schemas | Schema validation failures | "other" + detail field | Data extraction at scale |
| Mathematical Consistency | Silent numerical errors | Stated vs calculated fields | Financial data extraction |
| Human-in-the-Loop Calibration | Wrong automation/oversight balance | Risk-based escalation triggers | All production systems |
| Subagent Context Isolation | Context leak / coordinator bloat | Pass structured goal payload, not history | All multi-agent systems |
| PostToolUse Normalization | Heterogeneous tool output formats | PostToolUse hook normalizes before model sees it | Multi-MCP integrations |
| Same-Session Self-Review | Anchoring bias in quality review | fork_session for independent second opinion | Quality-critical pipelines |

8-Week Study Plan

Course links match the Anthropic Academy on Skilljar (launched March 2, 2026 — 10 days before the exam). Each course is free, self-paced, and awards a certificate.
Timeline Reality Check: Experienced AI devs typically need 2–4 weeks of focused prep. Developers new to Claude should expect 2–4 months, including time building real projects. Do not sit the exam without hands-on API and Claude Code experience.
W1

Foundations — Claude 101 + API Intro

Complete Claude 101 for orientation. Start Building with the Claude API (first half). Read the official exam guide end-to-end. Study stop_reason handling and message array construction.

W2

Agentic Architecture — API Course (D1)

Finish Building with the Claude API (second half). Build an agentic loop from scratch. Implement hub-and-spoke orchestrator + 2 subagents. Practice max 4–5 tools per agent and Goal-Oriented Delegation.

W3

MCP Deep Dive (D2)

Complete Introduction to MCP then MCP Advanced Topics. Build an MCP server with Tools + Resources + Prompts. Configure .mcp.json. Test isError and all three tool_choice modes.

W4

Claude Code Configuration (D3)

Complete Claude Code in Action and Introduction to Agent Skills. Set up CLAUDE.md hierarchy. Create custom commands + rules with glob patterns. Build a CI pipeline using --print and --output-format json.

W5

Prompt Engineering & Structured Output (D4)

Master the tool_use extraction pattern. Design resilient schemas with "other" fields. Implement the Validation Retry Loop pattern. Build a batch pipeline using the Message Batches API.

W6

Context Management & Reliability (D5)

Implement Tool Context Pruning and Compressing Long Sessions. Design a shared memory architecture. Add Confidence Calibration to an agent. Practice /compact and --resume patterns.

W7

Architect's Playbook Mastery

Study all 13 patterns cold — each anti-pattern vs correct pattern. Complete all 4 official preparation exercises. Review the Architect's Reference Matrix. Study the Claude Cookbooks for real code examples. Practice subagent context isolation and PostToolUse normalization patterns.

W8

Anti-Patterns Review & Mock Exams

Anti-patterns ARE the distractors — spend this week studying what NOT to do. Review the 5 key anti-patterns (natural language loop termination, prompt-based compliance, same-session self-review, >5 tools per agent, aggregate metrics hiding type-level failures). Then: complete all 15 practice questions, try the Udemy practice tests, and for every wrong answer spend 3× more time on the explanation than re-taking questions. Revisit any domain below 80% on the Tracker.

4 Official Preparation Exercises

Exercise 1 · Build an Agentic Loop

Implement a complete agentic loop that calls at least 2 different tools, handles stop_reason correctly, and gracefully recovers from a tool error (isError: true). Include exponential backoff for transient failures.

Exercise 2 · Create an MCP Server

Build an MCP server that exposes at least one Tool, one Resource, and one Prompt primitive. Configure it in .mcp.json. Implement structured error responses with errorCategory and isRetryable fields.

Exercise 3 · Configure Claude Code for a Team

Set up a complete Claude Code workspace: user-level CLAUDE.md with global conventions, project-level CLAUDE.md with architecture context, at least 2 custom commands, and a rule with a glob pattern scoped to a specific directory.

Exercise 4 · Design a Structured Extraction Pipeline

Build a data extraction pipeline using tool_use for schema enforcement. Include a resilient schema with "other" + detail fields, mathematical consistency validation (stated vs calculated totals), and batch processing with the Message Batches API.

One-stop resource list. Every link below goes directly to the official source — course, doc page, or repo. Bookmark this tab and work through each course in order.

🎓 Anthropic Academy (13 free courses · certificate on completion · no credit card)

All courses are free and self-paced on anthropic.skilljar.com. Complete these in order for the best exam prep coverage.

⭐ Exam Priority

Building with the Claude API

8+ hr flagship course. Messages API → tool use → context windows → agentic architecture → RAG pipelines. Do this first.

⭐ Exam Priority

Introduction to MCP

Build MCP servers and clients from scratch. Master the three core primitives: tools, resources, and prompts.

⭐ Exam Priority

MCP: Advanced Topics

Advanced MCP patterns, transport mechanisms (stdio/SSE), tool boundary design, reasoning overload prevention.

Exam Priority

Claude Code in Action

CLAUDE.md hierarchies, custom slash commands, rules, skills, CI/CD integration with --print and JSON output.

Exam Priority

Introduction to Agent Skills

Build, configure, and share Skills in Claude Code — reusable markdown instructions with fork/current context modes.

Claude 101

Official baseline on Claude usage, everyday workflows, and core capabilities. Good orientation before the API course.

Claude in Amazon Bedrock

Claude deployment via AWS Bedrock — useful if your production environment is on AWS.

Claude with Google Vertex AI

Claude deployment via GCP Vertex AI — useful if your production environment is on GCP.

AI Fluency + 5 more courses →

AI Fluency Framework, Teaching AI Fluency, AI Fluency for Students, Nonprofits, and Educators tracks also available.

📖 Official Documentation

Tool Use Guide

JSON Schema definition, tool_choice modes (auto / any / forced), structured output patterns, tool_use_id matching.

Message Batches API

50% cost savings, 24-hour window, custom_id mapping, batch result polling. Critical for D4 questions.

MCP Integration Guide

Tools/Resources/Prompts primitives, .mcp.json vs ~/.claude.json, isError handling.

Claude Code Docs (Official)

Complete reference for CLAUDE.md, custom commands, rules, skills, plan mode, and all CLI flags.

Anthropic API Reference

Complete API reference — Messages, tool_use blocks, stop_reason values, and all request parameters.

Agent Skills Engineering Blog

Anthropic engineering post on Agent Skills — how real-world skills are designed and deployed.

Agent Skills Overview (Platform Docs)

Official platform docs for Agent Skills — configuration, context modes, and deployment.

Claude.com Learning Resources

Official Claude learning hub — links to all courses, guides, and certification paths.

⚙️ GitHub: Code Examples & Repos

anthropics/claude-cookbooks

Official collection of notebooks and recipes — agentic loops, tool use, RAG, multi-agent patterns. Copy-paste Python examples.

anthropics/claude-quickstarts

Deployable starter apps including a two-agent pattern (initializer + coding agent) built on the Claude Agent SDK.

anthropics/claude-code

Claude Code source and documentation — understand how CLAUDE.md, rules, commands, and skills are actually loaded.

anthropics/skills

Public repository of Agent Skills — real examples of SKILL.md frontmatter, fork vs current context, and allowed-tools.

anthropics/anthropic-sdk-python

Official Python SDK with typed interfaces for the Messages API, tool use, and batch processing.

awesome-claude-code (community)

Curated list of skills, hooks, slash commands, agent orchestrators, and plugins for Claude Code.

📝 Exam-Specific Resources

Udemy: CCA Practice Tests

Third-party practice exam questions for the Claude Certified Architect Foundations certification.

Awesome Claude — Resource Directory

Community-curated directory of Claude AI resources, tools, integrations, and examples.

Architect Cert MCP (LobeHub)

An MCP server specifically built to help study for the CCA exam — practice questions via MCP interface.

Anthropic Academy (Main Hub)

Your primary study destination. Start here, complete all 5 exam-priority courses, earn your certificates.

🔧 Technology Quick Reference

Claude Agent SDK

Task tool · fork_session · AgentDefinition · allowedTools · PostToolUse hook · PreToolUse hook · "tool_use" · "end_turn"

Claude Code CLI

CLAUDE.md · .claude/commands/ · .claude/rules/ · .claude/skills/ · -p / --print · --output-format json · --resume · /compact · /memory

MCP Protocol

Tools · Resources · Prompts · .mcp.json · ~/.claude.json · isError: true · errorCategory · isRetryable

Practice Questions

From the CCA Foundations Exam Guide v0.1 + community-derived scenarios. Answer mentally before revealing — this is the most important study technique.

🎯 Exam Strategy — Anti-Patterns ARE the Distractors: The wrong answers on this exam are not random — they are architectural mistakes engineers commonly reach for before understanding production implications. If you can spot what's architecturally wrong with an option, you can eliminate 2–3 choices immediately and find the correct answer by exclusion. After each practice set, spend more time analysing why each wrong answer is wrong than re-taking tests.

The 5 most-tested anti-patterns:
  1. Parsing natural language to detect loop end instead of checking stop_reason
  2. Prompt-based enforcement for compliance-critical rules
  3. Same-session self-review instead of fork
  4. More than 5 tools per agent
  5. Aggregate accuracy metrics that mask per-type failure rates
Q1 · D1 Agentic Architecture
An architect is designing an agentic system that calls an external API which occasionally returns transient 503 errors. Which approach best ensures the system handles this gracefully?
A) Immediately escalate all 503 errors to human reviewers
B) Implement exponential backoff with jitter and a maximum retry limit
C) Cache all API responses to avoid the need for retries
D) Switch to a different API provider after the first error
✅ B — Exponential backoff with jitter

Exponential backoff is the standard pattern for transient errors. It prevents retry storms (via jitter), gives the service time to recover (via exponential delays), and the maximum retry limit prevents infinite loops. Caching (C) doesn't address the error itself. Escalation (A) wastes human time on transient issues. Switching providers (D) is extreme for a recoverable error.
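The retry pattern in this answer can be sketched in a few lines. This is a minimal, API-agnostic sketch: the helper names (`backoff_delay`, `call_with_retry`) and the `TransientError` stand-in for a 503-style failure are illustrative, not part of any SDK.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a 503-style recoverable failure."""

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential delay with full jitter: random value in [0, min(cap, base * 2^attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retry(fn, max_retries: int = 5, base: float = 1.0):
    """Retry transient failures with jittered backoff; re-raise once the budget is spent."""
    for attempt in range(max_retries):
        try:
            return fn()
        except TransientError:
            if attempt == max_retries - 1:
                raise  # retry budget exhausted: now escalation is appropriate
            time.sleep(backoff_delay(attempt, base=base))
```

The jitter is what prevents retry storms: if many clients fail at once, randomized delays spread their retries out instead of synchronizing them.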
Q2 · D1 Agentic Architecture
A developer is building a customer support agent that needs to search a knowledge base and escalate complex issues. Which stop_reason value indicates that Claude wants to use a tool?
A) "end_turn"
B) "max_tokens"
C) "tool_use"
D) "stop_sequence"
✅ C — "tool_use"

"tool_use" is the stop_reason that signals Claude wants to invoke one or more tools. The agentic loop should parse the tool_use blocks, execute the tools, append tool_result messages with matching tool_use_id, and continue. "end_turn" (A) means Claude is done. "max_tokens" (B) means the response was truncated. "stop_sequence" (D) means a stop string was hit.
Q3 · D3 Claude Code
An architect needs to configure Claude Code to automatically apply specific coding standards to files in the /src/api/ directory. Which configuration approach is most appropriate?
A) Add the standards to the top of every prompt sent to Claude Code
B) Create a custom command in .claude/commands/ that applies standards
C) Configure a .claude/rules/ YAML file with a glob pattern for /src/api/
D) Use the --print flag with the standards as a prefix
✅ C — .claude/rules/ with glob pattern

Rules with YAML frontmatter glob patterns activate automatically when files matching the pattern are in context — exactly the right tool for directory-scoped, automatic behaviors. Custom commands (B) require manual invocation. Repeated manual prompt prefix (A) is error-prone and inconsistent. The --print flag (D) is for CI mode, unrelated to context injection.
Q4 · D3 Claude Code
A team wants Claude Code to follow specific API design conventions from their architecture docs. Where should they place these instructions for maximum effectiveness across all team members?
A) In the system prompt of each Claude API call
B) In a project-level .claude/CLAUDE.md file committed to the repository
C) In each individual prompt sent to Claude Code
D) In environment variables on each developer's machine
✅ B — Project-level CLAUDE.md committed to the repo

Project-level CLAUDE.md is automatically loaded for all team members working in that project — persistent architectural context with zero per-developer configuration. Individual prompts (C) are inconsistent across the team. Environment variables (D) don't inject architectural context. System prompt per API call (A) is manual and not what CLAUDE.md is for.
Q5 · D1 Agentic Architecture
An architect is designing a multi-agent system where an orchestrator delegates to several specialized subagents. Which pattern best prevents the orchestrator from losing context as tool outputs accumulate?
A) Summarize completed subtask results and replace the detailed tool outputs in the message history
B) Increase the context window allocation for the orchestrator agent
C) Have each subagent maintain its own context independently
D) Use streaming to process context incrementally as it arrives
✅ A — Tool Context Pruning

Replacing detailed tool outputs with compact summaries prevents context explosion — this is the Tool Context Pruning pattern. It preserves essential signal while eliminating noise. Increasing the context window (B) is a workaround, not a solution. Independent subagent context (C) doesn't solve the orchestrator's accumulation problem. Streaming (D) addresses latency, not context size.
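Tool Context Pruning can be sketched as a pure function over the message history. This is a minimal sketch under assumed message shapes (Messages-API-style dicts); the helper name `prune_tool_results` and the `keep_last` policy are illustrative choices, not an SDK feature.

```python
def prune_tool_results(messages: list, summarize, keep_last: int = 1) -> list:
    """Replace bulky tool_result content with compact summaries,
    keeping the most recent `keep_last` results verbatim."""
    # Indices of user messages that carry tool_result blocks
    tool_msg_idx = [
        i for i, m in enumerate(messages)
        if m["role"] == "user" and isinstance(m["content"], list)
        and any(b.get("type") == "tool_result" for b in m["content"])
    ]
    to_prune = set(tool_msg_idx[:-keep_last]) if keep_last else set(tool_msg_idx)
    pruned = []
    for i, m in enumerate(messages):
        if i in to_prune:
            # Build new dicts rather than mutating the caller's history
            blocks = [
                {**b, "content": summarize(b["content"])}
                if b.get("type") == "tool_result" else b
                for b in m["content"]
            ]
            pruned.append({**m, "content": blocks})
        else:
            pruned.append(m)
    return pruned
```

In practice `summarize` might itself be a cheap model call; the point is that the orchestrator's history carries signal-dense summaries, not raw tool dumps.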
Q6 · D1 Agentic Architecture
A compliance-critical application must ensure Claude never outputs certain prohibited terms. Which approach provides the strongest guarantee?
A) Include "never output prohibited terms" in the system prompt
B) Apply constitutional AI principles through careful model selection
C) Implement application-layer output filtering that intercepts Claude's responses before they reach users
D) Fine-tune the model to avoid these terms
✅ C — Application-layer output filtering

This is the Zero-Tolerance Compliance Pattern. "Code guarantees. Prompts suggest." Application-layer filtering provides a deterministic, programmatic guarantee. System prompt instructions (A) can be misunderstood or bypassed. Constitutional AI (B) and fine-tuning (D) are both out of exam scope and don't provide deterministic guarantees anyway.
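An application-layer intercept of this kind can be sketched in a few lines. The term list and function name here are hypothetical; the point is that redaction runs in deterministic application code, after the model responds and before anything reaches the user.

```python
import re

# Hypothetical prohibited-term list; in production this would come from compliance config
PROHIBITED = ["internal-codename", "secret-project"]

def filter_output(text: str, redaction: str = "[REDACTED]") -> str:
    """Deterministically redact prohibited terms before text reaches users.
    Enforcement lives in code, so it cannot be bypassed by prompt phrasing."""
    pattern = re.compile("|".join(re.escape(t) for t in PROHIBITED), re.IGNORECASE)
    return pattern.sub(redaction, text)
```

The prompt can still discourage the terms (fewer redactions means better UX), but the filter is what turns "should never" into "can never".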
Q7 · D2 Tool Design & MCP
An MCP server tool call fails due to an invalid parameter. What is the correct way to signal this error while allowing the agentic loop to continue and potentially recover?
A) Throw an exception from the MCP server, causing the loop to terminate
B) Return a tool result response with isError: true and a descriptive message
C) Return an empty response to signal failure
D) Close the MCP connection to force reconnection
✅ B — isError: true in the tool result

Returning isError: true signals the error to Claude while keeping the agentic loop alive. Claude can then retry with different parameters, ask for clarification, or escalate. Throwing an exception (A) or closing the connection (D) kill the loop. An empty response (C) is ambiguous — Claude has no actionable information to work with.
Q8 · D4 Prompt Engineering
A developer needs to extract structured data from documents and ensure the output always conforms to a specific JSON schema. Which approach best guarantees schema compliance?
A) Include the JSON schema in the system prompt and ask Claude to follow it
B) Use tool_use with the target structure defined as the tool's input schema
C) Use few-shot examples of correctly formatted outputs
D) Parse and validate the output after generation, retrying on failure
✅ B — tool_use with the schema as tool input schema

Defining the target structure as a tool's input schema forces Claude to produce schema-valid output before the tool call can be made — a structural guarantee. System prompt (A) and few-shot (C) are suggestions, not guarantees. Post-validation retry (D) works but adds latency, failure risk, and API cost.
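The pattern can be sketched as a tool whose input schema IS the target output structure. The tool name `record_invoice` and its fields are illustrative assumptions, not a real API tool.

```python
# Hypothetical extraction tool: the input_schema doubles as the output contract
EXTRACTION_TOOL = {
    "name": "record_invoice",
    "description": "Record the structured fields extracted from an invoice.",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            "total": {"type": "number"},
            "line_items": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "description": {"type": "string"},
                        "amount": {"type": "number"},
                    },
                    "required": ["description", "amount"],
                },
            },
        },
        "required": ["vendor", "total", "line_items"],
    },
}

# Forcing the tool makes every response a schema-valid tool_use block:
# client.messages.create(..., tools=[EXTRACTION_TOOL],
#                        tool_choice={"type": "tool", "name": "record_invoice"})
```

The "tool" never needs to do anything; its only job is to make the schema a structural constraint on generation rather than a suggestion in the prompt.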
Q9 · D4 Prompt Engineering
An architect is designing a pipeline to extract structured data from 10,000 documents. Which Claude API feature provides the most cost-effective approach for this scale?
A) Streaming responses to reduce time-to-first-token
B) The Message Batches API with 50% cost savings
C) Prompt caching to reuse repeated context prefixes
D) Parallel API calls with multiple API keys
✅ B — Message Batches API (50% cost savings)

The Message Batches API provides a flat 50% cost reduction for large-scale, non-interactive workloads — the most cost-effective choice at 10,000 documents. Streaming (A) reduces latency, not cost. Prompt caching (C) helps with repeated context but the savings vary. Multiple API keys (D) don't reduce the per-request cost at all. Key caveat: 24-hour window, no multi-turn tool calling in batch mode.
Q10 · D3 Claude Code
A long-running Claude Code agentic session needs to be paused overnight and resumed the next morning with full session state. Which Claude Code feature enables this?
A) The /compact command to compress the context before pausing
B) The --resume flag to continue a saved session from where it left off
C) The /memory command to save session state to CLAUDE.md
D) Environment variables to store the session ID for later retrieval
✅ B — --resume flag

The --resume flag in Claude Code allows resuming a paused agentic session with full session state preserved. /compact (A) compresses context for the current session but doesn't enable resume. /memory (C) manages persistent CLAUDE.md memory files, not session state. Environment variables (D) don't provide built-in session persistence.
Q11 · D2 Tool Design & MCP
An architect needs to design a workflow where Claude must always call an "authenticate" tool before any "query" tools. Which API configuration enforces this sequencing requirement?
A) Set tool_choice to "auto" and include sequencing instructions in the system prompt
B) Set tool_choice to "any" to ensure Claude uses at least one tool
C) In step 1, force tool_choice to "authenticate"; in step 2, force tool_choice to the query tool
D) Include the tool name in the stop_sequences array
✅ C — Sequential forced tool_choice per step

This is the Forcing Execution Order pattern. Step 1: tool_choice:{type:"tool",name:"authenticate"} → Step 2: after receiving the auth result, tool_choice:{type:"tool",name:"query_tool"}. Programmatic sequencing guarantee. "auto" (A) relies on prompts. "any" (B) only guarantees some tool is used, not which one. stop_sequences (D) don't control tool invocation.
Q12 · D1 Agentic Architecture
A multi-agent research system has an orchestrator delegating to specialist subagents. The orchestrator needs to track task completion without its context filling with all subagent implementation details. Which pattern addresses this?
A) Scratchpad Pattern — subagents write reasoning to internal scratchpad
B) Goal-Oriented Delegation — orchestrator receives goal completion status, not implementation details
C) Shared Memory Architecture — all agents read/write to a central vector store
D) Compressing Long Sessions — apply /compact at context boundaries
✅ B — Goal-Oriented Delegation

The orchestrator sends a goal + success criteria. The subagent returns {goal_achieved:true, summary:"...", artifacts:[...]}. The orchestrator never accumulates raw implementation details. Scratchpad (A) is about isolating internal reasoning, not reducing orchestrator context. Shared Memory (C) helps coordination but doesn't prevent orchestrator bloat. /compact (D) is a reactive measure, not a preventive architectural pattern.

Community Practice Scenarios

Q13 · D1 Multi-Agent
A developer is building a multi-agent research system where a coordinator spawns specialist subagents. One subagent needs to access the coordinator's prior conversation history to provide context-aware results. How should this be implemented?
A) The subagent automatically inherits the coordinator's full conversation history
B) Pass a structured context payload from the coordinator containing only the relevant information for that goal
C) Have the coordinator send its entire message array to each subagent on invocation
D) Use a shared session ID so all agents access the same conversation context
✅ B — Structured context payload with only relevant information

Subagents have isolated context by default — they do NOT inherit the coordinator's conversation history automatically. The coordinator must explicitly pass context to each subagent, and should pass only what's needed for that specific goal (Goal-Oriented Delegation principle). Passing the entire message array (C) defeats the purpose of context isolation and causes token bloat. A "shared session ID" (D) is not how the Agent SDK works. Option A is a common misconception that the exam specifically tests against.
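The explicit hand-off can be sketched as a small builder function. This is a hypothetical helper, not an Agent SDK API; it just makes the principle concrete: the coordinator curates what crosses the boundary.

```python
def build_subagent_payload(goal: str, success_criteria: list,
                           coordinator_state: dict, relevant_keys: set) -> dict:
    """Select only the facts a subagent needs for this goal,
    never the coordinator's full message history."""
    relevant = {k: v for k, v in coordinator_state.items() if k in relevant_keys}
    return {
        "goal": goal,
        "success_criteria": success_criteria,
        "context": relevant,  # curated, not inherited
    }
```

Everything the subagent sees is an explicit, goal-scoped choice, which is exactly the property option A wrongly assumes comes for free.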
Q14 · D1 Agentic Loop
An architect reviews a colleague's agentic loop implementation. The loop uses the following logic: if "I have completed the task" in response.content[0].text: break. What is the primary problem with this approach?
A) The string comparison is case-sensitive and may miss variations
B) It parses natural language to detect loop termination instead of checking stop_reason
C) The loop should check all content blocks, not just the first one
D) Natural language completion signals are more flexible than stop_reason
✅ B — Natural language parsing for loop termination is the anti-pattern

This is a classic exam trap. The correct way to detect loop completion is to check response.stop_reason == "end_turn". Natural language parsing is fragile — Claude may phrase completion differently, may not include that phrase even when done, or may include it prematurely. "Code guarantees. Prompts suggest." stop_reason is a deterministic, programmatic signal. A and C are minor implementation details, not the core flaw. D is incorrect — natural language signals are never more reliable than structured API values.
Q15 · D2 MCP / PostToolUse
A multi-agent system integrates 4 different MCP servers. Each server returns timestamps in a different format (ISO 8601, Unix epoch, human-readable strings, and relative times like "2 hours ago"). The architect wants to normalize all timestamps to ISO 8601 before Claude processes them. What is the most appropriate implementation pattern?
A) Add a system prompt instruction: "Always interpret timestamps as ISO 8601"
B) Implement a PostToolUse hook that normalizes timestamp formats before the model receives tool results
C) Update each MCP server to return ISO 8601 directly
D) Ask Claude to normalize timestamps as the first step of each task
✅ B — PostToolUse hook for normalization

The PostToolUse hook fires after every tool execution and before the model processes the result — making it the ideal insertion point for normalizing heterogeneous data formats. This is a programmatic guarantee that timestamps will always be standardized before Claude reasons about them. System prompt (A) is a suggestion, not a guarantee. Modifying each server (C) may not be feasible when integrating third-party MCPs. Asking Claude to normalize (D) adds latency and is unreliable. This question tests whether candidates understand that PostToolUse is for data normalization, not just logging.
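The normalization logic such a hook would run can be sketched as a pure function. This is a best-effort sketch: the format list is illustrative, and relative times like "2 hours ago" would need extra handling (e.g. a dedicated parsing library) that is omitted here.

```python
from datetime import datetime, timezone

def normalize_timestamp(value) -> str:
    """Best-effort normalization to ISO 8601 (UTC) -- the transform a
    PostToolUse hook could apply to tool results before the model sees them."""
    if isinstance(value, (int, float)):  # Unix epoch
        return datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
    for fmt in ("%Y-%m-%dT%H:%M:%S%z", "%b %d, %Y %H:%M"):  # ISO-ish, human-readable
        try:
            dt = datetime.strptime(value, fmt)
            if dt.tzinfo is None:
                dt = dt.replace(tzinfo=timezone.utc)  # assume UTC when unstated
            return dt.isoformat()
        except ValueError:
            pass
    return value  # unknown format: pass through unchanged
```

Because this runs in the hook, Claude only ever reasons over one timestamp format, regardless of which of the four MCP servers produced the data.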

Agentic Loop (Python)

import anthropic

client = anthropic.Anthropic()
messages = []

def run_agent(user_input: str):
    messages.append({"role": "user", "content": user_input})
    while True:
        response = client.messages.create(
            model="claude-opus-4-6",
            max_tokens=4096,
            tools=TOOLS,
            messages=messages
        )
        messages.append({
            "role": "assistant",
            "content": response.content
        })
        if response.stop_reason == "end_turn":
            break  # ← done
        if response.stop_reason == "tool_use":
            tool_results = []
            for block in response.content:
                if block.type == "tool_use":
                    result = execute_tool(block.name, block.input)
                    tool_results.append({
                        "type": "tool_result",
                        "tool_use_id": block.id,  # ← must match
                        "content": result
                    })
            messages.append({
                "role": "user",
                "content": tool_results
            })

tool_choice Modes

# Claude decides whether to use a tool
tool_choice={"type": "auto"}

# Claude MUST use some tool
tool_choice={"type": "any"}

# Claude MUST use THIS specific tool
tool_choice={"type": "tool", "name": "authenticate"}

MCP Error Response

# Success
{"content": [{"type": "text", "text": result}]}

# Error — loop continues, Claude can recover
{
    "isError": True,
    "content": [{"type": "text", "text": "Error: invalid param 'x'"}],
    "errorCategory": "INVALID_INPUT",
    "isRetryable": False
}

Message Batches API

batch = client.beta.messages.batches.create(
    requests=[{
        "custom_id": "doc-001",  # ← match results
        "params": {
            "model": "claude-opus-4-6",
            "max_tokens": 1024,
            "messages": [{...}]
        }
    }]  # supports thousands of requests
)
# 50% cost savings
# 24-hour processing window
# No multi-turn tool calling

CLAUDE.md Hierarchy

# 1. User-level — applies to ALL projects
~/.claude/CLAUDE.md

# 2. Project-level — applies to THIS project
.claude/CLAUDE.md

# 3. Directory-level — highest precedence
src/api/CLAUDE.md

# More specific overrides less specific

Claude Code CLI Flags

# Headless / CI mode
claude -p "Generate tests for auth.py"

# Structured JSON output for scripting
claude -p "..." --output-format json

# Validate output against schema
claude -p "..." --json-schema schema.json

# Resume a paused session
claude --resume <session-id>

# Compress context mid-session
/compact

# View active memory
/memory

.mcp.json Config

{ "mcpServers": { "my-server": { "command": "node", "args": ["./mcp-server.js"], "env": { "API_KEY": "${MY_API_KEY}" } } } } # Project scope: .mcp.json # User scope: ~/.claude.json

Resilient Schema

{ "category": { "type": "string", "enum": ["billing", "technical", "general", "other"] # ← always add "other" }, "category_detail": { # ← capture specifics "type": ["string", "null"] }, "stated_total": {"type": "number"}, "calculated_total": { # ← consistency check "type": "number", "description": "Sum of line items" } }

Key Distinctions

  • stop_reason "tool_use" → parse & execute tools, loop continues
  • stop_reason "end_turn" → exit loop, return response
  • isError: true → tool failed, loop stays alive, Claude can adapt
  • .mcp.json = project scope  ·  ~/.claude.json = user scope
  • Plan mode → propose first, get approval, then execute
  • context: fork = isolated  ·  context: current = shared session
  • tool_choice "any" = some tool (use for structured output)
  • tool_choice forced = this specific tool (use for sequencing)

5 Anti-Patterns That Appear as Distractors

Anti-Pattern · Why It's Wrong · Correct Approach
  • Natural language loop termination · Fragile, unreliable, not deterministic · Check stop_reason == "end_turn"
  • Prompt-based compliance enforcement · Prompts suggest; can be bypassed or misinterpreted · PreToolUse hook + app-layer intercept
  • Same-session self-review · Anchoring bias — same session can't objectively review its own output · Use fork_session for independent review
  • 18+ tools per agent · Tool selection reliability degrades significantly above ~5 tools · Max 4–5 tools, role-specific agents
  • Aggregate accuracy metrics only · Hides per-document-type or per-category failure rates · Track accuracy broken down by type/category

Production Implementation Rules

⚠️ Max 4–5 Tools Per Agent

Tool selection reliability degrades with more than ~5 tools. Design narrow, role-specific agents rather than one agent with many tools.

Hub-and-Spoke Multi-Agent Pattern

coordinator = AgentDefinition(
    system="Route tasks to specialists",
    tools=[
        "delegate_to_research",
        "delegate_to_writer",
        "delegate_to_reviewer"
    ]
)
# Subagents NEVER talk to each other directly
# All results flow back through the coordinator

Validation Retry Loop Pattern

for attempt in range(3):
    result = extract_structured(doc)
    error = validate_schema(result)
    if not error:
        return result
    # Feed error back as user turn
    messages.append({
        "role": "user",
        "content": f"Schema error: {error}. Retry."
    })

Confidence Calibration Pattern

# Include confidence in your tool schema
{
    "answer": {"type": "string"},
    "confidence": {
        "type": "number",
        "minimum": 0,
        "maximum": 1
    },
    "reasoning": {"type": "string"}
}

# Then escalate if confidence < threshold
if result.confidence < 0.7:
    escalate_to_human(result)

Progress Tracker

Check off topics as you master them. Progress auto-saves in your browser.

Overall Readiness · 0%

D1 · Agentic Architecture

0 / 12

D2 · Tool & MCP

0 / 9

D3 · Claude Code

0 / 9

D4 · Prompt Engineering

0 / 9

D5 · Context & Reliability

0 / 8