The Claude Certified Architect, Foundations (CCA-F) is Anthropic's new certification for building production AI systems with Claude. Not prompting quizzes. Not multiple-choice trivia. Scenario-based architecture questions that test whether you can actually design multi-agent systems, manage context budgets, and make the right tradeoffs under constraints.
A Redditor (u/Significant-Hornet37) released a full open-source prep guide on GitHub with 33 practice questions, a concepts cheat sheet, and real exam tips. Full credit to them for putting this together.
Repo: github.com/avidevelops/claude-architect-exam-prep
The Problem Nobody Talks About
Most certifications reward memorisation. You grind flashcards, regurgitate definitions, pass, forget everything by Friday.
The CCA-F doesn't work like that. The questions describe real production scenarios and give you four answers that all technically solve the problem. You need to pick the one that reflects best-practice architecture. If your instinct is "just improve the prompt", you'll fail.
There's no official study guide from Anthropic yet. The exam reflects newer patterns in Claude's ecosystem (MCP, subagent orchestration, batch API), so outdated material won't cut it.
Why This Changes Everything
The GitHub repo (avidevelops/claude-architect-exam-prep) is the closest thing to a structured study plan that exists right now.
- 33 scenario-based practice questions that mirror the actual exam format. Each one includes a detailed explanation of why the correct answer is correct and why the others fall short.
- Covers all three core domains: agentic architecture, API orchestration, and context management. These make up the bulk of the exam.
- Written by someone who actually passed. The tips aren't theoretical. They're patterns the exam taker noticed after sitting the real thing.
The Three Domains You Need to Know
Domain 1: Agentic Architecture
This is about designing multi-agent systems. When to spawn a subagent vs. keeping work in the coordinator. How to decompose workflows. Parallel execution via simultaneous tool calls.
The biggest trap: spawning subagents when the coordinator already holds the context. If the data is already there, processing it locally is the right call. Subagents are for fresh exploration.
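A minimal sketch of that heuristic in Python. The Task fields and function name are hypothetical, illustrating the decision rather than anything from the exam or the Anthropic SDK:

```python
from dataclasses import dataclass

@dataclass
class Task:
    required_sources: frozenset[str]  # data the task needs to see
    is_independent: bool              # shares no state with sibling tasks

def should_spawn_subagent(task: Task, coordinator_context: frozenset[str]) -> bool:
    # If the coordinator already holds everything the task needs, spawning
    # only adds a handoff, latency, and context loss.
    if task.required_sources <= coordinator_context:
        return False
    # Subagents earn their cost on fresh, independent exploration whose raw
    # output would otherwise balloon the coordinator's context window.
    return task.is_independent
```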
Domain 2: API and Orchestration
Tool design, structured JSON output for chaining, input constraints. You need to understand:
- Using enums to lock down ambiguous inputs instead of relying on prompt instructions
- Pagination design that prevents context from ballooning
- Machine identifiers over human-readable text (if an agent acts on an item, it needs an explicit ID, not a name)
- tool_choice enforcement for prerequisite dependencies
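As a rough sketch of the first and last points, assuming the Anthropic Python SDK (the tool, its fields, and the model string are made up for illustration):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

escalate_tool = {
    "name": "set_ticket_priority",
    "description": "Set the priority of a support ticket.",
    "input_schema": {
        "type": "object",
        "properties": {
            # Machine identifier, not a customer name the model might mangle.
            "ticket_id": {"type": "string", "description": "e.g. TKT-4821"},
            # The enum makes invalid priorities unrepresentable; no prompt
            # instruction has to police this.
            "priority": {"type": "string", "enum": ["low", "medium", "high", "urgent"]},
        },
        "required": ["ticket_id", "priority"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model ID
    max_tokens=1024,
    tools=[escalate_tool],
    # tool_choice forces this call, so the prerequisite step happens because
    # the backend demands it, not because the prompt asked nicely.
    tool_choice={"type": "tool", "name": "set_ticket_priority"},
    messages=[{"role": "user", "content": "Escalate TKT-4821, the customer is blocked."}],
)
print(response.content[0].input)  # the schema-validated tool arguments
```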
Domain 3: Context Management
The exam tests whether you understand information loss in long pipelines. Specific topics:
- The "lost in the middle" effect and how to structure context to avoid it
- Trimming reasoning chains before passing to downstream agents
- Conflict detection flags in structured outputs
- Citation IDs for maintaining provenance across handoffs
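A minimal sketch of what a trimmed, provenance-preserving handoff can look like. Every field and function name here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    citation_id: str                   # e.g. "doc-17#p4"; provenance survives the handoff
    conflicts_with: str | None = None  # explicit conflict flag, not buried prose

@dataclass
class Handoff:
    summary: str
    findings: list[Finding] = field(default_factory=list)

def trim_for_downstream(reasoning_transcript: str, summary: str,
                        findings: list[Finding]) -> Handoff:
    # The upstream agent's reasoning chain is the bulk of the tokens and the
    # least useful part for the next agent, so it is deliberately dropped.
    del reasoning_transcript
    return Handoff(summary=summary, findings=findings)
```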
5 Tips That Separate Pass from Fail
1. Always pick the structural answer
When two options exist and one uses schema constraints, backend enforcement, or explicit IDs while the other says "refine the prompt" or "add instructions", pick the structural one. The exam consistently rewards deterministic solutions over probabilistic ones.
2. Subagent spawning is a trap answer
The repo calls this out specifically. If the coordinator already has the context, don't spawn. Subagents add latency and lose context. Only use them for genuinely independent work streams.
3. MCP annotations are untrusted metadata
Security decisions come from system-level trust boundaries, not what a tool reports about itself. The exam tests whether you understand that MCP server annotations are informational, not authoritative.
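In code, that boundary looks something like this. A sketch: the allowlist and function are hypothetical, though readOnlyHint is an annotation field MCP servers can report:

```python
# Server-side trust boundary, maintained out-of-band from anything the MCP
# server reports about itself.
ALLOWED_TOOLS = {"search_docs", "update_ticket"}

def authorize_tool_call(tool_name: str, annotations: dict) -> bool:
    # annotations.get("readOnlyHint") is self-reported metadata: a hostile
    # MCP server can claim anything. Fine to log, never a basis for access.
    _ = annotations.get("readOnlyHint")
    return tool_name in ALLOWED_TOOLS
```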