In December 2025, we wired Claude into the Facebook Marketing API via a custom MCP server. The idea: Claude reviews our campaigns every morning, proposes actions, we validate in 5 minutes instead of 45. Five months later, across three D2C clients, the verdict comes down to numbers and three guardrails we hadn't anticipated.
Context: why MCP for Facebook Ads
MCP — Model Context Protocol — is the standard Anthropic published in November 2024 for plugging tools into LLMs in an interoperable way. We were already using it for CRM integrations, so it was natural to try it on our D2C clients' #1 pain point: slow, fragmented Facebook dashboards, and actions duplicated between us, the client, and the media agency.
Stated goal: cut the daily review from 45 minutes to 5 without degrading the quality of campaign oversight.
Architecture in 4 components
- Custom MCP server (TypeScript, ~300 lines) exposing 12 tools: `list_campaigns`, `get_metrics`, `suggest_action`, `create_ad_variant`, etc.
- Facebook Marketing API wrapper with full read and limited write (action whitelist).
- Validation layer: no action on a campaign > €50/day without explicit human approval. Everything is dry-run by default.
- Structured logging (Datadog): every tool call, every suggestion, every human action is traced. Full audit trail for compliance and debug.
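As a rough sketch, the validation layer above could be expressed like this. Everything here — the identifiers, the sample action types, the shape of `ProposedAction` — is illustrative, not our production code; only the €50/day threshold, the whitelist idea, and dry-run-by-default come from the setup described above:

```typescript
// Hypothetical sketch of the validation layer: action whitelist,
// €50/day approval threshold, dry-run-by-default on writes.
// All identifiers are illustrative assumptions.

type ActionType =
  | "pause_ad_set" | "refresh_metrics"            // example autonomous actions
  | "update_budget" | "create_ad_variant";        // example sensitive actions

const AUTONOMOUS: ReadonlySet<ActionType> = new Set<ActionType>([
  "pause_ad_set", "refresh_metrics",
]);

const DAILY_BUDGET_CAP_EUR = 50; // above this, a human must approve

interface ProposedAction {
  type: ActionType;
  campaignId: string;
  dailyBudgetEur: number;                     // target campaign's daily budget
  diff: { before: unknown; after: unknown };  // shown to the human reviewer
}

interface Verdict {
  mode: "dry-run";            // every write starts as a dry run
  requiresApproval: boolean;
  reason: string;
}

function validate(action: ProposedAction): Verdict {
  if (action.dailyBudgetEur > DAILY_BUDGET_CAP_EUR) {
    return { mode: "dry-run", requiresApproval: true,
             reason: `campaign spends > €${DAILY_BUDGET_CAP_EUR}/day` };
  }
  if (!AUTONOMOUS.has(action.type)) {
    return { mode: "dry-run", requiresApproval: true,
             reason: `${action.type} is not on the autonomous whitelist` };
  }
  return { mode: "dry-run", requiresApproval: false, reason: "whitelisted" };
}
```

The point of the design: the model can only *propose*; applying anything is a separate, human-confirmed step.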
What Claude does well
- Anomaly detection: CPA drifting +30% over 48h, audience saturation (frequency > 4), ad set burning budget without converting. Claude surfaces them in 30 seconds instead of a manual deep-dive.
- Creative variant generation: from a brand bible + 3 current winners, Claude proposes 8 textual variants to test. ~30% survive human review; ~50% of the survivors beat the control on CTR.
- Weekly synthesis: 4 structured paragraphs (top performers, alerts, recommendations, weekly focus) instead of a Looker screenshot to parse.
- Scaling suggestions with quantified justification: "double Campaign X budget — ROAS 3.8 stable over 14 days, frequency 1.2".
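The anomaly rules in the first bullet are simple enough to write down. A minimal sketch, using the thresholds stated above (CPA +30% over 48h, frequency > 4, spend without conversions); the metric names and function signatures are assumptions, not the actual tool implementation:

```typescript
// Illustrative anomaly rules mirroring the checks described above.
// Thresholds come from the article; all identifiers are assumptions.

interface AdSetSnapshot {
  adSetId: string;
  cpa48hAgo: number;   // cost per acquisition, two days ago
  cpaNow: number;
  frequency: number;   // average impressions per user
  spendEur: number;
  conversions: number;
}

interface Alert { adSetId: string; kind: string; detail: string; }

function detectAnomalies(s: AdSetSnapshot): Alert[] {
  const alerts: Alert[] = [];

  // CPA drifting +30% or more over 48h
  const drift = (s.cpaNow - s.cpa48hAgo) / s.cpa48hAgo;
  if (drift >= 0.30) {
    alerts.push({ adSetId: s.adSetId, kind: "cpa_drift",
                  detail: `CPA +${Math.round(drift * 100)}% over 48h` });
  }

  // Audience saturation: frequency above 4
  if (s.frequency > 4) {
    alerts.push({ adSetId: s.adSetId, kind: "audience_saturation",
                  detail: `frequency ${s.frequency.toFixed(1)} > 4` });
  }

  // Budget burning without converting
  if (s.spendEur > 0 && s.conversions === 0) {
    alerts.push({ adSetId: s.adSetId, kind: "budget_burn",
                  detail: `€${s.spendEur} spent, 0 conversions` });
  }
  return alerts;
}
```

The value isn't the rules themselves — any dashboard can threshold a metric — it's that Claude runs them across every ad set, every morning, and explains the flagged ones in context.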
The guardrails we added (after getting burned)
Month 1, week 3: Claude doubled a budget by misinterpreting an ambiguous message ("we can push a bit on this one"). Estimated loss: ~€800. Not catastrophic, but educational.
Since then, three mandatory layers:
- Strict action whitelist: 8 actions allowed autonomously, 4 sensitive ones require explicit confirmation via human button.
- Dry-run by default on every write action: Claude shows the before/after diff, we confirm, we apply.
- Eval dataset of 50 historical edge cases (ambiguous briefs, missing data, priority conflicts) replayed weekly. If score drops below 90%, deployment is blocked.
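The third guardrail — the weekly eval gate — reduces to a small harness. A sketch under stated assumptions: `EvalCase`, `Runner`, and the expected-outcome labels are placeholders we invented for illustration; only the 90% threshold and the replay-and-block idea come from the text above:

```typescript
// Sketch of the weekly eval gate: replay historical edge cases and
// block deployment if the pass rate drops below 90%.
// `EvalCase`, `Runner`, and the label set are illustrative placeholders.

interface EvalCase {
  id: string;
  brief: string;  // e.g. an ambiguous instruction like "we can push a bit"
  expected: "ask_clarification" | "propose_dry_run" | "no_action";
}

type Runner = (c: EvalCase) => EvalCase["expected"];

const PASS_THRESHOLD = 0.9; // below this, deployment is blocked

function evalGate(cases: EvalCase[], run: Runner): { score: number; deploy: boolean } {
  const passed = cases.filter((c) => run(c) === c.expected).length;
  const score = passed / cases.length;
  return { score, deploy: score >= PASS_THRESHOLD };
}
```

Note that the €800 incident from month 1 becomes one of the 50 cases: its expected label is "ask for clarification", not "double the budget", so any regression toward over-eager action fails the gate before it ships.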
Numbers after 5 months
Three D2C clients, ~€120k cumulative monthly spend:
- Human media buyer time: –60% on repetitive tasks (daily review, reports, A/B test setup)
- Average ROAS: +18% on piloted accounts vs a similar non-assisted control
- Time-to-insight: 30 minutes on average to identify a scaling opportunity vs 2-3 days historically
- Media buyer satisfaction: 9/10 (internal) — they want to keep the tool
But most importantly: Claude has not replaced the human media buyer. It augmented them. The buyer does fewer spreadsheets and more creative strategy — that's where the value is, and Claude doesn't touch it.
Open source? Not yet, and here's why.
The MCP server runs in production. But it has three client-specific hacks we don't want to generalize publicly (strategic bid logic, custom audience structure, proprietary naming convention parsing).
Once we abstract those, we'll release it publicly along with a detailed technical post. If you want us to deploy it on your Facebook Ads account in the meantime, let's talk.