TL;DR

Most companies planning to use MCP are choosing to outsource rather than build in-house. The reasons are consistent: MCP security requires specialized expertise most teams don't have, testing and validating servers is more resource-intensive than expected, and poorly written tool descriptions cause agents to misbehave in production. Add the hidden cost of enterprise sandbox access — weeks of procurement and thousands in licensing — and the case for outsourcing becomes hard to argue with.

MCP adoption is accelerating. The protocol that Anthropic introduced in late 2024 has gone from a niche developer experiment to production infrastructure in under two years. Companies building AI agents need their agents to read from and write to enterprise systems — CRMs, ERPs, HRIS platforms, file storage — and MCP is how that connection works.

But here's what the hype cycle doesn't tell you: roughly three out of four companies with MCP on their roadmap are planning to hand the actual building to specialists rather than doing it themselves. That finding comes from a recent survey of 215 product managers and engineers at companies actively building AI agents — and it aligns with what we see in every sales conversation.

That number isn't surprising once you understand what building an MCP server for enterprise use actually involves. It's not a weekend project. It's not a wrapper around an API. It's a security-critical piece of infrastructure that sits between your AI agent and your customer's most sensitive data.

Here's why most teams are making the rational choice to outsource.

Why is MCP server security so hard to get right?

Seven in ten companies cite security vulnerabilities as the primary reason they won't build MCP servers in-house. And they're right to worry.

The MCP security track record so far is alarming. In just the first two months of 2026, security researchers filed over 30 CVEs targeting MCP servers, clients, and infrastructure. These weren't exotic zero-days — they were missing input validation, absent authentication, and blind trust in tool descriptions. The root causes were basic, but the consequences were severe: remote code execution, credential leaks, data exfiltration.

Some of the highest-profile incidents hit the most trusted names. Anthropic's own official Git MCP server had three medium-severity vulnerabilities — path traversal, arbitrary repository creation, and file overwriting — all exploitable through prompt injection. If the reference implementation gets security wrong, that's a signal the entire ecosystem needs deeper scrutiny.

The Astrix Research team's analysis of over 5,200 open-source MCP server implementations found that 88% require credentials, but over half rely on static API keys or personal access tokens that are rarely rotated. Only 8.5% use OAuth. Nearly 80% of API keys are passed via plain environment variables. A single server compromise leaks everything.
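The gap between those two credential patterns is easy to see in code. Below is a minimal sketch contrasting the static-key anti-pattern the analysis flags with a short-lived token fetched on demand; `fetch_oauth_token` and the `CRM_API_KEY` variable name are illustrative placeholders, not part of any real MCP server.

```python
import os
import time

# Anti-pattern flagged above: a long-lived API key read straight from a
# plain environment variable. Anything that can read the process
# environment (crash dumps, misconfigured logging, a compromised server)
# gets a key that is rarely, if ever, rotated.
def get_static_key() -> str:
    return os.environ["CRM_API_KEY"]  # hypothetical variable name

# Safer pattern: a short-lived token fetched on demand and cached only
# until just before expiry. `fetch_oauth_token` stands in for your
# identity provider's client-credentials flow and must return
# (token, ttl_seconds).
_token = {"value": None, "expires_at": 0.0}

def get_access_token(fetch_oauth_token) -> str:
    if time.time() >= _token["expires_at"]:
        value, ttl = fetch_oauth_token()
        _token["value"] = value
        _token["expires_at"] = time.time() + ttl - 30  # refresh early
    return _token["value"]
```

A compromised process now leaks a token that expires in minutes, not a key that lives for years.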

This isn't the kind of security work you hand to a full-stack engineer who's also building product features. MCP security requires understanding tool poisoning attacks (where malicious tool descriptions trick agents into executing unintended operations), credential management patterns specific to enterprise APIs, and the OWASP Agentic Security Top 10 framework that maps almost perfectly to the MCP vulnerability landscape.
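To make tool poisoning concrete: because tool descriptions flow into the agent's prompt, one basic mitigation is to scan descriptions for instruction-like payloads before registering them. The sketch below is a simplified illustration of that idea; the patterns are examples I chose for demonstration, not a complete or production-grade defense.

```python
import re

# Illustrative red flags: phrasing that addresses the model rather than
# describing the tool. Real defenses combine pattern checks with
# allowlisting, human review, and pinning descriptions between sessions.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"do not (tell|inform|mention)",
    r"<(system|important|secret)>",
    r"send .* to http",
]

def looks_poisoned(description: str) -> bool:
    """Return True if a tool description contains instruction-like text."""
    text = description.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)
```

A check like this would have flagged the classic poisoned-description shape: a plausible first sentence followed by hidden instructions aimed at the model.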

How do you test an MCP server before production?

A similar proportion of companies — roughly seven in ten — say that validating and testing MCP servers is too resource-intensive to handle alongside product development.

This tracks with reality. Testing an MCP server isn't like testing a REST API. You're not just verifying that endpoints return correct responses. You're testing whether an AI agent — which is non-deterministic by nature — consistently selects the right tool, passes the right parameters, handles errors gracefully, and doesn't take unintended actions.

That means measuring hit rates (how often the agent selects the intended tool), success rates (how often a selected call completes correctly), and unnecessary tool calls that add latency, cost, and confusion. A tool with a high hit rate but a low success rate tells you the descriptions are clear but execution is broken. A high success rate with a low hit rate means the tools aren't discoverable enough.
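The metrics above can be sketched as a small evaluation harness. The field names and the decision to count mis-routed calls as "unnecessary" are simplifications for illustration; a real harness would also track latency and repeated calls.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    expected_tool: str  # the tool the test case intended
    chosen_tool: str    # the tool the agent actually picked
    succeeded: bool     # did the call complete correctly?

def evaluate(calls: list[ToolCall]) -> dict:
    """Compute hit rate and success rate over a batch of logged calls."""
    hits = [c for c in calls if c.chosen_tool == c.expected_tool]
    return {
        # How often the agent picked the intended tool.
        "hit_rate": len(hits) / len(calls),
        # Of the correctly routed calls, how many executed successfully.
        "success_rate": sum(c.succeeded for c in hits) / max(len(hits), 1),
        # Mis-routed calls, a proxy for the unnecessary calls described above.
        "wrong_tool_calls": sum(c.chosen_tool != c.expected_tool for c in calls),
    }
```

Run the same prompt suite repeatedly, because the agent is non-deterministic, a single pass tells you very little.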

You also need to test security scenarios: what happens when a malicious prompt tries to trick the agent into calling the wrong tool? What happens when tool descriptions change between sessions? What happens when the MCP server goes down and comes back up — does the agent reconnect, or do tool calls silently fail?

Most product teams don't have the testing frameworks, the security playbooks, or the time to do this properly. They have a product roadmap and a CEO asking when the AI feature ships.

Why do MCP tool descriptions break AI agents?

Over half of companies report that vague or incomplete tool descriptions in off-the-shelf MCP servers cause agents to call wrong tools or skip actions entirely.

This is arguably the most underappreciated problem in MCP development. Tool descriptions aren't documentation — they're part of your AI agent's prompt. The LLM reads them to decide which tool to call and how to fill the arguments. A poorly worded description doesn't just cause confusion; it can silently change your agent's behavior in ways that are hard to detect and harder to debug.

When your agent sees 20+ tool definitions with overlapping descriptions, it starts guessing. It picks the wrong tool. It passes malformed arguments. It makes unnecessary calls that add latency and break customer trust. Research on MCP server composition found that most tool-calling mistakes happen because the LLM is forced to choose from too many overlapping definitions — a problem that gets worse as you connect to more enterprise systems.

Writing good tool descriptions is a craft. It requires understanding how the LLM interprets metadata, how to scope tool definitions narrowly enough to avoid confusion, and how to structure parameter schemas so the agent fills them correctly. It also requires iterative testing: write the description, observe agent behavior, adjust, repeat.
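Here is what that craft looks like in practice: two versions of the same tool definition in the JSON-Schema style MCP tools use. The names and field contents are invented for illustration; the point is the contrast in scoping.

```python
# Vague: the model has to guess what "things" means, when to call this
# tool, and what belongs in "q".
vague_tool = {
    "name": "search",
    "description": "Search for things.",
    "inputSchema": {
        "type": "object",
        "properties": {"q": {"type": "string"}},
    },
}

# Scoped: states what the tool does, when to use it, when NOT to use it,
# and exactly how to fill each parameter.
scoped_tool = {
    "name": "crm_search_contacts",
    "description": (
        "Search CRM contacts by name or email. Use ONLY for people "
        "lookups; for companies use crm_search_accounts. Returns at "
        "most 10 matches, sorted by last activity."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Full or partial contact name, or an email address.",
            },
        },
        "required": ["query"],
    },
}
```

With twenty tools in context, the scoped version is what keeps the agent from guessing between overlapping definitions.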

A specialist team that has built MCP servers for SAP, NetSuite, Salesforce, and other enterprise systems has already gone through these iterations. They know which descriptions work and which ones cause agents to misfire. Your team would be starting from scratch.

How much does enterprise sandbox access cost?

Here's a cost that doesn't show up in any MCP tutorial: to build an MCP server that connects to an enterprise system, you need a sandbox environment of that system to develop and test against. And sandbox access is neither cheap nor fast.

A NetSuite sandbox costs roughly 10% of your annual license — which for most mid-sized deployments means EUR 5,000–15,000 per year. SAP sandbox environments can run significantly higher depending on the modules. And these aren't instant: procurement, provisioning, and configuration can take weeks before your engineers write a single line of code.

This matters because MCP servers are often built before your first customer uses them. You're building the server speculatively — betting that customers will need SAP or NetSuite connectivity — and you need a working sandbox to develop and test against. If you're building MCP servers for three or four enterprise systems, the sandbox costs alone can reach EUR 30,000–50,000 annually before you've delivered anything.

A specialist team that maintains its own sandbox environments absorbs this cost across multiple clients. You get the benefit of real enterprise system access without the procurement overhead, the licensing fees, or the weeks of waiting. Your engineers don't need SAP credentials. They don't need NetSuite training. They don't need to learn D365's authentication quirks. The specialist already has all of that.

How much does it cost to build an MCP server in-house vs outsourcing?

Let's put realistic numbers on it.

Building in-house:

  • Engineer time: 4–8 weeks per MCP server (first build)
  • Sandbox access: EUR 5,000–15,000/year per enterprise system
  • Security audit and testing: 2–4 additional weeks
  • Ongoing maintenance: 10–20% of build time annually
  • Opportunity cost: every week your engineers spend on MCP plumbing

Outsourcing to a specialist:

  • Delivery: 1–2 weeks per MCP server
  • Cost: EUR 5,000–15,000 per server (one-time)
  • Sandbox included (specialist maintains their own environments)
  • Security testing included
  • Full code ownership after handoff
  • Your engineers stay on the product roadmap

For most B2B SaaS companies, the math is clear. Unless you plan to make MCP server development a core competency — and hire dedicated engineers for it — outsourcing is faster, cheaper, and less risky.

When does building MCP servers in-house make sense?

To be fair, there are scenarios where in-house development is the right call.

Your team has engineers who already understand both MCP and the target enterprise API. They've built MCP servers before. They understand tool description design, credential management, and agent testing. They're not learning — they're executing.

The MCP server is core to your product's competitive moat. If your entire product is an AI agent that connects to enterprise systems, then MCP server quality is your differentiator. Outsourcing your core differentiator rarely makes sense.

You're connecting to your own internal systems, not customer-facing enterprise platforms. Internal MCP servers that connect to your own database or API are simpler — no sandbox procurement, no enterprise API quirks, no per-customer variation.

If none of these apply, outsourcing is likely the better path.

What should you look for in an MCP development partner?

If you decide to outsource, not all teams are equal. Here's what separates a genuine MCP specialist from a generalist dev shop:

They have enterprise sandbox environments. If they need to provision sandbox access on your dime and your timeline, they're not a specialist — they're learning on the job.

They show you tool description examples. Ask to see how they've described tools for SAP or NetSuite. The quality of the descriptions tells you more about their expertise than any sales pitch.

They deliver full source code. No subscription model, no managed runtime, no vendor lock-in. You own the MCP server after handoff.

They include security testing. Credential management, input validation, tool poisoning resistance — these should be part of delivery, not a separate engagement.

They deliver documentation. Data flow diagrams, runbooks, and maintenance guides so your team can own the server going forward.


FAQ

Why are most companies outsourcing MCP server development?

Three reasons dominate: security vulnerabilities require specialized expertise (credential leaks, tool poisoning, prompt injection), testing and validation is too resource-intensive for product teams to handle alongside feature development, and poor tool descriptions in off-the-shelf servers cause agents to misbehave. Add enterprise sandbox costs and procurement delays, and in-house development becomes impractical for most teams.

How much does it cost to build an MCP server?

In-house: EUR 15,000–40,000+ per server when you factor in engineer time, sandbox access, security testing, and opportunity cost. Outsourced to a specialist: EUR 5,000–15,000 per server, one-time, with sandbox access and security testing included. The specialist route is typically 2–3x faster and includes enterprise system expertise that in-house teams would need months to develop.

What are the biggest security risks with MCP servers?

The top risks are credential exposure (over half of MCP servers rely on static API keys), tool poisoning (malicious tool descriptions that trick agents into unintended actions), and insufficient observability (most teams can't track what their agents are doing through MCP connections). The Endor Labs analysis found that 82% use file operations vulnerable to path traversal and 67% use sensitive APIs prone to code injection.

Is MCP dying?

No. MCP adoption is accelerating — the ecosystem grew from a few hundred servers in early 2025 to over 20,000 implementations on GitHub by late 2025. The security concerns are real, but they're driving maturity, not abandonment. Companies are being more careful about how they build and who they trust to build MCP servers, which is why outsourcing to specialists is increasing.

What's the difference between an MCP server and a traditional API integration?

A traditional API integration is a direct, hardcoded connection between your product and a specific system. An MCP server is a standardized layer that lets AI agents discover and call tools dynamically. The agent doesn't need to know the specific API — it discovers available tools at runtime and decides which ones to use.
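The distinction can be shown with a toy model (this is deliberately not the real MCP SDK, just an illustration of the discovery pattern): the traditional integration hardcodes one endpoint, while the MCP-style server advertises its tools at runtime and the agent routes by name.

```python
# Traditional integration: the product knows this one endpoint and
# nothing else. Adding a capability means shipping new client code.
def traditional_integration(order_id: str) -> str:
    return f"GET /api/v2/orders/{order_id}"  # hypothetical endpoint

# MCP-style: the server registers tools, the agent discovers them at
# runtime via list_tools() and picks one to call. Simplified sketch,
# the real protocol uses JSON-RPC messages like tools/list and tools/call.
class ToyMCPServer:
    def __init__(self):
        self._tools = {}

    def register(self, name: str, description: str, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self) -> dict:
        """What the agent sees: names and descriptions, not code."""
        return {n: t["description"] for n, t in self._tools.items()}

    def call_tool(self, name: str, **kwargs):
        return self._tools[name]["fn"](**kwargs)
```

The agent never imports the server's code; it only sees the advertised names and descriptions, which is exactly why description quality matters so much.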

Can I use off-the-shelf MCP servers instead of building custom ones?

You can, but most production teams find them insufficient. Off-the-shelf servers often have vague tool descriptions, limited enterprise API coverage, and the security issues described above. For internal tools or developer workflows, community servers can work well. For customer-facing AI features that touch enterprise data, custom MCP servers built for your specific use case are typically necessary.