Building MCP Servers for Enterprise AI
Enterprise AI has an integration problem. Not a model problem — the models are good enough. Not a data problem — most companies have more data than they know what to do with. The problem is connecting AI to the systems that actually run the business. Every new AI application needs access to your ERP, your CRM, your document storage, your internal APIs. And every one of those connections used to mean custom code.
If you have three AI applications and five backend systems, that is fifteen unique integrations. Add a sixth system? Three more. Swap out a model provider? Rebuild everything. This is the N×M integration problem, and it has been quietly killing enterprise AI projects for years.
The Model Context Protocol (MCP) changes that equation. Originally released by Anthropic as an open-source standard and now governed by an open specification, MCP provides a universal interface between AI applications and external systems. Build an MCP server for your ERP once, and any MCP-compatible AI host — Claude, ChatGPT, VS Code, your own custom agent — can connect to it. The N×M problem becomes N+M.
Think of it like USB-C for AI. Before USB-C, every device had its own connector. MCP gives AI integrations that same standardization: one protocol, universal connectivity.
The MCP Architecture
MCP follows a client-server architecture with four key participants and two communication layers.
Hosts, Clients, and Servers
An MCP host is any AI application that needs to talk to external systems — Claude Desktop, VS Code with Copilot, or your own LangGraph-based agent. The host creates one MCP client for each server it connects to. The client manages the connection lifecycle: initialization, capability negotiation, and the ongoing request-response exchange. An MCP server is the program that exposes your enterprise system's capabilities to the AI world. It is the bridge between your SAP instance and any AI agent that needs to query it.
This separation matters. The host does not need to know anything about how your database works. The server does not need to know which AI model is asking questions. The client is the intermediary that speaks both languages.
Transports: Local and Remote
MCP supports two transport mechanisms. The stdio transport launches the server as a subprocess — the client writes JSON-RPC messages to stdin and reads responses from stdout. Zero network overhead, ideal for local tools. The Streamable HTTP transport runs the server as an independent HTTP service that can handle multiple client connections, using POST requests for client-to-server messages and optional Server-Sent Events for streaming responses. This is the transport you will use for enterprise deployments — it scales, it supports standard HTTP authentication, and it can sit behind your existing API gateway.
Under the hood, both transports use the same JSON-RPC 2.0 message format. Your server code does not change when you switch transports — only the wiring does.
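To make that concrete, here is what a single tool invocation looks like on the wire. It is the same JSON-RPC 2.0 exchange whether it travels over stdio or Streamable HTTP; the tool name and arguments are illustrative, not part of the protocol.

```typescript
// A tools/call request as it crosses the transport: JSON-RPC 2.0,
// identical for stdio and Streamable HTTP. The tool name and arguments
// are examples, not part of the protocol.
const request = {
  jsonrpc: "2.0",
  id: 42,
  method: "tools/call",
  params: {
    name: "search_orders",
    arguments: { customerId: "C-1042", status: "open" },
  },
};

// The server's response carries the result under the same id.
const response = {
  jsonrpc: "2.0",
  id: 42,
  result: {
    content: [{ type: "text", text: "3 open orders found for C-1042" }],
  },
};
```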
The Three Primitives: Resources, Tools, and Prompts
Everything an MCP server can do falls into three categories. Understanding these primitives is the key to designing a good server.
Resources: Passive Data for Context
Resources are read-only data sources that provide context to the AI application. They are controlled by the application, not the model — the host decides which resources to fetch and when to include them in the conversation. Each resource has a unique URI and a declared MIME type.
Resources come in two flavors: direct resources with fixed URIs (like erp://schema/orders) and resource templates with parameters (like crm://contacts/{department}). In an enterprise context, resources are how you expose database schemas, configuration documents, knowledge base articles, or any reference data the AI needs for grounding.
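As a rough sketch with the TypeScript SDK, exposing a fixed schema document and a parameterized contact list might look like the following. Method names and return shapes follow the SDK at the time of writing, and loadOrderSchema and loadContacts are placeholders for your own data access code.

```typescript
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "erp-server", version: "1.0.0" });

// Placeholders for your own data access code.
async function loadOrderSchema(): Promise<string> {
  return JSON.stringify({ table: "orders", columns: ["id", "customer_id", "status", "total"] });
}
async function loadContacts(department: string): Promise<string> {
  return JSON.stringify([{ name: "Example Contact", department }]);
}

// Direct resource: a fixed URI the host can read for grounding.
server.resource("order-schema", "erp://schema/orders", async (uri) => ({
  contents: [{ uri: uri.href, mimeType: "application/json", text: await loadOrderSchema() }],
}));

// Resource template: the {department} parameter is filled in by the client.
server.resource(
  "contacts-by-department",
  new ResourceTemplate("crm://contacts/{department}", { list: undefined }),
  async (uri, { department }) => ({
    contents: [{ uri: uri.href, mimeType: "application/json", text: await loadContacts(String(department)) }],
  })
);
```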
Tools: Actions the Model Can Take
Tools are executable functions that the AI model can invoke. Unlike resources, tools are model-controlled — the LLM decides when to call them based on the user's request. Each tool has a name, a description, and a JSON Schema that defines its input parameters.
This is where things get powerful — and where enterprise concerns become critical. A tool might create an order in SAP, update a contact in Salesforce, or approve a purchase request. Tools are actions with consequences, which is why MCP emphasizes human oversight: approval dialogs, permission settings, and activity logs are all part of the design.
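To make the shape concrete, here is the kind of entry a server returns from tools/list. The JSON Schema tells the model exactly which parameters the tool accepts; the tool itself is illustrative.

```typescript
// An entry in a tools/list response: name, description, and a JSON Schema
// describing the input. The model uses this to decide when and how to call it.
const createOrderTool = {
  name: "create_order",
  description: "Create a new sales order for an existing customer.",
  inputSchema: {
    type: "object",
    properties: {
      customerId: { type: "string", description: "Internal customer identifier" },
      items: {
        type: "array",
        items: {
          type: "object",
          properties: {
            sku: { type: "string" },
            quantity: { type: "integer", minimum: 1 },
          },
          required: ["sku", "quantity"],
        },
      },
      requestedDeliveryDate: { type: "string", format: "date" },
    },
    required: ["customerId", "items"],
  },
};
```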
Prompts: Reusable Interaction Templates
Prompts are pre-built instruction templates that guide the AI model on how to work with specific tools and resources. They are user-controlled — a user selects a prompt to structure the interaction. Think of prompts as recipes: "Summarize this quarter's sales pipeline" or "Draft a purchase order based on this requisition." They combine system instructions, few-shot examples, and references to available tools and resources into a coherent workflow.
For enterprise use, prompts are quietly important. They encode institutional knowledge — the specific way your organization queries the data warehouse, the format compliance expects for audit reports, the steps involved in a procurement workflow. They turn tribal knowledge into reusable AI instructions.
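Concretely, a prompt is listed with a name, a description, and the arguments it accepts; when a user selects it, the server returns ready-to-use messages. A sketch, with an illustrative workflow:

```typescript
// How a prompt appears in a prompts/list response: a named template with
// declared arguments that the host can render in its UI.
const quarterlyAnalysisPrompt = {
  name: "quarterly_order_analysis",
  description: "Analyze order volume, margins, and delays for a given quarter.",
  arguments: [
    { name: "quarter", description: "Quarter to analyze, e.g. 2025-Q1", required: true },
  ],
};

// What the server might return from prompts/get: messages that seed the
// conversation and point the model at the right tools and resources.
const promptResult = {
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text:
          "Analyze all orders for 2025-Q1. Use the search_orders tool to pull the data, " +
          "group results by customer segment, and flag any orders delayed more than 5 days.",
      },
    },
  ],
};
```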
Building an MCP Server: A Conceptual Walkthrough
Let us walk through what building an MCP server looks like in practice. Say you want to give AI agents access to your company's order management system.
Step 1: Define your primitives. Start by mapping your system's capabilities to MCP primitives. What data should the AI be able to read? Those are resources. What actions should it be able to take? Those are tools. What workflows should it support? Those are prompts.
For an order management system, you might expose:
- Resources: Order schema, product catalog, customer segments, warehouse inventory levels
- Tools: Search orders, create order, update order status, generate invoice
- Prompts: "Process a return" workflow, "Quarterly order analysis" template, "Supplier reorder" checklist
Step 2: Choose your SDK and transport. Official MCP SDKs are available in TypeScript, Python, and a growing set of other languages. The TypeScript SDK (@modelcontextprotocol/sdk) plugs into Express or a plain Node.js HTTP server with little glue code, which makes it straightforward to integrate into existing backend stacks. For enterprise deployments, you will almost certainly use the Streamable HTTP transport so the server can run as a standalone service behind your infrastructure.
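A minimal bootstrap with the TypeScript SDK behind an Express app might look like the sketch below. The transport wiring has changed across SDK versions, so treat the exact constructor options and method names as a starting point rather than production code.

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const server = new McpServer({ name: "order-management", version: "1.0.0" });
// ...register tools, resources, and prompts here (next step)...

const app = express();
app.use(express.json());

// Each POST to /mcp carries a JSON-RPC message; the transport parses it,
// routes it to the server, and streams the response back (optionally via SSE).
app.post("/mcp", async (req, res) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // stateless mode; supply a generator to use sessions
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000, () => console.log("MCP server listening on :3000"));
```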
Step 3: Implement the handlers. Each primitive type has standardized methods: tools/list and tools/call for tools, resources/list and resources/read for resources, prompts/list and prompts/get for prompts. Your server registers handlers for these methods. When a client calls tools/list, your server returns the catalog of available tools with their JSON Schema definitions. When it calls tools/call with specific parameters, your server executes the business logic — querying your database, calling an internal API, whatever the operation requires — and returns the result.
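With the TypeScript SDK, most of this reduces to registering tools: the SDK answers tools/list from your registrations and routes tools/call to your handler. A sketch, where searchOrders stands in for your real data access layer and the schema style follows the SDK at the time of writing:

```typescript
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "order-management", version: "1.0.0" });

// Placeholder for your real data access layer (database query, internal API call).
async function searchOrders(customerId: string, status: string) {
  return [{ id: "SO-1001", customerId, status, total: 1280.5 }];
}

server.tool(
  "search_orders",
  "Search sales orders by customer and status.",
  {
    customerId: z.string().describe("Internal customer identifier"),
    status: z.enum(["open", "shipped", "invoiced"]).describe("Order status filter"),
  },
  async ({ customerId, status }) => {
    const orders = await searchOrders(customerId, status);
    // Tool results are returned as content blocks; plain text is the simplest form.
    return {
      content: [{ type: "text" as const, text: JSON.stringify(orders, null, 2) }],
    };
  }
);
```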
Step 4: Handle lifecycle management. MCP is a stateful protocol. Clients and servers negotiate capabilities during initialization — your server declares which primitives it supports, the client declares what it can handle. This capability negotiation means servers can evolve without breaking older clients, and clients can adapt to servers with different feature sets.
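Under the hood, that negotiation is simply the first JSON-RPC exchange of the session. A simplified view, with an illustrative protocol version string:

```typescript
// The client opens the session with an initialize request declaring what it supports...
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-06-18", // illustrative; use the version your SDK targets
    capabilities: { sampling: {} },
    clientInfo: { name: "acme-agent", version: "0.3.0" },
  },
};

// ...and the server answers with the primitives it actually implements.
// A client that does not understand prompts simply never calls prompts/list.
const initializeResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2025-06-18",
    capabilities: { tools: {}, resources: {}, prompts: {} },
    serverInfo: { name: "order-management", version: "1.0.0" },
  },
};
```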
The SDKs abstract away most of the protocol plumbing. Your job is defining what your system can do and writing the business logic that makes it happen.
Enterprise Considerations: Where It Gets Serious
Building a demo MCP server takes an afternoon. Building one that is production-ready for an enterprise environment takes significantly more thought. Here is what separates a proof-of-concept from a system your CISO will approve.
Authentication and Authorization
The MCP specification includes a full OAuth 2.1-based authorization framework for HTTP transports. Servers can require bearer tokens, support dynamic client registration (so new AI applications can connect without manual setup), and integrate with your existing identity provider. The specification supports both authorization code grants (when a human user is in the loop) and client credentials grants (for service-to-service communication).
But authentication is just the start. In production, you need fine-grained authorization: which user can access which tools, which resources are available to which roles, what happens when someone tries to call a tool they should not have access to. This is where your MCP server needs to integrate with your existing RBAC or ABAC system, not reinvent one.
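In practice that often takes the form of a thin authorization layer in front of the transport: validate the bearer token with your identity provider, then map the caller's roles to the tools they may invoke. In the sketch below, verifyWithIdP and roleToolMap are placeholders for your own IdP client and policy store.

```typescript
import type { Request, Response, NextFunction } from "express";

// Placeholder: validate the bearer token with your identity provider and return
// the caller's identity and roles, e.g. by verifying a JWT against the IdP's JWKS.
async function verifyWithIdP(token: string): Promise<{ sub: string; roles: string[] }> {
  throw new Error("wire this to your identity provider");
}

// Which roles may call which tools. Ideally sourced from your existing
// RBAC/ABAC system rather than hardcoded like this.
const roleToolMap: Record<string, string[]> = {
  "sales-rep": ["search_orders", "create_order"],
  finance: ["search_orders", "generate_invoice"],
};

export async function mcpAuth(req: Request, res: Response, next: NextFunction) {
  const token = req.headers.authorization?.replace(/^Bearer /, "");
  if (!token) return res.status(401).json({ error: "missing bearer token" });

  try {
    const identity = await verifyWithIdP(token);
    // Stash the caller's identity and allowed tools so the tools/call handler can enforce them.
    res.locals.userId = identity.sub;
    res.locals.allowedTools = identity.roles.flatMap((role) => roleToolMap[role] ?? []);
    next();
  } catch {
    res.status(401).json({ error: "invalid token" });
  }
}
```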
Audit Trails and Observability
Every tool invocation through your MCP server is an action taken on a production system. You need to log who called what, when, with which parameters, and what the result was. This is not just good practice: it is a requirement under regulations like the EU AI Act and audit frameworks like SOC 2. Your MCP server should emit structured logs and traces (OpenTelemetry is the natural choice) that feed into your existing observability stack.
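One pragmatic pattern is to wrap every tool handler so each invocation produces both a trace span and a structured audit record. The wrapper and field names below are illustrative; the OpenTelemetry API calls are the standard ones.

```typescript
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("mcp-order-server");

type ToolHandler<A, R> = (args: A) => Promise<R>;

// Wrap a tool handler so every call produces a span plus a structured audit record.
// userId would come from the authenticated request context (see the auth section).
export function withAudit<A, R>(
  toolName: string,
  userId: string,
  handler: ToolHandler<A, R>
): ToolHandler<A, R> {
  return (args: A) =>
    tracer.startActiveSpan(`tool:${toolName}`, async (span) => {
      const startedAt = new Date().toISOString();
      try {
        const result = await handler(args);
        // Audit record: who, what, when, with which parameters, and the outcome.
        // console.log stands in for your structured logging pipeline.
        console.log(JSON.stringify({ event: "tool_call", toolName, userId, startedAt, args, outcome: "success" }));
        return result;
      } catch (err) {
        span.setStatus({ code: SpanStatusCode.ERROR });
        console.log(JSON.stringify({ event: "tool_call", toolName, userId, startedAt, args, outcome: "error" }));
        throw err;
      } finally {
        span.end();
      }
    });
}
```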
Permission Boundaries and Human Oversight
MCP is designed with human oversight in mind. Tools can require explicit user approval before execution. You can categorize tools by risk level — read-only queries execute automatically, while write operations require confirmation. For high-stakes actions (approving a purchase order, modifying customer data), you can implement multi-step approval workflows that pause execution until a human signs off. This is not a limitation of the protocol; it is a feature that makes enterprise adoption realistic.
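A simple way to encode this is to tag each tool with a risk level and route anything beyond read-only through an approval step before execution. In the sketch below, requestApproval is a placeholder for whatever approval channel your organization uses, and the risk categories are illustrative.

```typescript
type RiskLevel = "read" | "write" | "high_stakes";

// Illustrative risk categories per tool.
const toolRisk: Record<string, RiskLevel> = {
  search_orders: "read",                 // executes automatically
  update_order_status: "write",          // host asks the user to confirm
  approve_purchase_order: "high_stakes", // pauses until a human signs off
};

// Placeholder: post the pending action to your approval channel (Slack,
// ServiceNow, an internal queue) and resolve once a human approves or rejects.
async function requestApproval(toolName: string, args: unknown, userId: string): Promise<boolean> {
  throw new Error("wire this to your approval workflow");
}

export async function guardedCall(
  toolName: string,
  args: unknown,
  userId: string,
  execute: () => Promise<unknown>
) {
  const risk = toolRisk[toolName] ?? "high_stakes"; // unknown tools default to the strictest level
  if (risk === "high_stakes") {
    const approved = await requestApproval(toolName, args, userId);
    if (!approved) {
      return { content: [{ type: "text" as const, text: "Action rejected by approver." }] };
    }
  }
  return execute();
}
```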
Data Filtering and PII Protection
Your MCP server sits between the AI model and your data. That makes it the natural place to implement data governance: redacting PII before it reaches the model, filtering results based on the user's clearance level, and ensuring sensitive fields never leave your infrastructure. If your AI host sends requests to a cloud-based model, the MCP server is your last line of defense for data sovereignty.
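In code, this is typically a filter applied to every result before it leaves the server. The sketch below keeps the redaction rules deliberately simple; production systems usually lean on a dedicated PII detection library or service.

```typescript
// Fields that never leave the server, regardless of who is asking.
const BLOCKED_FIELDS = new Set(["ssn", "iban", "dateOfBirth"]);

// Naive pattern-based redaction for free-text values; a real deployment would
// use a dedicated PII detection library or service instead.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

export function redact(value: unknown): unknown {
  if (typeof value === "string") return value.replace(EMAIL_RE, "[redacted-email]");
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([key]) => !BLOCKED_FIELDS.has(key))
        .map(([key, v]) => [key, redact(v)])
    );
  }
  return value;
}

// Applied just before a tool or resource result is serialized into the response:
//   const safeResult = redact(rawResult);
```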
MCP vs Custom API Integrations
You might be thinking: why not just build a REST API and have the AI call it? You can. People do. But there are real costs to that approach.
Custom integrations are bespoke by definition. Each one has its own authentication scheme, its own error handling, its own way of describing capabilities to the model. When you switch AI providers or add a new AI application, you start from scratch. There is no discoverability — the AI does not know what your API can do until you manually describe it in a system prompt or tool definition.
MCP gives you standardized capability discovery (clients call tools/list, resources/list, prompts/list), standardized execution (tools/call with JSON Schema validation), standardized lifecycle management, and standardized auth. Your tooling — debuggers like the MCP Inspector, client libraries, observability integrations — works across all your MCP servers. You write it once, test it once, and it works with any compliant host.
The trade-off is that MCP adds a layer of abstraction. For a single, tightly coupled integration, a direct API call is simpler. But the moment you have more than one AI application, more than one backend system, or any expectation that either side will change — MCP pays for itself quickly.
Real-World Use Cases
Where does this land in practice? Here are the patterns we see most often in enterprise environments.
ERP Integration. An MCP server wrapping SAP or Microsoft Dynamics lets AI agents query order status, check inventory levels, and create purchase orders — all through natural language. The server exposes the ERP's database schema as a resource, order operations as tools, and common workflows like "expedite a delayed shipment" as prompts. Finance teams interact with complex ERP data without learning the ERP's interface.
CRM and Sales Intelligence. A Salesforce or HubSpot MCP server gives AI agents access to the entire customer relationship lifecycle. Resources expose pipeline data and customer interaction history. Tools enable contact updates, opportunity creation, and meeting scheduling. Prompts template common analyses like "prepare for a quarterly business review with customer X."
Document Management and Knowledge Bases. SharePoint, Confluence, or custom document repositories become AI-accessible through MCP servers that expose documents as resources (with full-text search as a tool) and common workflows as prompts. The server handles permission mapping — ensuring the AI can only access documents the requesting user is authorized to see.
Internal Tooling and DevOps. MCP servers for GitHub, Jira, or CI/CD platforms let engineering teams use AI to triage issues, review pull requests, or debug deployment failures — with the AI having direct access to logs, metrics, and code repositories through standardized interfaces.
How Laava Builds Production MCP Servers
At Laava, MCP server development fits naturally into our three-layer architecture for enterprise AI. The MCP server lives in the Action layer — it is the integration point where AI reasoning meets real-world systems. But a good MCP server also needs the Context layer (what data to expose, how to structure it) and the Reasoning layer (which prompts to offer, how tools compose into workflows).
Our approach is deliberately boring. We use TypeScript for the server code because it matches our stack and the ecosystem of enterprise backend systems we integrate with. We deploy on Kubernetes because MCP servers need to be scalable, observable, and independently deployable. We wire OAuth 2.1 into the customer's existing identity provider — no shadow auth systems. We instrument everything with OpenTelemetry so every tool call shows up in the customer's observability platform.
We also build MCP servers to be model-agnostic by default. The same server works whether the host is running Claude, GPT-4, Llama, or a fine-tuned domain model. That is the entire point of a standard protocol — and it aligns with our broader conviction that enterprise AI architecture should never be locked to a single vendor.
A typical engagement starts with a four-week Proof of Pilot: we take one enterprise system, build an MCP server for it, connect it to the customer's AI platform of choice, and prove value in production — not in a sandbox. By the end, the customer has a working integration and a clear picture of what scaling to additional systems looks like.
Looking Ahead
MCP is still young, but the trajectory is clear. The ecosystem is growing rapidly — major AI platforms are adopting it as a standard, the specification is maturing with enterprise-grade auth and transport mechanisms, and the community of open-source MCP servers covers an expanding range of systems. The organizations that build their integration layer on MCP today will have a significant head start as AI agents become the primary interface to enterprise software.
The integration layer does not have to be the place where AI projects go to die. With MCP, it can be the place where they come to life.
If you are looking to connect your enterprise systems to AI through MCP, we can help. Learn more about our MCP Server Development services or get in touch to discuss your specific integration challenges.
