Why Connecting AI to Real Systems Is Still Hard
March 22, 2026


Originally published by Dev.to

Part 1 of 6 β€” MCP Article Series

The models themselves work well. For anything self-contained β€” writing, summarising, generating code β€” they are genuinely capable.

But the moment you connect an AI model to your actual systems β€” your order database, your payment gateway, your CRM β€” something changes. The model is capable. The integration is not.

Every connection has to be built by hand. Every system has different authentication, different error formats, different versioning rules. And when something breaks β€” which it does every time an API updates β€” a developer has to fix it.

This is the problem sitting quietly underneath most AI projects. It is not about the model. It is about everything the model needs to reach before it can do real work.

Five AI applications. Three system integrations each. That is 15 integrations total.

At a reasonable estimate of around forty hours per integration, that is roughly six hundred hours of engineering effort.
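The back-of-envelope arithmetic is easy to check. A minimal sketch, using the article's own illustrative figures (five applications, three integrations each, forty hours per integration), not measured data:

```python
# Illustrative estimate from the text above -- not measured data.
ai_applications = 5
integrations_per_app = 3
hours_per_integration = 40

total_integrations = ai_applications * integrations_per_app
total_hours = total_integrations * hours_per_integration

print(total_integrations)  # 15 integrations
print(total_hours)         # 600 hours of engineering effort
```
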

Not to build new features. Not to ship a product. Not to do anything a customer would ever notice.

Just to keep the wiring connected.

That invisible cost is what we will call the integration tax. Every AI team pays it. Most never stop to measure it.

And with every new AI model your team evaluates, that tax compounds.

This is what that pattern looks like in a real architecture:

![The NΓ—M Integration Problem β€” every AI application connects to every system with custom code]

Every line in that diagram represents integration work a team still has to build, test, and maintain.

The NΓ—M Problem

Even a smaller slice of that work adds up quickly: hundreds of development hours before a single feature ships.

Every new system multiplies the effort instead of adding to it. That is the NΓ—M problem.
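As a rough sketch, the difference between point-to-point wiring and a shared interface layer is the difference between multiplication and addition. This is a simplified model (real connection counts vary), but it reproduces the numbers used throughout this article:

```python
def connections_point_to_point(n_apps: int, m_systems: int) -> int:
    """Every app integrates with every system: N x M custom connectors."""
    return n_apps * m_systems

def connections_shared_layer(n_apps: int, m_systems: int) -> int:
    """Every app and every system connects to the shared layer once: N + M."""
    return n_apps + m_systems

# The running example: five AI applications, three systems each.
print(connections_point_to_point(5, 3))   # 15
print(connections_shared_layer(5, 3))     # 8

# The takeaway example: five applications, ten systems.
print(connections_point_to_point(5, 10))  # 50
print(connections_shared_layer(5, 10))    # 15
```

Adding one more system costs N new connectors in the first model, but only one in the second.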

What This Looks Like in Practice

Here is what this pattern looks like when a team actually tries to build with AI.

An eCommerce company builds an AI assistant. Customers can ask about orders, get shipping estimates, check return policies. The product team is excited. The demo works. Leadership approves the project.

Then engineering starts building. Each system the AI needs requires its own integration:

  • Order system: custom connector, OAuth2 flow, database query logic, response parsing
  • Payment gateway: Stripe API client, different auth scheme, different error format
  • Shipping API: third-party integration, webhook handling, carrier-specific responses
  • Inventory system: internal API, separate credentials, different data structure
  • CRM: Salesforce or HubSpot connector, yet another auth pattern
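A sketch of why each bullet above is its own project: every backend speaks a different auth scheme and a different error convention, so almost none of the client code transfers between them. All names and conventions below are hypothetical, for illustration only:

```python
# Hypothetical conventions -- each system differs, so nothing is reusable.

def order_headers(token: str) -> dict:
    """Order system: OAuth2 bearer token."""
    return {"Authorization": f"Bearer {token}"}

def charge_headers(api_key: str) -> dict:
    """Payment gateway: a different auth scheme entirely."""
    return {"Authorization": f"Basic {api_key}"}

def parse_order_response(body: dict) -> dict:
    """Order system convention: errors arrive nested under 'error'."""
    if "error" in body:
        raise RuntimeError(body["error"]["message"])
    return body

def parse_charge_response(body: dict) -> dict:
    """Payment gateway convention: errors are flagged by object == 'error'."""
    if body.get("object") == "error":
        raise RuntimeError(f"{body['type']}: {body['message']}")
    return body

# ...and three more variations for shipping, inventory, and the CRM,
# each with its own quirks to learn, test, and maintain.
```
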

For the developer: weeks of integration work before a single user-facing feature ships.

For the product manager: a timeline that keeps slipping because of infrastructure work that was never on the roadmap.

For leadership: an AI investment that takes months to show any return, and breaks quietly whenever a vendor updates their API.

When the team builds a second AI, they write all five connectors again. A third AI β€” a third time. The same database, the same Stripe account, integrated separately by three teams who each had to figure it out from scratch.

Why AI Makes This Worse, Not Better

Integration complexity is not new. But AI adds two limitations that make the problem harder to solve.

The first is frozen knowledge. Every AI model is trained on data up to a certain date. After that, it knows nothing new.

Ask it about your live order status, your current inventory, or whether a payment cleared ten minutes ago β€” and it either admits it does not know, or gives a plausible answer that is wrong.

This matters for everyone: the developer building a status-check feature, the product manager demoing to a client, the salesperson promising real-time AI insights.

The second is that AI cannot act. Even if an AI could see your live data, it cannot do anything with it unless a developer has written the code first.

It cannot execute a query, trigger a refund, or update a shipping status. The intelligence is real. The gap between understanding and executing is also real.

What this means in practice

A customer asks: "Where is my order?" The AI understands the question. It can reason about shipping delays and delivery windows. But it cannot check your actual order database, and it cannot give a real answer without custom integration code.
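That gap can be sketched in a function-calling style. The tool name, data, and wiring here are hypothetical: the point is that the model can only request an action by name, and a developer still has to implement the actual lookup before a real answer exists.

```python
# Hypothetical sketch: the model emits a tool call, not an answer.
# Executing it is the application's job, using code a developer wrote.

ORDERS = {"A-1001": "in transit, arriving Thursday"}  # stand-in for a real DB

def get_order_status(order_id: str) -> str:
    """The integration code someone must write before the AI can answer."""
    return ORDERS.get(order_id, "order not found")

# What the model produces when it decides it needs live data:
tool_call = {"name": "get_order_status", "arguments": {"order_id": "A-1001"}}

TOOLS = {"get_order_status": get_order_status}
result = TOOLS[tool_call["name"]](**tool_call["arguments"])
print(result)  # "in transit, arriving Thursday"
```

Without the `get_order_status` implementation, the model's only options are "I don't know" or a confident guess.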

This Is Not a Skill Problem

The engineers rebuilding these integrations are not doing it wrong. They are doing the only thing the current ecosystem allows.

There is no standard way for an AI to discover what capabilities a system exposes. There is no shared protocol for how AI should request data or trigger actions. So every team invents their own approach, and the work compounds with every new model, every new system, every new team.

The problem is not skill. It is the absence of a standard.

But We Already Have REST APIs β€” Why Not Use Those?

If you have been building software for more than a year, this question has probably already formed. REST APIs are mature, battle-tested, and already used by every system on that list. Why introduce something new?

REST was designed for human developers

REST APIs were designed for a specific workflow: a developer reads the documentation, understands the endpoints, and writes client code that calls specific URLs with specific parameters.

That process works well. But it requires a human doing the thinking.

REST APIs expose fixed endpoints. Tools like OpenAPI document them well β€” but for developers at build time, not for an AI agent negotiating capabilities at runtime. The client has to be written with explicit knowledge of what the server offers β€” upfront, before execution.

AI agents work differently

An AI agent does not read documentation. It reasons about what it needs mid-task, and it has to discover capabilities at runtime. REST was not designed for that.

REST requires a developer to read the docs, hardcode the connections, and update them when things change. There is no standard way for an AI agent to connect to a system, ask what it can do, and be notified when capabilities change β€” all at runtime, without a developer in the middle.

The key distinction

  • REST API: developer reads docs, writes code, hardcodes the connection
  • MCP: system describes its own capabilities, AI discovers and uses them at runtime
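The distinction above can be sketched in code. The objects and names below are illustrative, not the real MCP wire format (Part 2 covers that); what matters is who holds the knowledge about capabilities, and when:

```python
# Illustrative sketch only -- not the real MCP wire format.

# REST style: the client bakes in what it knows about the server at
# build time. If the server changes, this code silently breaks.
def rest_client_check_order(order_id: str) -> str:
    endpoint = f"/v2/orders/{order_id}"  # hardcoded by a developer
    return f"GET {endpoint}"

# Discovery style: the server describes its own capabilities, and the
# client asks at runtime instead of being written against fixed endpoints.
server_capabilities = [
    {"name": "check_order", "description": "Look up an order's status"},
    {"name": "issue_refund", "description": "Refund a completed payment"},
]

def discover_tools(server: list) -> list:
    """What an AI agent would do first: ask the system what is possible."""
    return [tool["name"] for tool in server]

print(discover_tools(server_capabilities))  # ['check_order', 'issue_refund']
```

In the first style, knowledge of the endpoint lives in client code; in the second, it lives with the system that owns it.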

REST is not going away. Your Stripe integration will still use the Stripe REST API. MCP sits above that layer as the coordination protocol AI agents can reason about. Part 4 covers this comparison in full.

The diagram below shows the same systems and the same AI β€” but with a standard interface layer in between. The connection count drops from fifteen to eight.

![Before vs After: Same systems, connected through a standard interface layer instead of NΓ—M custom integrations]

On the left: 15 custom integrations, each built separately. On the right: 8 connections β€” each AI application and each system capability connects through the layer once.

The systems did not change. Only the way they connect did.

A Better Way Exists

What if each system β€” your order database, your payment gateway, your shipping API β€” could describe what it does in a standard, discoverable way?

Any AI application could find those capabilities and use them β€” without custom code written for each combination, without re-reading documentation when something changes, without rebuilding the same connector six months later for a different model.

For developers: write the integration once, any AI can use it.

For product and engineering leaders: AI features ship faster because the infrastructure already exists.

For the business: the integration tax stops compounding every time you add a new model.

That is not a theoretical idea. It is an open protocol that already exists β€” and it is changing how engineering teams build AI systems.

It is called the Model Context Protocol β€” MCP.

Part 2 defines exactly what MCP is, how it works as a universal connector for AI systems, and why it solves a problem that REST was never designed to address.

Key Takeaways

  • The NΓ—M problem compounds silently. Ten systems and five AI applications mean fifty integrations, not fifteen. Every new AI application or system multiplies the maintenance burden.
  • AI has two distinct limitations. Frozen knowledge means it cannot see your live data. No ability to act means it cannot do anything with that data even if it could see it.
  • The real problem is standardization. REST APIs solve communication for human developers, but AI agents need a different layer for discovering and using capabilities at runtime.

MCP Article Series Β· Part 1 of 6
Next: What MCP Is β€” the universal connector for AI
