Most of the value lies in its differentiation from OpenAPI and the conventions it brings.
By providing an MCP endpoint you signify "we made the API self-describing enough to be usable by AI agents". Most existing OpenAPI specs out there don't clear that bar, as endpoint/parameter descriptions are underdocumented and are unusable without supplementary documentation that is external to the OpenAPI spec.
Okay... you are tasked with integrating every REST API exposed by Amazon into a VS Code chatbot. If it's so easy to do with REST APIs, how long will it take you to configure that?
That's again apples to oranges. The AWS API was not made to be LLM-friendly.
The apples to apples comparison would be this:
A:
- Assume that AWS exposes an LLM-oriented OpenAPI spec.
- Take a preexisting OpenAPI client with support for reflection.
- Write the plumbing to go between agent tool calls and OpenAPI calls. Schema from OpenAPI becomes schema for tool calls (see the sketch after this list).
- You use a preexisting OpenAPI client library; AWS can use a preexisting OpenAPI server library.
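To make that concrete, here is a minimal sketch of that plumbing in TypeScript: it fetches a (hypothetical) LLM-oriented OpenAPI document and maps each documented operation onto the name/description/JSON Schema triple that tool-calling APIs expect. The spec URL and the `openApiToTools` helper are made up for illustration.

```typescript
// Minimal sketch: turn an LLM-oriented OpenAPI document into agent tool definitions.
// Assumes operationIds and descriptions exist (the "LLM-friendly spec" premise above).

interface ToolDef {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>; // JSON Schema for the tool's arguments
}

async function openApiToTools(specUrl: string): Promise<ToolDef[]> {
  const spec: any = await (await fetch(specUrl)).json();
  const tools: ToolDef[] = [];

  for (const [path, ops] of Object.entries(spec.paths ?? {})) {
    for (const [method, op] of Object.entries(ops as Record<string, any>)) {
      if (!op?.operationId) continue; // skip underdocumented operations

      // Fold the operation's parameters into one JSON Schema object,
      // which is exactly the shape tool-calling APIs expect.
      const properties: Record<string, unknown> = {};
      const required: string[] = [];
      for (const p of op.parameters ?? []) {
        properties[p.name] = { ...(p.schema ?? {}), description: p.description ?? "" };
        if (p.required) required.push(p.name);
      }

      tools.push({
        name: op.operationId,
        description: op.description ?? op.summary ?? `${method.toUpperCase()} ${path}`,
        inputSchema: { type: "object", properties, required },
      });
    }
  }
  return tools;
}

// Usage: feed the result straight into whatever tool-calling interface the agent uses.
// openApiToTools("https://example.com/llm-oriented-openapi.json").then(console.log);
```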
B:
- Assume that AWS exposes an MCP server.
- Program an MCP client.
- Write the plumbing to go between agent tool calls and MCP calls. Schema from MCP becomes schema for tool calls (see the sketch after this list).
- You had to program an MCP client, and AWS had to program an MCP server. Whereas OpenAPI existed before the concept of agent tool calls, MCP did not.
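And here is the MCP side of the same plumbing, as a minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk). The server command is hypothetical and error handling is omitted.

```typescript
// Minimal sketch of the MCP-side plumbing with the official TypeScript SDK.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function mcpToTools() {
  const client = new Client({ name: "agent-host", version: "0.1.0" });
  // Launch a local MCP server over stdio (hypothetical command).
  await client.connect(new StdioClientTransport({ command: "aws-mcp-server" }));

  // Schema from MCP becomes schema for tool calls: listTools already returns
  // name/description/inputSchema in the shape tool-calling APIs expect.
  const { tools } = await client.listTools();

  // Dispatching an agent tool call is a single callTool round trip.
  const result = await client.callTool({
    name: tools[0].name,
    arguments: {}, // filled in from the model's tool-call arguments
  });
  return { tools, result };
}
```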
That's why I said MCP is a lot of duplicate engineering effort for seemingly no gain. Preexisting API mechanisms can be used to provide LLM-oriented APIs, that's orthogonal to MCP-as-a-protocol. MCP is quite ugly as a protocol, and has very little reason to exist.
MCP includes a standard for some more advanced capabilities like:
- tool discovery: for instance the server can "push" an update to the client, while with OpenAPI you have to wait for the client to refetch the schema (see the sketch after this list)
- background tasks: you can have a job endpoint in your API to submit tasks and check their status, but standardizing the way to do that opens up additional possibilities on the client side (imagine showing a standard progress bar no matter which tool is being used)
- streaming / incremental results / cancellation
- ...
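As a rough illustration of the first two points, here is a sketch with the TypeScript SDK: a handler for the tools/list_changed push, plus a progress callback on a long-running call. The server command and tool name are made up, and the `onprogress` request option is an assumption about the SDK version you're on.

```typescript
// Sketch of server-pushed tool discovery and standardized progress reporting.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { ToolListChangedNotificationSchema } from "@modelcontextprotocol/sdk/types.js";

async function watchServer() {
  const client = new Client({ name: "agent-host", version: "0.1.0" });
  await client.connect(new StdioClientTransport({ command: "some-mcp-server" })); // hypothetical command

  // Tool discovery: the server pushes a notifications/tools/list_changed message
  // instead of waiting for the client to refetch the schema.
  client.setNotificationHandler(ToolListChangedNotificationSchema, async () => {
    const { tools } = await client.listTools();
    console.log("tool list updated:", tools.map((t) => t.name));
  });

  // Long-running call: progress notifications can drive a standard progress bar,
  // no matter which tool is being used (onprogress option assumed here).
  await client.callTool(
    { name: "long_running_job", arguments: {} }, // hypothetical tool name
    undefined,
    { onprogress: (p) => console.log(`progress: ${p.progress}/${p.total ?? "?"}`) },
  );
}
```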
All of this is HTTP-based and could be implemented on a bespoke API, but the challenge is cross-API standardization so that agents can be trained on representative data. The value of MCP is that it creates a common behavioral contract, not just a transport or schema.