The Vocabulary Words Pertinent to MCP

Anthropic’s Model Context Protocol has generated a lot of press. It serves as the bridge between AI applications and external apps, agents, or data. Anthropic’s flagship analogy is that MCP is a USB-C port for AI: a single standard connector that makes it easy to plug accessories into a host device. The protocol is designed to be agnostic about which LLM or application is involved. An application supports MCP by spinning up an MCP server that acts as a bridge between itself and the agentic system.

However, there is a lot of confusion about the various parts and vocabulary words relevant to MCP. This is partly because these terms are everyday words: an “AI tool” and a “tool” in MCP are distinct things, yet both can surface in the same conversation.

Today, we want to explain the main vocabulary words relevant to MCP with explicit definitions to help clear the confusion.

Tools

“Tools” is a commonly misunderstood term in the context of MCP, where it refers to the specific operations the agentic system can invoke. Tools are not complete external applications, like Salesforce and Snowflake, but rather granular operations within those applications.

The best way to explain tools is with an example:

Imagine a hypothetical Salesforce MCP server. The agentic system might have access to tools such as “create_new_opportunity()” or “update_customer_note().”

Tools are akin to a menu of options that the agent gets exposed to. They are the counterpart of SDK functions or API routes. Once exposed through MCP, an agent can dynamically determine when to invoke a tool and which tools to invoke in a specific order to accomplish a particular task. This invocation is called an action, which will be clarified below.
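To make the “menu of options” concrete, here is a minimal sketch of how a hypothetical Salesforce MCP server might advertise its tools. MCP messages are JSON-RPC 2.0, and tool descriptors carry a name, description, and input schema; the specific tool names and schema fields below are invented for illustration.

```python
import json

# Sketch of a JSON-RPC response to a "tools/list" request from a
# hypothetical Salesforce MCP server. Tool names are illustrative.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_new_opportunity",
                "description": "Create a new sales opportunity in Salesforce.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "account_id": {"type": "string"},
                        "amount": {"type": "number"},
                    },
                    "required": ["account_id"],
                },
            },
            {
                "name": "update_customer_note",
                "description": "Append a note to a customer record.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "customer_id": {"type": "string"},
                        "note": {"type": "string"},
                    },
                    "required": ["customer_id", "note"],
                },
            },
        ]
    },
}

# The "menu" the agent reasons over is essentially the list of tool names.
menu = [tool["name"] for tool in tools_list_response["result"]["tools"]]
print(menu)  # ['create_new_opportunity', 'update_customer_note']
```

The agent never sees Salesforce’s full API surface; it sees only this curated list, which is what lets it decide at runtime which operation fits the task.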

Resources

A resource allows the AI agent to pull in specific files from the integrated application. Again, we’ll defer to an example to highlight this:

A Google Drive MCP server could allow an LLM or an AI agent to pull documents from the drive. The selection of a particular resource can be done manually (in the case of an LLM interface) or automatically (in the case of an agent).

MCP also allows the AI agent to dynamically pull resources in response to an open-ended user query. This can look like, “Find all of the sales decks that our reps are using and flag for me any discrepancies between them.” Resources are distinct from tools because they’re static files. Resources are similar to how an API might pair with a CDN to fetch static assets.
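In MCP, resources are addressed by URI and fetched with a read request. Here is a hedged sketch of the server side of that exchange for the Google Drive example; the URI scheme, file names, and contents are all made up, and a real server would fetch from Drive rather than an in-memory dict.

```python
# Illustrative in-memory "drive": resource URI -> file contents.
# A real Google Drive MCP server would fetch these from the Drive API.
drive = {
    "gdrive:///sales-decks/q2-pitch.txt": "Q2 pitch deck contents...",
    "gdrive:///sales-decks/emea-pitch.txt": "EMEA pitch deck contents...",
}

def read_resource(uri: str) -> dict:
    """Return an MCP-style resources/read result for a known URI."""
    return {
        "contents": [
            {"uri": uri, "mimeType": "text/plain", "text": drive[uri]}
        ]
    }

result = read_resource("gdrive:///sales-decks/q2-pitch.txt")
print(result["contents"][0]["text"])  # Q2 pitch deck contents...
```

Because every resource is just a URI plus static content, the agent (or the user, in a manual LLM interface) can enumerate and pull whichever files a task requires.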

Prompts

An MCP prompt is simply a pre-written LLM prompt, crafted to assist in specific situations. Rather than having to manually craft the perfect prompt, the MCP server provides them out of the box.

What’s an example of a pre-written prompt that’s useful?

Let’s return to the Salesforce example. A Salesforce MCP server could expose a “Summarize Account” prompt, handcrafted by Salesforce to produce good results for that task. It goes into deep detail about what each element of data from Salesforce means and what a good account summary looks like.

Prompts are especially helpful if an AI application might otherwise misinterpret an application’s unique data conventions.
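A pre-written prompt is essentially a parameterized template the server fills in on request. The sketch below models that for a hypothetical “summarize_account” prompt; the template wording, argument names, and data conventions it explains are all invented for illustration.

```python
# Hypothetical pre-written prompt a Salesforce MCP server might expose.
# The template text encodes the application's data conventions so the
# AI application doesn't have to guess what each field means.
PROMPT_TEMPLATE = (
    "You are summarizing the Salesforce account {account_name}. "
    "Note: 'Stage' reflects pipeline position and 'ARR' means annual "
    "recurring revenue. Produce a one-paragraph account summary."
)

def get_prompt(name: str, arguments: dict) -> dict:
    """Return an MCP-style prompts/get result with arguments filled in."""
    if name != "summarize_account":
        raise ValueError(f"unknown prompt: {name}")
    text = PROMPT_TEMPLATE.format(**arguments)
    return {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": text}}
        ]
    }

prompt = get_prompt("summarize_account", {"account_name": "Acme Corp"})
```

The caller gets back ready-to-use chat messages, so the domain knowledge baked into the template travels with the integration instead of living in each user’s head.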

Actions

Actions are invocations of an MCP tool or set of tools by an AI agent. Actions happen at runtime and are dynamic—they produce results that the AI agent is able to reason over. An action is the equivalent of an HTTP request. The networking analogy is particularly strong: while API routes are static, requests carry headers and a payload and generate a response.

For example, an action may appear in the AI agent’s execution trace as a call to search_web(q='nvidia earnings Q2 2025').
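On the wire, that action is a tools/call request and its response, which is what makes the HTTP analogy concrete: a static route (the tool) versus a runtime request with a payload and a result. The sketch below shows the request/response pair for the search_web example; the result text is invented.

```python
# An action at runtime: a JSON-RPC "tools/call" request naming the tool
# and carrying its arguments (the payload of the analogy).
action_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "search_web",
        "arguments": {"q": "nvidia earnings Q2 2025"},
    },
}

# Like an HTTP response, the result comes back with matching id and
# content the agent can reason over before deciding its next action.
action_response = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "content": [
            {"type": "text",
             "text": "Illustrative search results for the query..."}
        ],
        "isError": False,
    },
}
```

The tool definition existed before the conversation started; the action exists only at runtime, tied to this specific query and this specific result.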
