We all know LLMs can write emails, summarize documents, and even generate code. But on their own they can’t do anything. They don’t run commands, update systems, or create resources—unless we give them the means to act.
That’s where APIs come in.
Just as data became the new oil, APIs that an AI can call are becoming the new fuel.
From Text Generation to Real-World Action
Large language models (LLMs) like GPT-4 and Claude are great at working with language. They excel at transforming inputs into outputs: writing documentation, explaining code, creating outlines, even answering tough technical questions.
But these models are isolated. By default, they don’t interact with the outside world. To bridge that gap, we give them tools—functions they can call. This capability of an LLM is known as function calling, and it opens the door to letting AI actually do something meaningful in your systems.
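Concretely, function calling works by handing the model a short, machine-readable description of each function it may use. Here's a minimal sketch of such a description (the `get_vm_status` name and its parameters are invented for illustration; the exact wrapper format depends on the model provider you use):

```ts
// An illustrative function schema in the JSON-Schema style that most
// function-calling APIs accept. The model never executes this code itself;
// it only reads the description and decides when to "call" the function by
// returning its name plus arguments as structured output.
const getVmStatusTool = {
  name: "get_vm_status", // hypothetical function name
  description: "Return the power state of a virtual machine by name.",
  parameters: {
    type: "object",
    properties: {
      vmName: { type: "string", description: "Name of the virtual machine" },
    },
    required: ["vmName"],
  },
};

// Your application receives the model's chosen call, runs the real API,
// and feeds the result back to the model so it can finish its answer.
```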
APIs: The Real Interface of Intelligence
APIs have been core to software development for decades. Whether it’s a REST endpoint, a local library, or an SDK, APIs expose functionality to other code.
When an AI is made aware of those APIs, typically through a short schema or function signature, it gains the ability to make decisions and take action. You don’t have to invent new APIs for the AI; the magic happens when you expose existing ones as tools it can reason about and invoke.
Azure Copilot
Take Azure Copilot, Microsoft’s AI assistant embedded in the Azure portal and available right inside VS Code. It doesn’t just answer questions; it acts on them.
You can ask things like:
- “What virtual machines are running in production?”
- “What’s my projected spend next month?”
- “Create a new VM with 2 cores and 8GB RAM.”
Copilot can respond and act because Microsoft exposed the Azure API surface as functions that Copilot understands. These APIs already existed—used by the Azure CLI, SDKs, PowerShell modules—but now they’re AI-callable. And the AI decides when and how to use them based on your natural language prompts.
Show Me the Code
Let’s look at a lightweight example. In the Mastra.ai framework, AI tools are just TypeScript functions that perform an action, typically by wrapping an existing API or database operation.
Here’s a tool that lets the AI read a list of to-dos:
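A minimal sketch follows; the `db` helper is a stand-in for whatever data layer your project already has, and the `createTool` options shown reflect common Mastra usage rather than the exact Mastra Todo source:

```ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import { db } from "../db"; // your existing data layer (a stand-in here)

// Read-only tool: the AI can list to-dos and nothing else.
export const listTodosTool = createTool({
  id: "list-todos",
  description: "Return all to-do items, including their completion status.",
  inputSchema: z.object({}), // no input needed
  outputSchema: z.array(
    z.object({
      id: z.string(),
      title: z.string(),
      done: z.boolean(),
    })
  ),
  execute: async () => {
    // Wraps an existing database call; the AI only sees the tool's contract.
    return db.listTodos();
  },
});
```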
And here’s one that lets the AI create a new to-do:
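Again a sketch under the same assumptions (`db.createTodo` is a placeholder for your existing create logic):

```ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import { db } from "../db"; // same stand-in data layer as above

// Write tool: the AI supplies a title, the schema validates it,
// and the existing create logic does the rest.
export const createTodoTool = createTool({
  id: "create-todo",
  description: "Create a new to-do item with the given title.",
  inputSchema: z.object({
    title: z.string().min(1).describe("Short description of the task"),
  }),
  outputSchema: z.object({
    id: z.string(),
    title: z.string(),
    done: z.boolean(),
  }),
  execute: async ({ context }) => {
    // `context` holds the validated input from the AI's tool call.
    return db.createTodo({ title: context.title, done: false });
  },
});
```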
Each function wraps existing logic—like a database call—and gives the AI a structured, safe way to invoke it. The AI doesn’t need full access to your codebase. It just needs a well-defined tool and a clear description of what it does.
From Reporting to Acting
This is the leap: AI tools are no longer just about surfacing information. They’re about enabling action.
- Reading data: “What’s in the database?”
- Writing data: “Add a new task to the list.”
- Taking real-world actions: “Spin up a new environment.”
Every time you expose an API function to the AI, you’re expanding what it’s capable of. And often, all the hard work—the APIs, the data models, the logic—already exists.
What’s Next?
As developers, we’re sitting on a goldmine of callable functionality: everything from internal microservices to public cloud APIs. All it takes is a little scaffolding to make those APIs AI-aware.
Once wrapped and described, these APIs become part of the AI’s toolkit. They allow it to move from passive assistant to active agent.
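With Mastra, for instance, handing the wrapped tools to an agent looks roughly like this (a sketch: the model setup via `@ai-sdk/openai` and the exact `Agent` options are assumptions based on common Mastra usage, not the Mastra Todo source):

```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { listTodosTool, createTodoTool } from "./tools/todos";

// The agent now "knows" both tools; from the user's prompt it decides
// whether to read the list, add an item, or simply answer in plain text.
export const todoAgent = new Agent({
  name: "todo-agent",
  instructions:
    "You manage a to-do list. Use the tools to read or create tasks " +
    "instead of guessing their current state.",
  model: openai("gpt-4o"),
  tools: { listTodosTool, createTodoTool },
});
```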
So, here’s the question:
What could your APIs do—if an AI could call them?
Note
Want to see this in action? Check out the open-source Mastra Todo project to explore how tools are defined and used with LLMs.