As AI Agents gain serious traction in making artificial intelligence not just responsive but autonomous and proactive, these kinds of questions have become more frequent among developers and executives.
To help anyone diving into the world of AI Agents, here is a quick tutorial on the most important feature you must understand to connect Agents with the external world: Tools, also known as Function Calls.
What are Tools (Function Calls)?
In the context of AI Agents, Tools are external functions or APIs that the agent can “call” to perform specific tasks. Think of them as skills the agent can activate when needed. They’re NOT built into the model itself, but are instead defined by the developer and exposed to the agent.
For example:
- A calculator function
- A file reader
- A web search API
- A custom business logic function
These Tools allow the agent to go beyond text generation and interact with the world in real time.
How Does It Work?
The figure below illustrates how the Tools/Function Call process works with LLMs:

Step 1: Define Your Tools
Tools are essentially defined by:
- A name
- A description (what it does, so the AI can understand when to use it)
- A schema of expected inputs
In the example, suppose we have the following Tools:
- searchOnInternet: Given a search query, it’ll search the Web and return a list of titles and descriptions from the Internet.
- addNumbers: Given two numbers, it’ll sum them and return the result.
Step 2: Merge User Prompt with Tools
Suppose the USER asks for a “Summary of the latest news”. Before sending the message to the LLM, our application adds the available Tools to the prompt, as in the example below:
"messages": [
  { "role": "user", "content": "Summary of the latest news" }
],
"tools": [
  {
    "type": "function",
    "function": {
      "name": "searchOnInternet",
      "description": "Retrieve results of query from internet",
      "parameters": {
        "type": "object",
        "properties": {
          "query": { "type": "string" }
        }
      }
    }
  },
  {
    "type": "function",
    "function": {
      "name": "addNumbers",
      "description": "Return the sum of two numbers",
      "parameters": {
        "type": "object",
        "properties": {
          "a": { "type": "number" },
          "b": { "type": "number" }
        }
      }
    }
  }
]
Step 3: Let LLM Decide
With the prompt above, we let the LLM know which Tools are available and let it infer the best answer for the USER prompt, which in this case might be to call the searchOnInternet function with the query “latest+news” to gather more context. The response looks like:
"message": {
  "role": "assistant",
  "tool_calls": [{
    "type": "function",
    "function": {
      "name": "searchOnInternet",
      "arguments": "{ \"query\": \"latest+news\" }"
    }
  }]
}
Step 4: Execute the Tool and Send Result
With the “tool_calls” entry in the LLM response, your application should parse the arguments and execute the requested Tool (searchOnInternet). After executing the Tool, the application should send the conversation history, now including the Tool result, back to the LLM.
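For illustration, the request sent back to the LLM might look like the sketch below. It keeps the simplified format of the previous steps, so treat the exact field names as an assumption: in the OpenAI format, for instance, the Tool result is a message with role "tool" that also carries a tool_call_id linking it to the call it answers, and the Tool definitions are usually sent again so the model can decide whether it needs another call. The result content reuses the example output of our searchOnInternet Tool.
"messages": [
  { "role": "user", "content": "Summary of the latest news" },
  { "role": "assistant", "tool_calls": [{
    "type": "function",
    "function": {
      "name": "searchOnInternet",
      "arguments": "{ \"query\": \"latest+news\" }"
    }
  }]},
  { "role": "tool", "content": "List of internet results for latest+news" }
],
"tools": [ ...same Tool definitions as in Step 2... ]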
Step 5: LLM Final Answer
Now, with the conversation history and the result of the Tool call, if the LLM understands it has enough context to answer the USER request, it will generate the final response:
"message": {
  "role": "assistant",
  "content": "Sure! Here is a summary of the latest news from the internet..."
}
Implementing it With Langchain4J
Implementing all these steps from scratch can be cumbersome. Fortunately, you can use libraries like Langchain4J to handle the details and streamline the process. Here is a simple implementation for the example:
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class ToolsExample {

    interface ToolsAssistant {
        String chat(String message);
    }

    static class MyTools {

        @Tool("Retrieve results of query from internet")
        public String searchOnInternet(String query) {
            return "List of internet results for " + query;
        }

        @Tool("Return the sum of two numbers")
        public Double addNumbers(Double a, Double b) {
            return a + b;
        }
    }

    public static void main(String[] args) throws Exception {
        var model = OpenAiChatModel.builder()
                .modelName("gpt-4o-mini")
                .apiKey("demo")
                .build();

        var toolsAssistant = AiServices.builder(ToolsAssistant.class)
                .chatLanguageModel(model)
                .tools(new MyTools())
                .build();

        var finalResponse = toolsAssistant.chat("Summary of the latest news");
        System.out.println(finalResponse);
    }
}
Conclusion
Tools (Function Calls) are the bridge between the language model’s intelligence and real-world utility, and understanding how they work before diving into the world of Agents and Agentic AI is crucial. Without tools, an agent is just a chatbot. With tools, it becomes a powerful, proactive assistant capable of taking meaningful actions on your behalf.
Now that you understand how the process works under the hood, it’s easier to see why solutions like Anthropic’s Model Context Protocol (MCP) are gaining traction. In the enterprise world, you wouldn’t want to redefine tool descriptions for every system you integrate with. Having a standard improves maintainability and reusability. But let’s save that topic for the next article.
If you’d like to see more examples using Java and Langchain4J, I’ve published a repository with runnable samples on my GitHub. Feel free to check it out, and if you find it helpful, consider giving it a star! 😊
What have been your biggest challenges when implementing AI Agents? I’m excited to hear your stories and insights in the comments below!