TechTrendFeed

How to Build an AI Agent with Function Calling and GPT-5

by Admin
October 21, 2025


AI Agents and Large Language Models (LLMs)

Large language models (LLMs) are advanced AI systems built on deep neural networks such as transformers and trained on vast amounts of text to generate human-like language. LLMs like ChatGPT, Claude, Gemini, and Grok can handle many challenging tasks and are used across fields such as science, healthcare, education, and finance.

An AI agent extends the capabilities of LLMs to solve tasks that are beyond their pre-trained knowledge. An LLM can write a Python tutorial from what it learned during training. But if you ask it to book a flight, the task requires access to your calendar, web search, and the ability to take actions; these fall beyond the LLM's pre-trained knowledge. Some common actions include:

  • Weather forecast: The LLM connects to a web search tool to fetch the latest weather forecast.
  • Booking agent: An AI agent that can check a user's calendar, search the web to visit a booking site like Expedia to find available options for flights and hotels, present them to the user for confirmation, and complete the booking on the user's behalf.

How an AI Agent Works

AI agents form a system that uses a Large Language Model to plan, reason, and take steps to interact with its environment, using tools suggested by the model's reasoning to solve a specific task.

Basic Structure of an AI Agent

Image Generated By Gemini
  • A Large Language Model (LLM): the LLM is the brain of an AI agent. It takes a user's prompt, plans and reasons through the request, and breaks the problem into steps that determine which tools it should use to complete the task.
  • A tool is the framework the agent uses to perform an action based on the plan and reasoning from the Large Language Model. If you ask an LLM to book a table for you at a restaurant, possible tools include a calendar to check your availability and a web search tool to access the restaurant's website and make a reservation for you.

Illustrated Decision Making of a Booking AI Agent

Image Generated By ChatGPT

AI agents can access different tools depending on the task. A tool might be a data store, such as a database. For example, a customer-support agent might access a customer's account details and purchase history and decide when to retrieve that information to help resolve an issue.

AI agents are used to solve a wide range of tasks, and there are many powerful agents available. Coding agents, particularly agentic IDEs such as Cursor, Windsurf, and GitHub Copilot, help engineers write and debug code faster and build projects quickly. CLI coding agents like Claude Code and Codex CLI can interact with a user's desktop and terminal to carry out coding tasks. ChatGPT supports agents that can perform actions such as booking reservations on a user's behalf. Agents are also integrated into customer support workflows to communicate with customers and resolve their issues.

Function Calling

Function calling is a technique for connecting a large language model (LLM) to external tools such as APIs or databases. It is used in building AI agents to connect LLMs to tools. In function calling, each tool is defined as a code function (for example, a weather API to fetch the latest forecast) together with a JSON schema that specifies the function's parameters and instructs the LLM on when and how to call the function for a given task.

The type of function defined depends on the task the agent is designed to perform. For example, for a customer support agent we could define a function that extracts information from unstructured data, such as PDFs containing details about a business's products.
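As a rough sketch of what such a function-and-schema pair could look like, here is a hypothetical product-lookup tool. The function name, the stubbed catalog, and the parameter are illustrative assumptions of mine, not part of the web search agent built below.

```python
# Hypothetical support-agent tool: a stubbed product lookup. A real version
# would parse PDFs or query a document store instead of a hard-coded dict.
def lookup_product_info(product_name: str) -> dict:
    catalog = {"widget": "A widget costs $10 and ships in 2 days."}
    details = catalog.get(product_name.lower(), "Not found")
    return {"product": product_name, "details": details}

# The matching schema, following the same OpenAI tool-schema structure
# used later in this article for web_search.
lookup_tool_schema = {
    "type": "function",
    "name": "lookup_product_info",
    "description": "Fetch details about a product from the business's documentation.",
    "strict": True,
    "parameters": {
        "type": "object",
        "properties": {
            "product_name": {"type": "string", "description": "Name of the product."},
        },
        "required": ["product_name"],
        "additionalProperties": False,
    },
}
```

The schema tells the model when the function is relevant; the function body is where the real data access would live.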

In this post, I'll demonstrate how to use function calling to build a simple web search agent using GPT-5 as the large language model.

Basic Structure of a Web Search Agent

Image Generated By Gemini

The main logic behind the web search agent:

  • Define a code function to handle the web search.
  • Define custom instructions that guide the large language model in determining when to call the web search function based on the query. For example, if the query asks about the current weather, the web search agent will recognize the need to search the internet to get the latest weather reports. However, if the query asks it to write a tutorial about a programming language like Python, something it can answer from its pre-trained knowledge, it will not call the web search function and will respond directly instead.

Prerequisites

Create an OpenAI account and generate an API key:
1. Create an OpenAI account if you don't have one.
2. Generate an API key.

Set Up and Activate the Environment

python3 -m venv env
source env/bin/activate

Export the OpenAI API Key

export OPENAI_API_KEY="Your OpenAI API Key"

Set Up Tavily for Web Search
Tavily is a specialized web-search tool for AI agents. Create an account on Tavily.com, and once your profile is set up, an API key will be generated that you can copy into your environment. New accounts receive 1,000 free credits that can be used for up to 1,000 web searches.

Export the Tavily API Key

export TAVILY_API_KEY="Your Tavily API Key"

Install Packages

pip3 install openai
pip3 install tavily-python
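Before running the agent, it can help to confirm both keys are actually visible to Python. This short check is an optional convenience I'm adding, not part of the original setup:

```python
import os

def check_keys() -> list:
    """Return the names of any required API keys missing from the environment."""
    required = ("OPENAI_API_KEY", "TAVILY_API_KEY")
    return [name for name in required if not os.getenv(name)]

missing = check_keys()
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("All API keys found.")
```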

Building a Web Search Agent with Function Calling Step by Step

Step 1: Create the Web Search Function with Tavily

A web search function is implemented using Tavily, serving as the tool for function calling in the web search agent.

from tavily import TavilyClient
import os

tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))

def web_search(query: str, num_results: int = 10):
    try:
        result = tavily.search(
            query=query,
            search_depth="basic",
            max_results=num_results,
            include_answer=False,
            include_raw_content=False,
            include_images=False
        )

        results = result.get("results", [])

        return {
            "query": query,
            "results": results,
            "sources": [
                {"title": r.get("title", ""), "url": r.get("url", "")}
                for r in results
            ]
        }

    except Exception as e:
        return {
            "error": f"Search error: {e}",
            "query": query,
            "results": [],
            "sources": [],
        }

Web search function code breakdown

Tavily is initialized with its API key. In the web_search function, the following steps are carried out:

  • The Tavily search function is called to search the internet and retrieve the top 10 results.
  • The search results and their corresponding sources are returned.

This returned output will serve as relevant context for the web search agent, which we will define later in this article, to fetch up-to-date information for queries (prompts) that require real-time data such as weather forecasts.
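To see the shape of that return value without calling the Tavily API, here is a small stub that mimics the mapping from raw results to sources; the sample titles and URLs are made up for illustration:

```python
# Mimic the part of web_search that maps Tavily results to sources,
# using a hand-made response instead of a live API call.
fake_response = {
    "results": [
        {"title": "London Weather Today", "url": "https://example.com/weather", "content": "Overcast, 18C"},
        {"title": "UK Forecast", "url": "https://example.com/forecast", "content": "Light rain later"},
    ]
}

results = fake_response.get("results", [])
output = {
    "query": "weather in London",
    "results": results,
    "sources": [{"title": r.get("title", ""), "url": r.get("url", "")} for r in results],
}

print(output["sources"][0]["title"])  # London Weather Today
```

The "sources" list carries only titles and URLs, which is what the agent later cites in its answers.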

Step 2: Create the Tool Schema

The tool schema defines custom instructions for an AI model on when it should call a tool, in this case the tool used in the web search function. It also specifies the conditions and actions to be taken when the model calls a tool. A JSON tool schema is defined below based on the OpenAI tool schema structure.

tool_schema = [
    {
        "type": "function",
        "name": "web_search",

        "description": """Execute a web search to fetch up-to-date information. Synthesize a concise, 
        self-contained answer from the content of the results of the visited pages.
        Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL). 
        If sources conflict, surface the uncertainty and prefer the most recent evidence.
        """,

        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Query to be searched on the web.",
                },
            },
            "required": ["query"],
            "additionalProperties": False
        },
    },
]

Tool Schema Properties

  • type: Specifies that the type of tool is a function.
  • name: The name of the function that will be used for the tool call, which is web_search.
  • description: Describes what the AI model should do when calling the web search tool. It instructs the model to search the internet using the web_search function to fetch up-to-date information and extract relevant details to generate the best response.
  • strict: Set to true; this property instructs the LLM to strictly follow the tool schema's instructions.
  • parameters: Defines the parameters that will be passed into the web_search function. In this case, there is only one parameter: query, which represents the search term to look up on the internet.
  • required: Instructs the LLM that query is a mandatory parameter for the web_search function.
  • additionalProperties: Set to false, meaning that the tool's arguments object cannot include any parameters other than those defined under parameters.properties.
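With strict mode the model is expected to respect the schema, but a lightweight check on the parsed arguments costs little. This validator is my own addition, not part of the article's agent:

```python
import json

def validate_tool_args(raw_arguments: str, parameters_schema: dict) -> dict:
    """Parse a tool call's JSON arguments and enforce the schema's
    required keys and additionalProperties=False by hand."""
    args = json.loads(raw_arguments)
    allowed = set(parameters_schema["properties"])
    required = set(parameters_schema.get("required", []))

    missing = required - set(args)
    if missing:
        raise ValueError(f"Missing required arguments: {sorted(missing)}")
    extra = set(args) - allowed
    if extra and not parameters_schema.get("additionalProperties", True):
        raise ValueError(f"Unexpected arguments: {sorted(extra)}")
    return args

params = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
    "additionalProperties": False,
}

print(validate_tool_args('{"query": "latest iPhone"}', params))
```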

Step 3: Create the Web Search Agent Using GPT-5 and Function Calling

Finally, I'll build an agent that we can chat with, which can search the web when it needs up-to-date information. I'll use GPT-5-mini, a fast and accurate model from OpenAI, together with function calling to invoke the tool schema and the web search function already defined.

from datetime import datetime, timezone
import json
from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# tracker for the last model response id to maintain conversation state
prev_response_id = None

# a list for storing tool results from the function call
tool_results = []

while True:
    # if the tool results list is empty, prompt for a message
    if len(tool_results) == 0:
        user_message = input("User: ")

        # commands for exiting the chat
        if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
            print("Exiting chat. Goodbye!")
            break

    else:
        user_message = tool_results.copy()

        # clear the tool results for the next call
        tool_results = []

    # obtain the current date, passed to the model as an instruction to assist decision making
    today_date = datetime.now(timezone.utc).date().isoformat()

    response = client.responses.create(
        model="gpt-5-mini",
        input=user_message,
        instructions=f"Current date is {today_date}.",
        tools=tool_schema,
        previous_response_id=prev_response_id,
        text={"verbosity": "low"},
        reasoning={
            "effort": "low",
        },
        store=True,
    )

    prev_response_id = response.id

    # handle the model response output
    for output in response.output:

        if output.type == "reasoning":
            print("Assistant: ", "Reasoning ....")

            for reasoning_summary in output.summary:
                print("Assistant: ", reasoning_summary)

        elif output.type == "message":
            for item in output.content:
                print("Assistant: ", item.text)

        elif output.type == "function_call":
            # obtain the function object by name
            function = globals().get(output.name)
            # load the function arguments
            args = json.loads(output.arguments)
            function_response = function(**args)
            tool_results.append(
                {
                    "type": "function_call_output",
                    "call_id": output.call_id,
                    "output": json.dumps(function_response)
                }
            )

Step-by-Step Code Breakdown

from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
prev_response_id = None
tool_results = []
  • Initializes the OpenAI client with an API key.
  • Initializes two variables, prev_response_id and tool_results. prev_response_id keeps track of the model's response id to maintain conversation state, and tool_results is a list that stores outputs returned from the web_search function call.

The chat runs inside the loop. A user enters a message; the model, called with the tool schema, accepts the message, reasons over it, and decides whether to call the web search tool; the tool's output is then passed back to the model, which generates a context-aware response. This continues until the user exits the chat.
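The alternation between user input and tool output can be isolated into a tiny helper to make the state machine explicit. This refactoring is my own illustration, not code from the agent:

```python
def next_turn_input(tool_results: list, user_text: str):
    """Decide what to send to the model next: pending tool outputs if any
    exist, otherwise the user's message. Returns (input, remaining_queue)."""
    if tool_results:
        # Send the tool outputs back to the model and clear the queue.
        return tool_results.copy(), []
    return user_text, []

# First turn: no tool results yet, so the user's message is sent.
message, queue = next_turn_input([], "What's the weather in London?")
assert message == "What's the weather in London?"

# After a function call: the tool output is sent instead, and the queue resets.
pending = [{"type": "function_call_output", "call_id": "c1", "output": "{}"}]
message, queue = next_turn_input(pending, "")
assert message == pending and queue == []
```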

Code Walkthrough of the Loop

if len(tool_results) == 0:
    user_message = input("User: ")
    if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
        print("Exiting chat. Goodbye!")
        break

else:
    user_message = tool_results.copy()
    tool_results = []

today_date = datetime.now(timezone.utc).date().isoformat()

response = client.responses.create(
    model="gpt-5-mini",
    input=user_message,
    instructions=f"Current date is {today_date}.",
    tools=tool_schema,
    previous_response_id=prev_response_id,
    text={"verbosity": "low"},
    reasoning={
        "effort": "low",
    },
    store=True,
)

prev_response_id = response.id
  • Checks whether tool_results is empty. If it is, the user is prompted to type a message, with an option to quit using exit or q.
  • If tool_results is not empty, user_message is set to the collected tool outputs to be sent to the model, and tool_results is cleared to avoid resending the same tool outputs on the next loop iteration.
  • The current date (today_date) is obtained for the model to make time-aware decisions.
  • Calls client.responses.create to generate the model's response; it accepts the following parameters:
    • model: set to gpt-5-mini.
    • input: accepts the user's message.
    • instructions: set to the current date (today_date).
    • tools: set to the tool schema defined earlier.
    • previous_response_id: set to the previous response's id so the model can maintain conversation state.
    • text: verbosity is set to low to keep the model's response concise.
    • reasoning: GPT-5-mini is a reasoning model; the reasoning effort is set to low for faster responses. For more complex tasks it can be set to high.
    • store: tells the API to store the current response so it can be retrieved later, which helps with conversation continuity.
  • prev_response_id is set to the current response id so the next call can thread onto the same conversation.
for output in response.output:
    if output.type == "reasoning":
        print("Assistant: ", "Reasoning ....")

        for reasoning_summary in output.summary:
            print("Assistant: ", reasoning_summary)

    elif output.type == "message":
        for item in output.content:
            print("Assistant: ", item.text)

    elif output.type == "function_call":
        # obtain the function object by name
        function = globals().get(output.name)
        # load the function arguments
        args = json.loads(output.arguments)
        function_response = function(**args)
        # append the function call's id and the function's response to the tool results list
        tool_results.append(
            {
                "type": "function_call_output",
                "call_id": output.call_id,
                "output": json.dumps(function_response)
            }
        )

This processes the model's response output and does the following:

  • If the output type is reasoning, print each item in the reasoning summary.
  • If the output type is message, iterate through the content and print each text item.
  • If the output type is a function call, obtain the function object by its name, parse its arguments, and pass them to the function (web_search) to generate a response. In this case, the web search response contains up-to-date information relevant to the user's message. Finally, it appends the function call's response and function call id to tool_results, which lets the next loop iteration send the tool result back to the model.
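Looking up functions with globals().get works for a single-file script, but an explicit registry is safer, since it limits which functions a model tool call can ever trigger. This is an alternative I'm suggesting, not code from the article; the web_search here is a stub standing in for the real Tavily-backed function:

```python
import json

def web_search(query: str):
    """Stubbed tool standing in for the real Tavily-backed web_search."""
    return {"query": query, "results": [], "sources": []}

# Only functions listed here can ever be invoked by a model tool call.
TOOL_REGISTRY = {"web_search": web_search}

def dispatch_tool_call(name: str, raw_arguments: str, call_id: str) -> dict:
    """Resolve a tool call against the registry and package its output."""
    fn = TOOL_REGISTRY.get(name)
    if fn is None:
        raise ValueError(f"Model requested unknown tool: {name}")
    args = json.loads(raw_arguments)
    return {
        "type": "function_call_output",
        "call_id": call_id,
        "output": json.dumps(fn(**args)),
    }

result = dispatch_tool_call("web_search", '{"query": "latest iPhone"}', "call_123")
print(result["call_id"])  # call_123
```

An unknown tool name raises immediately instead of silently returning None and crashing on the call.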

Full Code for the Web Search Agent

from datetime import datetime, timezone
import json
from openai import OpenAI
import os
from tavily import TavilyClient

tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))

def web_search(query: str, num_results: int = 10):
    try:
        result = tavily.search(
            query=query,
            search_depth="basic",
            max_results=num_results,
            include_answer=False,
            include_raw_content=False,
            include_images=False
        )

        results = result.get("results", [])

        return {
            "query": query,
            "results": results,
            "sources": [
                {"title": r.get("title", ""), "url": r.get("url", "")}
                for r in results
            ]
        }

    except Exception as e:
        return {
            "error": f"Search error: {e}",
            "query": query,
            "results": [],
            "sources": [],
        }


tool_schema = [
    {
        "type": "function",
        "name": "web_search",
        "description": """Execute a web search to fetch up-to-date information. Synthesize a concise,
        self-contained answer from the content of the results of the visited pages.
        Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL).
        If sources conflict, surface the uncertainty and prefer the most recent evidence.
        """,
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Query to be searched on the web.",
                },
            },
            "required": ["query"],
            "additionalProperties": False
        },
    },
]

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# tracker for the last model response id to maintain conversation state
prev_response_id = None

# a list for storing tool results from the function call
tool_results = []

while True:
    # if the tool results list is empty, prompt for a message
    if len(tool_results) == 0:
        user_message = input("User: ")

        # commands for exiting the chat
        if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
            print("Exiting chat. Goodbye!")
            break

    else:
        # set the user message to the tool results to be sent to the model
        user_message = tool_results.copy()

        # clear the tool results for the next call
        tool_results = []

    # obtain the current date, passed to the model as an instruction to assist decision making
    today_date = datetime.now(timezone.utc).date().isoformat()

    response = client.responses.create(
        model="gpt-5-mini",
        input=user_message,
        instructions=f"Current date is {today_date}.",
        tools=tool_schema,
        previous_response_id=prev_response_id,
        text={"verbosity": "low"},
        reasoning={
            "effort": "low",
        },
        store=True,
    )

    prev_response_id = response.id

    # handle the model response output
    for output in response.output:

        if output.type == "reasoning":
            print("Assistant: ", "Reasoning ....")

            for reasoning_summary in output.summary:
                print("Assistant: ", reasoning_summary)

        elif output.type == "message":
            for item in output.content:
                print("Assistant: ", item.text)

        # if the output type is a function call, run the function and collect its result
        elif output.type == "function_call":
            # obtain the function object by name
            function = globals().get(output.name)
            # load the function arguments
            args = json.loads(output.arguments)
            function_response = function(**args)
            # append the function call's id and the function's response to the tool results list
            tool_results.append(
                {
                    "type": "function_call_output",
                    "call_id": output.call_id,
                    "output": json.dumps(function_response)
                }
            )

When you run the code, you can chat with the agent and ask questions that require the latest information, such as the current weather or the latest product releases. The agent responds with up-to-date information along with the corresponding sources from the internet. Below is a sample output from the terminal.

User: What's the weather like in London today?
Assistant:  Reasoning ....
Assistant:  Reasoning ....
Assistant:  Right now in London: overcast, about 18°C (64°F), humidity ~88%, light SW wind ~16 km/h, no precipitation reported. Source: WeatherAPI (current conditions) — https://www.weatherapi.com/

User: What's the latest iPhone model?
Assistant:  Reasoning ....
Assistant:  Reasoning ....
Assistant:  The latest iPhone models are the iPhone 17 lineup (including iPhone 17, iPhone 17 Pro, iPhone 17 Pro Max) and the new iPhone Air — announced by Apple on Sept 9, 2025. Source: Apple Newsroom — https://www.apple.com/newsroom/2025/09/apple-debuts-iphone-17/

User: Multiply 500 by 12.
Assistant:  Reasoning ....
Assistant:  6000
User: exit
Exiting chat. Goodbye!

You can see the results with their corresponding web sources. When you ask it to perform a task that doesn't require up-to-date information, such as math calculations or writing code, the agent responds directly without any web search.

Note: The web search agent is a simple, single-tool agent. Advanced agentic systems orchestrate multiple specialized tools and use efficient memory to maintain context, plan, and solve more complex tasks.
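Extending the single-tool setup is mostly a matter of growing the tools list and the dispatch table. Here is a sketch with a hypothetical second calculator tool; both tool bodies are stubs of my own, and the eval-based calculator is only acceptable here because of the character whitelist:

```python
def web_search(query: str) -> dict:
    """Stub standing in for the Tavily-backed search tool."""
    return {"query": query, "sources": []}

def calculator(expression: str) -> dict:
    """Hypothetical second tool: evaluate basic arithmetic only."""
    allowed = set("0123456789+-*/. ()")
    if not set(expression) <= allowed:
        raise ValueError("Unsupported characters in expression")
    # eval is safe here only because the input is restricted to arithmetic characters
    return {"expression": expression, "result": eval(expression)}

def make_tool_schema(name: str, description: str, param: str, param_desc: str) -> dict:
    """Build one OpenAI-style function tool schema with a single string parameter."""
    return {
        "type": "function",
        "name": name,
        "description": description,
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {param: {"type": "string", "description": param_desc}},
            "required": [param],
            "additionalProperties": False,
        },
    }

tool_schema = [
    make_tool_schema("web_search", "Search the web for up-to-date information.", "query", "Search query."),
    make_tool_schema("calculator", "Evaluate a basic arithmetic expression.", "expression", "Expression to evaluate."),
]

TOOL_REGISTRY = {"web_search": web_search, "calculator": calculator}
```

The model then picks between the tools based on their descriptions, and the dispatch code stays unchanged.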

Conclusion

In this post, I explained how an AI agent works and how it extends the capabilities of a large language model to interact with its environment, perform actions, and solve tasks through the use of tools. I also explained function calling and how it enables LLMs to call tools. I demonstrated how to create a tool schema for function calling that defines when and how an LLM should call a tool to perform an action. I defined a web search function using Tavily to fetch information from the web, and then showed step by step how to build a web search agent using function calling and GPT-5-mini as the LLM. In the end, we built a web search agent capable of retrieving up-to-date information from the internet to answer user queries.

Check out my GitHub repo, GenAI-Courses, where I've published more courses on various Generative AI topics. It also includes a guide on building an Agentic RAG using function calling.

Reach out to me via:

Email: [email protected]

Linkedin: https://www.linkedin.com/in/ayoola-olafenwa-003b901a9/

References

https://platform.openai.com/docs/guides/function-calling?api-mode=responses

https://docs.tavily.com/documentation/api-reference/endpoint/search
