F22 Labs Blogs / AI

Groq Function calling and its Tool use

Feb 13, 2025 · 5 min read
Written by Sakthivel

Function calling with Large Language Models (LLMs) is a technique that enhances the capabilities of AI systems by allowing them to interact with external functions or tools. This approach enables LLMs to recognise when specific functions are needed to complete a task and to generate appropriate inputs for those functions.

By integrating external capabilities, LLMs can produce more structured and reliable outputs, particularly for tasks involving data retrieval, API interactions, or complex computations. This bridging of natural language understanding with specific computational tasks significantly expands the range and accuracy of operations an LLM can perform, making them more versatile and powerful in real-world applications.

Tool Use with Groq API

Groq provides tool-use capabilities in its AI models. Let's look at how its tool specification, tool choice, and tool call structure work:

Tool Specifications

Tools: A list where each item describes one tool the model is allowed to call.

Type: A category name for each tool (currently "function"), telling the API how to interpret the entry.

Function:

  • Description: Explains what the function does and when to use it.
  • Name: A unique name for each function, so you can easily call it when needed.
  • Parameters: A guide on what inputs the function needs, including required and optional inputs, their types, and any rules. This ensures the function is used correctly.

Tool Choice

The tool_choice parameter lets the model know whether it can use tools and which ones to use, making the model more flexible for different needs. Groq's tool_choice parameter accepts the following options:

  • "none" or null: Text-only responses, no function calls.
  • "auto": Model decides between text or function calls.
  • "required" or a specific function name: Forces the model to call a function.
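The three settings above map directly onto the request payload. A minimal sketch of how they might be passed (the build_request helper and the sample message are illustrative, not from the article):

```python
def build_request(messages, tools, tool_choice):
    """Assemble keyword arguments for client.chat.completions.create()."""
    return {
        "model": "llama3-groq-70b-8192-tool-use-preview",
        "messages": messages,
        "tools": tools,
        "tool_choice": tool_choice,  # "none", "auto", or "required"
    }

# "auto" lets the model decide; "none" forces a plain-text reply;
# "required" forces at least one tool call.
request = build_request(
    messages=[{"role": "user", "content": "List repos for user octocat"}],
    tools=[],  # tool definitions would go here
    tool_choice="auto",
)
```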

Tool Structure

When a model uses a tool, it sends back a response with something called a tool_call object. 


Groq's tool object:

  • id: Unique identifier for the tool call.
  • name: The name of the tool being used.
  • parameters: Object with all necessary details for the tool's operation.

Example Tool Structure

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_repo_names",
            "description": "Get repository names and links from GitHub based on username",
            "parameters": {
                "type": "object",
                "properties": {
                    "username": {
                        "type": "string",
                        "description": "The GitHub username",
                    }
                },
                "required": ["username"],
            },
        },
    },
]

Handling Tool Results 

  1. The tool is executed based on the call structure.
  2. The result is returned to the model.
  3. The model incorporates the result into its response.
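The three steps above can be sketched as a small loop, assuming the response shape of the Groq Python client (the handle_tool_calls helper itself is illustrative, not from the article):

```python
import json

def handle_tool_calls(tool_calls, available_functions, messages):
    """Run each requested tool and append its result to the conversation."""
    for tool_call in tool_calls:
        fn = available_functions[tool_call.function.name]  # step 1: execute the tool
        args = json.loads(tool_call.function.arguments)
        result = fn(**args)
        messages.append({                                  # step 2: return the result
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": str(result),
        })
    return messages  # step 3: send back so the model can form its final reply
```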

Example - Google Calendar Agent

1. Library Imports and Initialization

We import the necessary libraries and set up our tools, including the Groq client and Google Calendar. The Groq API key is loaded from the .env file, the Google Calendar credentials are obtained by following the Google Calendar Simple API documentation, and the timezone is set to UTC.

import os
import json
import pytz
from groq import Groq
from dotenv import load_dotenv
from gcsa.google_calendar import GoogleCalendar

load_dotenv()
client = Groq(api_key=os.getenv("GROQ_API_KEY"))  # API key loaded from the .env file
MODEL = 'llama3-groq-70b-8192-tool-use-preview'  # selected model
gc = GoogleCalendar(credentials_path=os.getenv("CREDENTIALS_PATH"))
timezone = pytz.UTC

Looking for a real-world implementation of function calling? Our AI POC demonstrates how we built an intelligent Google Calendar agent that leverages Groq's function calling capabilities to handle event creation, scheduling, and management. This implementation showcases the practical application of tool use in AI systems.

2. Tool for the Create Event Function:

The create_event tool helps set up calendar events by requiring details like the event title, attendees' emails, and timing. It ensures correct information is provided and creates the event accordingly.

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_event",
            "description": "Create a Google Calendar event",
            "parameters": {
                "type": "object",
                "properties": {
                    "event_title": {
                        "type": "string",
                        "description": "The title of the event",
                    },
                    "start_date": {
                        "type": "string",
                        "description": "The start date of the event in YYYY-MM-DD format",
                    },
                    "start_time": {
                        "type": "string",
                        "description": "The start time of the event in HH:MM:SS format in 'Asia/Kolkata' timezone",
                    },
                    "end_time": {
                        "type": "string",
                        "description": "The end time of the event in HH:MM:SS format in 'Asia/Kolkata' timezone",
                    },
                    "emails": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        },
                        "description": "The emails of the attendees",
                    }
                },
                "required": ["event_title", "start_date", "start_time", "end_time", "emails"],
            },
        },
    }
]

3. Function for Creating an Event:

The create_event function sets up a new event by checking for scheduling conflicts and attendee availability. If everything is clear, it creates the event on Google Calendar and handles any errors that occur.

import dateparser
from gcsa.event import Event
from gcsa.attendee import Attendee
from gcsa.reminders import EmailReminder
from gcsa.conference import ConferenceSolutionCreateRequest, SolutionType

def create_event(event_title, start_date, start_time, end_time, emails):
    # Combine the date and time fields supplied by the model into full datetimes
    # (the tool defines no end date, so the event ends on the same day).
    start = f"{start_date} {start_time}"
    end = f"{start_date} {end_time}"
    reminder_minutes = 30
    min_time = dateparser.parse(start).astimezone(timezone)
    max_time = dateparser.parse(end).astimezone(timezone)
    # Abort if the calendar slot or any attendee is busy.
    busy_info = check_busy_events(start, end)
    if busy_info:
        return busy_info
    all_free, busy_details = check_users_availability(emails, start, end)
    if not all_free:
        return f"{busy_details}"
    try:
        attendees = [Attendee(email=email) for email in emails]
        event = Event(
            event_title,
            start=min_time,
            end=max_time,
            reminders=[EmailReminder(minutes_before_start=reminder_minutes)],
            attendees=attendees,
            conference_solution=ConferenceSolutionCreateRequest(solution_type=SolutionType.HANGOUTS_MEET),
        )
        event = gc.add_event(event)
        return f"Event '{event_title}' created successfully."
    except ValueError as ve:
        return f"Error: {ve}"

4. Handling Conversations

The run_conversation function processes user inputs and gets responses from the API. If the API suggests using a tool, the function calls it and displays the results.

msg = {"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."}
messages = [msg]

def run_conversation(user_input, history):
    global messages, msg
    messages.append({"role": "user", "content": user_input})
    response = chat_completion_request(messages, tools)
    re_msg = response.choices[0].message
    if re_msg.tool_calls is None:
        return str(re_msg)
    else:
        tool_call_lst = re_msg.tool_calls
        print(tool_call_lst)
        available_functions = {
            "create_event": create_event,
        }
        messages = [msg]  # reset the history to just the system prompt
        for tool_call in tool_call_lst:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(**function_args)
        return str(function_response)
5. Handling Input and Output:


User request: Create a meeting with jhon@gmail.com from 7 AM to 9 AM on 27th May 2024.

Detected function/tool:

ChatCompletionMessageToolCall(
    id='call_0ph5',
    function=Function(
        arguments='{"event_title": "Syncup", "emails": ["Jhon@gmail.com"], "start_date": "2024-05-27", "start_time": "07:00:00", "end_time": "09:00:00"}',
        name='create_event'
    ),
    type='function'
)

Function call response:

Event 'Syncup' created successfully.
(Screenshot: function calling example in the chatbot.)

Conclusion

Groq offers function-calling capabilities that provide a flexible approach, resembling standard Python programming. This approach allows developers to define and manage function calls with more control, enabling customization and tailored logic handling. Groq’s implementation is particularly suited for those who need a high level of flexibility and are comfortable building custom logic to manage function calls and responses.

When considering Groq for your projects, it's important to evaluate your specific requirements, including the need for control, ease of integration, and the level of custom logic you're prepared to implement. Groq’s approach can be a powerful choice for those looking to craft unique solutions tailored to their application's needs.

Frequently Asked Questions

1. How does Groq's tool_choice parameter work?

Groq's tool_choice parameter controls tool usage in the model. It can be set to "none" for text-only responses, "auto" for the model to decide, or "required" to force function calls, providing flexibility for different needs.

2. What are the key components of Groq's tool call structure?

Groq's tool call structure includes an id (unique identifier), name (of the tool being used), and parameters (object with necessary details for the tool's operation). This structure helps manage and execute function calls effectively.

3. How does the run_conversation function work in Groq's implementation?

The run_conversation function processes user inputs and gets responses from the API. If the API suggests using a tool, the function calls it and displays the results, managing the flow of conversation and tool usage.

Sakthivel

A software engineer fascinated by AI and automation, dedicated to building efficient, scalable systems. Passionate about technology and continuous improvement.

