
Function calling with Large Language Models (LLMs) allows models to interact with external tools and systems in a controlled way. This guide is written for developers who want to understand how Groq approaches tool use in practice, without over-engineering or abstract theory getting in the way. With function calling, an LLM can identify when an external function is required and generate structured inputs for that function, improving reliability in tasks like API calls, scheduling, and data retrieval.
By combining natural language understanding with deterministic execution, function calling expands what LLMs can safely handle in real-world applications, producing more structured and reliable outputs for tasks involving data retrieval, API interactions, or complex computations.
Groq supports structured tool use that closely mirrors traditional programming workflows, giving developers predictable control over when and how functions are invoked. Let's look at the key building blocks:

Tools: A structured list defining every function the model is allowed to call, including scope and usage constraints.
Type: A category name for each tool (currently "function"), making it easier to find and organize them.
Function: The definition of the callable itself, including its name, a description the model uses to decide when to call it, and a JSON Schema describing its parameters.
The tool_choice parameter controls whether the model may respond with plain text, invoke tools automatically, or is forced to call a specific function, making the model adaptable to different needs.
Groq's tool_choice parameter options: "none" for text-only responses, "auto" to let the model decide, or "required" to force a function call.
When a tool is invoked, the model returns a tool_call object that clearly separates intent, function name, and parameters.
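As a sketch of where tool_choice fits, assuming the OpenAI-compatible chat completions request shape that Groq exposes, the setting is simply another field in the request body (forcing one specific function uses an object form instead of a string; the helper below is illustrative, not part of the Groq SDK):

```python
def build_request(messages, tools, tool_choice="auto"):
    """Assemble a chat completions request body; tool_choice may be
    'none', 'auto', or 'required'."""
    if tool_choice not in ("none", "auto", "required"):
        raise ValueError(f"unsupported tool_choice: {tool_choice}")
    return {
        "model": "llama3-groq-70b-8192-tool-use-preview",
        "messages": messages,
        "tools": tools,
        "tool_choice": tool_choice,
    }

payload = build_request(
    messages=[{"role": "user", "content": "Schedule a sync-up tomorrow"}],
    tools=[],                # a tools list like the examples in this guide goes here
    tool_choice="required",  # force the model to call a function
)
```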
Groq's tool object:
Example Tool Structure
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_repo_names",
            "description": "Get repository names and links from GitHub based on username",
            "parameters": {
                "type": "object",
                "properties": {
                    "username": {
                        "type": "string",
                        "description": "The GitHub username",
                    }
                },
                "required": ["username"],
            },
        },
    },
]
The required libraries are imported, and the Groq and Google Calendar clients are initialized to support tool-based event creation. The Groq API key is loaded from the .env file, the Google Calendar credentials are set up following the Google Calendar Simple API (gcsa) documentation, and the timezone is set to UTC.
import os
import json

import pytz
from dotenv import load_dotenv
from gcsa.google_calendar import GoogleCalendar
from groq import Groq

load_dotenv()
client = Groq(api_key=os.getenv("GROQ_API_KEY"))  # GROQ_API_KEY is read from .env
MODEL = 'llama3-groq-70b-8192-tool-use-preview'  # selected model
gc = GoogleCalendar(credentials_path=os.getenv("CREDENTIALS_PATH"))
timezone = pytz.UTC
The rest of this guide walks through a Google Calendar agent that uses Groq's function calling to handle event creation, scheduling, and management, showing the practical application of tool use in an AI system.
The create_event tool defines a strict schema for event creation, ensuring required details such as timing and attendees are provided before execution.
tools = [{
    "type": "function",
    "function": {
        "name": "create_event",
        "description": "Create a Google Calendar event",
        "parameters": {
            "type": "object",
            "properties": {
                "event_title": {
                    "type": "string",
                    "description": "The title of the event",
                },
                "start_date": {
                    "type": "string",
                    "description": "The start date of the event in YYYY-MM-DD format",
                },
                "start_time": {
                    "type": "string",
                    "description": "The start time of the event in HH:MM:SS format in 'Asia/Kolkata' timezone",
                },
                "end_time": {
                    "type": "string",
                    "description": "The end time of the event in HH:MM:SS format in 'Asia/Kolkata' timezone",
                },
                "emails": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    },
                    "description": "The emails of the attendees",
                }
            },
            "required": ["event_title", "start_date", "start_time", "end_time", "emails"],
        },
    },
}]
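Models occasionally omit required arguments, so it can help to validate the generated arguments against the schema before executing the tool. A minimal sketch that only checks the schema's required list (a full validator such as the jsonschema package would also check types); the trimmed schema here is an assumption for illustration:

```python
def check_required_args(tool_schema, arguments):
    """Return the required parameters missing from the model-generated
    arguments dict, in schema order."""
    params = tool_schema["function"]["parameters"]
    return [name for name in params.get("required", []) if name not in arguments]

# Trimmed version of the create_event schema, keeping only the required list
schema = {
    "type": "function",
    "function": {
        "name": "create_event",
        "parameters": {
            "type": "object",
            "required": ["event_title", "start_date", "start_time", "end_time", "emails"],
        },
    },
}

missing = check_required_args(schema, {"event_title": "Syncup", "emails": []})
# missing == ["start_date", "start_time", "end_time"]
```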
The create_event function checks the calendar and attendee availability, resolves conflicts, and creates the event on Google Calendar, handling execution errors gracefully.
import dateparser
from gcsa.event import Event
from gcsa.attendee import Attendee
from gcsa.reminders import EmailReminder
from gcsa.conference import ConferenceSolutionCreateRequest, SolutionType

def create_event(event_title, emails, start, end):
    reminder_minutes = 30
    min_time = dateparser.parse(start).astimezone(timezone)
    max_time = dateparser.parse(end).astimezone(timezone)
    # check_busy_events and check_users_availability are helper functions
    # that query the calendar's free/busy information
    busy_info = check_busy_events(start, end)
    if busy_info:
        return busy_info
    all_free, busy_details = check_users_availability(emails, start, end)
    if not all_free:
        return f"{busy_details}"
    try:
        attendees = [Attendee(email=email) for email in emails]
        event = Event(
            event_title,
            start=min_time,
            end=max_time,
            reminders=[EmailReminder(minutes_before_start=reminder_minutes)],
            attendees=attendees,
            conference_solution=ConferenceSolutionCreateRequest(solution_type=SolutionType.HANGOUTS_MEET),
        )
        event = gc.add_event(event)
        return f"Event '{event_title}' created successfully."
    except ValueError as ve:
        return f"Error: {ve}"
The run_conversation function orchestrates user input, tool invocation, and response handling, ensuring that tool calls are executed only when required. If the API response requests a tool, the function calls it and returns the result.
msg = {"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."}
messages = [msg]

def run_conversation(user_input, history):
    global messages, msg
    messages.append({"role": "user", "content": user_input})
    # chat_completion_request is a thin wrapper around the Groq chat completions API
    response = chat_completion_request(messages, tools)
    re_msg = response.choices[0].message
    if re_msg.tool_calls is None:
        return str(re_msg)
    else:
        tool_call_lst = re_msg.tool_calls
        print(tool_call_lst)
        available_functions = {
            "create_event": create_event,
        }
        messages = [msg]  # reset the history after a tool call
        for tool_call in tool_call_lst:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(**function_args)
        return str(function_response)
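The dispatch step in run_conversation — looking up the function by name and unpacking the JSON arguments — can be exercised on its own without API access. A small sketch with a stubbed tool function and a hand-written arguments payload (both are assumptions for illustration):

```python
import json

def create_event_stub(event_title, emails, start, end):
    """Stand-in for the real create_event; returns the same success message."""
    return f"Event '{event_title}' created successfully."

available_functions = {"create_event": create_event_stub}

# Simulate the name and JSON arguments string the model returns on a tool call
tool_name = "create_event"
tool_arguments = json.dumps({
    "event_title": "Syncup",
    "emails": ["Jhon@gmail.com"],
    "start": "2024-05-27T07:00:00",
    "end": "2024-05-27T09:00:00",
})

function_to_call = available_functions[tool_name]
function_args = json.loads(tool_arguments)
result = function_to_call(**function_args)
# result == "Event 'Syncup' created successfully."
```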
A request to schedule a sync-up from 7 AM to 9 AM on 27 May 2024 is interpreted, mapped to a structured tool call, executed, and returned as a deterministic result.
Detected function / tool:
ChatCompletionMessageToolCall(
    id='call_0ph5',
    function=Function(arguments='{"event_title": "Syncup", "emails": ["Jhon@gmail.com"], "start": "2024-05-27T07:00:00", "end": "2024-05-27T09:00:00"}', name='create_event'),
    type='function')
Function call response:
Event 'Syncup' created successfully.
Groq's function-calling approach offers a controlled, developer-friendly way to integrate external tools while maintaining predictability and low latency, in a style that resembles standard Python programming. Developers define and manage function calls directly, which enables customization and tailored logic handling. Groq's implementation is particularly suited to teams that need a high level of flexibility and are comfortable building custom logic to manage function calls and responses.
When considering Groq for your projects, evaluate your specific requirements: the need for control, ease of integration, and the level of custom logic you're prepared to implement. For teams that value flexibility and explicit control, Groq is a strong option for production-grade AI systems tailored to their application's needs.
Groq's tool_choice parameter controls tool usage in the model. It can be set to "none" for text-only responses, "auto" for the model to decide, or "required" to force function calls, providing flexibility for different needs.
Groq's tool call structure includes an id (unique identifier), name (of the tool being used), and parameters (object with necessary details for the tool's operation). This structure helps manage and execute function calls effectively.
The run_conversation function processes user inputs and gets responses from the API. If the API suggests using a tool, the function calls it and displays the results, managing the flow of conversation and tool usage.