
How to Use Hugging Face with OpenAI-Compatible APIs?

Written by Dharshan
Oct 22, 2025
4 Min Read

As large language models become more widely adopted, developers are looking for flexible ways to integrate them without being tied to a single provider. Hugging Face’s newly introduced OpenAI-compatible API offers a practical solution, allowing you to run models like LLaMA, Mixtral, or DeepSeek using the same syntax as OpenAI’s Python client. According to Hugging Face, hundreds of models are now accessible using the OpenAI-compatible client across providers like Together AI, Replicate, and more.

In this article, you’ll learn how to set up and use the OpenAI-compatible interface step by step: from configuring your environment and authenticating your API key, to choosing the right model and provider, and making your first chat completion request. We’ll also look at how to compare different providers based on speed, cost, or availability, all without changing your existing code.

Start building with more flexibility, right from your existing codebase.

What is Hugging Face Inference Providers?

Hugging Face Inference Providers is a system that lets you run AI models from many different backends, including Hugging Face's own servers, cloud platforms such as AWS and Azure, and third-party companies, all through a single interface. Instead of learning a separate API for each provider, you work with one consistent, unified method.

This is particularly helpful for developers who want to switch between providers based on performance, cost, or availability without modifying their code each time. Combined with OpenAI compatibility, it means you can write OpenAI-style code and run it against models hosted by any provider Hugging Face supports.

OpenAI Compatibility in Hugging Face

Hugging Face recently introduced support for OpenAI-compatible APIs, allowing you to call methods like chat.completions.create() or embeddings.create() just as you would with the OpenAI Python client. The key difference is that instead of sending your request to OpenAI's servers, you point it at Hugging Face's API, which can route the call to a variety of models, both open and third-party. This makes it possible to plug in alternatives like Mixtral, Kimi, or LLaMA with minimal changes to your existing code.

Suggested Reads- How To Use Open Source LLMs (Large Language Model)?

How to Set Up OpenAI-Compatible APIs on Hugging Face

To use OpenAI-style code with Hugging Face, you only need to update your API settings and model reference. This section walks you through the exact steps to get started, including how to select specific providers like Together AI or Replicate. 

Unlike OpenAI, you must also specify which provider will run the model by adding a :provider suffix to the model name. This section shows exactly how to set that up.

Step 1: Install Required Packages

pip install openai python-dotenv

Step 2: Configure Your API Key

Create a .env file and add your Hugging Face token:

HF_TOKEN=hf_your_token_here

Then, load it in your Python script:

from dotenv import load_dotenv
import os

# Load the token from .env so the key never has to be hard-coded
load_dotenv()
api_key = os.getenv("HF_TOKEN")

Step 3: Initialize the OpenAI Client

from openai import OpenAI

# Point the OpenAI client at Hugging Face's router instead of OpenAI's servers
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=api_key
)

Step 4: Run a Model (Provider Suffix Required)

You must include the provider in the model name using the :provider format, for example: model-id:provider

You can explore available models here: https://huggingface.co/models

To check which inference providers support a model:

  1. Open the model page on Hugging Face.
  2. In the top-right corner, click Deploy.
  3. Then click Inference API Providers.
  4. You'll see a list of supported providers for that model.

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1:together",  # ":together" routes the request to Together AI; the provider suffix is required
    messages=[{"role": "user", "content": "Tell me a fun fact."}]
)
print(response.choices[0].message.content)
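
Because the interface is OpenAI-compatible, other client features generally carry over too. Below is a minimal streaming sketch using the same client; it assumes the model and provider you've chosen support streaming:

stream = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1:together",
    messages=[{"role": "user", "content": "Tell me a fun fact."}],
    stream=True  # ask the provider to stream tokens as they are generated
)
for chunk in stream:
    # each chunk carries a small delta of the response text
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)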

If you don't want to specify a provider manually, you can use :auto, which automatically selects a supported provider for the model.
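
For example, the same request with automatic provider selection is just a one-line change (a sketch based on the :auto behaviour described above):

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1:auto",  # let Hugging Face pick a supported provider
    messages=[{"role": "user", "content": "Tell me a fun fact."}]
)
print(response.choices[0].message.content)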

Exploring Inference Providers on Hugging Face

Hugging Face's Inference Providers system gives you access to a wide range of AI models hosted by different backend providers, all through one unified API. When using the OpenAI-compatible interface, you specify the provider by adding a suffix like :together or :replicate to the model name. This tells Hugging Face exactly where to route the request.

Each provider offers different strengths: some are optimized for speed, others for specific hardware, and some for cost-efficiency. Here's a list of the most commonly used providers you can access via Hugging Face:

| Provider | Suffix | Highlights |
| --- | --- | --- |
| Hugging Face | :hf-inference | Models hosted directly by Hugging Face |
| Together AI | :together | Fast LLM inference with sub-100 ms latency |
| Replicate | :replicate | Supports both text and image models |
| fal.ai | :fal-ai | Lightweight, fast response time |
| SambaNova | :sambanova | Enterprise-grade AI infrastructure |
| Groq | :groq | High-speed inference on custom silicon |
| Nscale | :nscale | Scalable inference with private model hosting |
| Cerebras | :cerebras | AI models running on wafer-scale compute |

To use any of these, just append the suffix to your model name. For example:

model="deepseek-ai/DeepSeek-R1:together"

You can browse huggingface.co/models and filter by provider to find out which models are available under each backend. If you request a model from a provider that doesn't support it, the request will fail, so it's worth confirming provider support before you deploy.

This system gives you flexibility to try different models or backends just by changing the provider tag, all without modifying your application logic.
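
To make that concrete, here's a rough sketch that times the same prompt across a few provider suffixes. The provider list below is only an example; check the model page to confirm which providers actually serve the model you're testing:

import os
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.getenv("HF_TOKEN")
)

# Example provider suffixes; confirm support on the model page before running
providers = ["together", "sambanova", "groq"]
prompt = [{"role": "user", "content": "Tell me a fun fact."}]

for provider in providers:
    start = time.time()
    response = client.chat.completions.create(
        model=f"deepseek-ai/DeepSeek-R1:{provider}",
        messages=prompt
    )
    elapsed = time.time() - start
    print(f"{provider}: {elapsed:.2f}s, {len(response.choices[0].message.content)} characters")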

Conclusion

Setting up Hugging Face's OpenAI-compatible API involves just a few key steps: updating the base URL, providing your Hugging Face token, and including the required provider suffix in the model name. This simple setup allows developers to access a wide range of models without changing their existing code.

Throughout this blog, we explored how this compatibility works, why specifying a provider is essential, and how it fits into Hugging Face's broader inference system. It's a practical and flexible approach for anyone looking to build with language models beyond a single provider. Developers who want to understand alternative transport mechanisms can also check STDIO transport in MCP to see how other protocols handle similar connections.

Dharshan

Passionate AI/ML Engineer with interest in OpenCV, MediaPipe, and LLMs. Exploring computer vision and NLP to build smart, interactive systems.
