Bring Your Own LLM
Use Play.ai agents with responses generated by your own LLM.
At Play.ai, we provide a built-in LLM+RAG system that you can use to power your agents. However, we understand that you may want to use your own LLM for various reasons, such as:
- You have a custom LLM that you have trained on your own data.
- You have a license to use a specific LLM that your company has purchased.
- You want to use a specific LLM that we do not provide.
In these cases, you can have your own LLM generate responses for your agents at Play.ai. We provide an API that lets you configure your agent with your own LLM, as long as it is OpenAI API-Compatible; a single API call to update your agent is all you need to start using it.
How to bring your own LLM
To bring your own LLM, you need:
- A Play.ai account
- An API key to authenticate with the Play.ai API
- An LLM that provides an OpenAI-Compatible API
- Calling the API with the `llm` field set in the Update Agent endpoint
To bring your own OpenAI API-Compatible LLM, you will need to provide us with your LLM's URL and its API key. We will use this information to communicate with your LLM when your agent needs to generate responses.
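As a rough sketch, the information you hand over could be bundled like this. The `baseUrl`/`apiKey` field names here are illustrative assumptions, not the official schema; the authoritative shape of the `llm` field is on the Update Agent API page:

```python
# Hypothetical shape of the `llm` configuration -- field names are
# illustrative assumptions; consult the Update Agent API reference.
llm_config = {
    "baseUrl": "https://llm.example.com/v1",  # your OpenAI-Compatible API root
    "apiKey": "sk-your-llm-api-key",          # credential Play.ai uses to call it
}

# Play.ai would call {baseUrl}/chat/completions with this key
# whenever your agent needs to generate a response.
print(llm_config["baseUrl"] + "/chat/completions")
```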
Updating an agent to use your own LLM
Once you have created an agent, you can update it with your LLM. To do so, call our Update Agent endpoint with the `llm` field set.
For example, assuming `agentId` is the ID of an agent you have created via our Web UI or through our Create Agent endpoint, the request below would update the agent:
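A sketch of such a request in Python, assuming a placeholder base URL (`https://api.play.ai/api/v1`), an illustrative `llm` schema, and the PATCH method; confirm the exact endpoint, method, and field names on the Update Agent API page. The request is built locally but not sent:

```python
import json
import urllib.request

agent_id = "myagent-123tVPa7XMBX5Dmym613j"
API_BASE = "https://api.play.ai/api/v1"  # placeholder base URL -- check the API reference

payload = {
    "llm": {  # illustrative field names, not the official schema
        "baseUrl": "https://llm.example.com/v1",
        "apiKey": "sk-your-llm-api-key",
    }
}

req = urllib.request.Request(
    f"{API_BASE}/agents/{agent_id}",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_PLAYAI_API_KEY",  # your Play.ai API key
        "Content-Type": "application/json",
    },
    method="PATCH",  # verify the HTTP method on the Update Agent page
)
# urllib.request.urlopen(req) would send it; here we only build the request.
print(req.method, req.full_url)
```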
For a full step-by-step guide on how to bring your own LLM, make sure to check the tutorial below.
Frequently Asked Questions
Can I set custom headers or other configurations?
Yes, you can set custom headers and other configurations through the `llm` field. For the current configuration options, check the details on the `llm` field in the Update Agent or Create Agent API pages.
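As an illustration, a configuration with custom headers might look like the sketch below; the option names (`customHeaders` in particular) are assumptions, so confirm them against the `llm` field documentation:

```python
# Hypothetical `llm` configuration with extras -- option names are
# illustrative assumptions; see the Update Agent / Create Agent docs.
llm_config = {
    "baseUrl": "https://llm.example.com/v1",
    "apiKey": "sk-your-llm-api-key",
    "customHeaders": {"X-Tenant": "acme"},  # extra headers sent on each request
}
print(sorted(llm_config))
```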
What are the compatibility requirements for my LLM API?
We expect the LLM URL provided in the `llm` field to be OpenAI API-Compatible. In other words, it must be an API that serves an endpoint at `/chat/completions` and returns an SSE response when prompted.
Typically, these APIs can be used directly with OpenAI’s or LangChain’s SDKs.
Below, you can find an example of a request and response to an OpenAI-Compatible API.
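For instance, a streaming request body and a single SSE chunk look roughly like this; the shapes follow OpenAI's Chat Completions API. The snippet builds the request body and parses a sample chunk rather than calling a live server:

```python
import json

# Request body an OpenAI-Compatible server must accept at POST {base}/chat/completions
request_body = {
    "model": "my-model",
    "stream": True,  # Play.ai expects an SSE (text/event-stream) response
    "messages": [{"role": "user", "content": "Hello!"}],
}

# One SSE line from a typical streamed response; the stream ends with "data: [DONE]"
sse_line = 'data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hi"},"finish_reason":null}]}'

chunk = json.loads(sse_line[len("data: "):])
print(chunk["choices"][0]["delta"]["content"])  # -> Hi
```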
This is, therefore, the contract your LLM’s API must fulfill to be compatible with Play.ai.
How can I test if my LLM API is compatible?
Besides manually verifying that your API returns a response similar to the one shown above, you can rely on our built-in check: our backend will test your API for compatibility before returning a successful response. If your API fails the compatibility check, our servers will return a `400` error with a message describing the issues found.
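You can also pre-check your own server before touching the agent: POST a minimal streaming request to `/chat/completions` and confirm an SSE stream comes back. This is a sketch, not Play.ai's actual check; the demo runs against a tiny mock server so it is self-contained, and you would point `check` at your real base URL instead:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def check(base_url: str) -> bool:
    """Rough compatibility probe: does /chat/completions stream SSE?"""
    body = json.dumps({
        "model": "test", "stream": True,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = Request(base_url + "/chat/completions", data=body,
                  headers={"Content-Type": "application/json"}, method="POST")
    with urlopen(req) as resp:
        is_sse = resp.headers.get("Content-Type", "").startswith("text/event-stream")
        first = resp.readline().decode()
        return is_sse and first.startswith("data: ")

# --- Mock OpenAI-Compatible server (stand-in for your real LLM API) ---
class MockLLM(BaseHTTPRequestHandler):
    def do_POST(self):
        self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.end_headers()
        chunk = {"choices": [{"index": 0, "delta": {"content": "pong"}}]}
        self.wfile.write(f"data: {json.dumps(chunk)}\n\n".encode())
        self.wfile.write(b"data: [DONE]\n\n")
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), MockLLM)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok = check(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
print("compatible" if ok else "incompatible")
```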
How can I use my own RAG and integrations?
Currently, you can use the built-in RAG and integrations provided by Play.ai, with your LLM working only as a response generator.
Alternatively, it is entirely possible to run your own RAG and integrations under the hood of your LLM API, having your LLM interact with them before generating responses.
We are working on supporting custom RAGs and integrations, in a similar way to how we support custom LLMs. Stay tuned for updates on this page.
Bring Your Own LLM Tutorial
To bring your own LLM, you need:
- A Play.ai account
- An API key to authenticate with the Play.ai API
- An LLM that provides an OpenAI-Compatible API
- Calling the API with the `llm` field set in the Update Agent endpoint
To bring your own LLM to Play.ai, follow the steps below:
Create your Play.ai account and get your User ID and API Key
Create your agent
To set the LLM, you will need the ID of an existing agent, e.g. `myagent-123tVPa7XMBX5Dmym613j`. Your existing agents (created via the API or the Web UI) can be found in your My Agents page.
To create a new agent, you can use our Web UI or our Create Agent (POST) endpoint.
Update the agent with your LLM
Finally, you need an LLM that provides an OpenAI-Compatible API, and to call our endpoint to update your agent. Note that creating a new agent with the `llm` field set would have the same effect described in this step.
With your LLM's URL and API key in hand, you can update your agent. To do this, call our Update Agent endpoint with the `llm` field set, as described next.
Assuming `agentId` is the ID of an agent (created either via our Web UI or through our Create Agent endpoint), the request below would update the agent:
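The steps above can be sketched in Python as follows, assuming a placeholder base URL, an illustrative `llm` schema, and the PATCH method (confirm all three on the Update Agent API page). The request is built but not sent, with the send-and-handle logic shown in comments:

```python
import json
import urllib.error
import urllib.request

API_BASE = "https://api.play.ai/api/v1"  # placeholder -- confirm in the API reference

def build_update_request(agent_id: str, play_api_key: str,
                         llm_url: str, llm_api_key: str) -> urllib.request.Request:
    """Build (but do not send) the Update Agent request with the `llm` field set.
    Field names inside `llm` are illustrative assumptions, not the official schema."""
    payload = {"llm": {"baseUrl": llm_url, "apiKey": llm_api_key}}
    return urllib.request.Request(
        f"{API_BASE}/agents/{agent_id}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {play_api_key}",
                 "Content-Type": "application/json"},
        method="PATCH",  # verify the method on the Update Agent page
    )

req = build_update_request("myagent-123tVPa7XMBX5Dmym613j",
                           "YOUR_PLAYAI_API_KEY",
                           "https://llm.example.com/v1", "sk-your-llm-key")
# To actually send it:
#   try:
#       with urllib.request.urlopen(req) as resp:
#           print(resp.status)  # 200 -> agent now uses your LLM
#   except urllib.error.HTTPError as err:
#       # 400 -> compatibility check failed; the body describes the issues
#       print(err.code, err.read().decode())
print(req.method, req.full_url)
```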
If you get a `200` response, your agent is now configured to use your LLM. The new settings will affect all conversations started after the update.
There are checks in place: when updating the `llm` property of an agent (or creating a new one with `llm` set), our backend will test your API for compatibility before returning a successful response. If your API fails the compatibility check, our servers will return a `400` error with a message describing the issues found.
Try it out
To test your agent, you can talk to it using our Web UI at the direct URL `https://play.ai/agent/{{yourAgentId}}`, or you can leverage our Websocket API.