curl --request PATCH \
  --url 'https://api.play.ai/api/v1/agents/{{agentId}}' \
  --header 'Authorization: Bearer {{yourApiKey}}' \
  --header 'X-User-Id: {{yourUserId}}' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "llm": {
      "baseURL": "https://my.own.llm.api.example.com",
      "apiKey": "sk-the-api-key-for-the-llm-api"
    }
  }'

Use Play.ai agents with responses generated by your own LLM.

At Play.ai, we provide a built-in LLM+RAG system that you can use to power your agents. However, we understand that you may want to use your own LLM for various reasons, such as:

  • You have a custom LLM that you have trained on your own data.
  • You have a license to use a specific LLM that your company has purchased.
  • You want to use a specific LLM that we do not provide.

In these cases, you can have your own LLM generate responses for your agents at Play.ai. We provide an API that allows you to configure your agent with your own LLM, as long as it is OpenAI API-Compatible. A single API call to update your agent is all you need to start using it.

How to bring your own LLM

To bring your own OpenAI API-Compatible LLM, you will need to provide us with the base URL of your LLM API and an API key for it. We will use this information to communicate with your LLM when your agent needs to generate responses.

Updating an agent to use your own LLM

Once you have created an agent, you can update it with your LLM. To do so, call our Update Agent endpoint with the llm field set.

For a full step-by-step guide on how to bring your own LLM, see the tutorial below.

For example, assuming agentId is the ID of an agent you have created via our Web UI or through our Create Agent endpoint, the request below would update the agent:

PATCH https://api.play.ai/api/v1/agents/{{agentId}}
Authorization: Bearer {{yourApiKey}}
X-User-Id: {{yourUserId}}
Content-Type: application/json

{
  "llm": {
    "baseURL": "https://my.own.llm.api.example.com",
    "apiKey": "sk-the-api-key-for-the-llm-api"
  }
}
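The request above can also be issued from code. The helper below is an illustrative sketch (not an official Play.ai SDK; the function name is hypothetical) that assembles the URL, headers, and JSON body for the Update Agent call:

```python
import json


def build_update_request(agent_id, play_api_key, user_id, llm_base_url, llm_api_key):
    """Build (url, headers, body) for the Update Agent PATCH request.

    Illustrative helper only; pass the result to any HTTP client.
    """
    url = f"https://api.play.ai/api/v1/agents/{agent_id}"
    headers = {
        "Authorization": f"Bearer {play_api_key}",
        "X-User-Id": user_id,
        "Content-Type": "application/json",
    }
    # The llm field carries the base URL and API key of your own LLM.
    body = json.dumps({"llm": {"baseURL": llm_base_url, "apiKey": llm_api_key}})
    return url, headers, body
```

You would then send the returned body with the PATCH method using your HTTP client of choice.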



Frequently Asked Questions

Can I set custom headers or other configurations?

Yes, you can set custom headers and other configuration options through the llm field. For the full list of supported options, check the details on the llm field in the Update Agent or Create Agent API pages.

What are the compatibility requirements for my LLM API?

We expect the LLM URL provided in the llm.baseURL field to be OpenAI API-Compatible. In other words, it must be an API that serves an endpoint at /chat/completions and returns an SSE (Server-Sent Events) response when prompted with streaming enabled. Typically, these APIs can be used directly with OpenAI’s or LangChain’s SDKs.
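On the server side, an OpenAI-compatible streaming endpoint emits its completion as a sequence of `data:` lines, each carrying one chat-completion chunk, terminated by `data: [DONE]`. The sketch below (function names are illustrative, not part of any SDK) shows how such chunks are typically formatted:

```python
import json


def sse_chunk(delta_text, model="gpt-3.5-turbo"):
    """Format one chat-completion chunk as a Server-Sent Events data line."""
    payload = {
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [
            {"index": 0, "delta": {"content": delta_text}, "finish_reason": None}
        ],
    }
    # Each SSE event is a "data: <json>" line followed by a blank line.
    return f"data: {json.dumps(payload)}\n\n"


def sse_done():
    """Terminate the stream, as OpenAI-compatible APIs conventionally do."""
    return "data: [DONE]\n\n"
```

A compatible server would write one such event per generated token (or token group) to the open HTTP response.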

Below, you can find an example of a request to an OpenAI-Compatible API.

POST https://my.own.llm.api.example.com/chat/completions
Authorization: Bearer sk-the-api-key-for-the-llm-api
Content-Type: application/json

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Repeat with me: the smoke test has passed."
    }
  ],
  "stream": true
}
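The streamed response to a request like the one above arrives as `data:` lines that must be reassembled into the final text. The parser below is a minimal sketch (assuming the conventional OpenAI chunk shape shown earlier, with the full text spread across `choices[0].delta.content`):

```python
import json


def collect_content(sse_body):
    """Reassemble the assistant text from a streamed chat-completions body."""
    parts = []
    for line in sse_body.splitlines():
        # Only "data: ..." lines carry events; blank lines are separators.
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":  # conventional end-of-stream marker
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)
```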

This is, therefore, the contract your LLM’s API must fulfill to be compatible with Play.ai.

How can I test if my LLM API is compatible?

Besides manually verifying that your API returns a response similar to the one mentioned above, there are checks in place: when you update the llm property of an agent (or create a new one with llm set), our backend will test your API for compatibility before returning a successful response.

If the provided LLM API fails the compatibility check, our servers will return a 400 error with a message describing the issues found.

How can I use my own RAG and integrations?

Currently, you can use the built-in RAG and integrations provided by Play.ai, with your LLM acting only as a response generator.

Alternatively, it is entirely possible to run your own RAG and integrations under the hood of your LLM API, having your LLM interact with them before generating responses.

We are working on supporting custom RAGs and integrations, in a similar way to how we support custom LLMs. Stay tuned for updates on this page.



Bring Your Own LLM Tutorial

To bring your own LLM to Play.ai, follow the steps below:

1

Create your Play.ai account and get your User ID and API Key

2

Create your agent

To set the LLM, you will need the ID of an existing agent, e.g. myagent-123tVPa7XMBX5Dmym613j. Your existing agents (created via the API or the Web UI) can be found on your My Agents page.

To create a new agent, you can use our Web UI or our Create Agent (POST) endpoint.

Agent creation via the Web UI or the API has the same effect. You can use either method to create an agent that will use your LLM. You can also edit an agent created via the Web UI using the API, and vice versa.

3

Update the agent with your LLM

Creating an agent through the API with the llm field set would have the same effect described in this step.

Finally, you need to have an LLM that provides an OpenAI-Compatible API and call our endpoint to update your agent.

If you don’t have a compatible LLM right now, or just want to try our API out, you can use a Fake LLM we have built for this tutorial: visit and fork https://replit.com/@antonio-play/fake-llm

With your URL and API key in hand, you can update your agent with your LLM. To do this, call our Update Agent endpoint with the llm field set, as described next.

Assuming agentId is the ID of an agent created either via our Web UI or through our Create Agent endpoint, the request below would update the agent:

PATCH https://api.play.ai/api/v1/agents/{{agentId}}
Authorization: Bearer {{yourApiKey}}
X-User-Id: {{yourUserId}}
Content-Type: application/json

{
  "llm": {
    "baseURL": "https://my.own.llm.api.example.com",
    "apiKey": "sk-the-api-key-for-the-llm-api"
  }
}

If you get a 200 response, your agent is now configured to use your LLM. The new settings will affect all conversations started after the update.

There are checks in place: when updating the llm property of an agent (or creating a new one with llm set), our backend will test your API for compatibility before returning a successful response. If the provided LLM API fails the compatibility check, our servers will return a 400 error with a message describing the issues found.
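The two outcomes above can be summarized in a small sketch. The function name and messages are illustrative only; they map the documented status codes (200 for success, 400 for a failed compatibility check) to a human-readable result:

```python
def interpret_update_response(status_code, body):
    """Map an Update Agent response to a readable outcome (illustrative only)."""
    if status_code == 200:
        # Success: new conversations will use the configured LLM.
        return "Agent updated; new conversations will use your LLM."
    if status_code == 400:
        # Compatibility check failed; body describes the issues found.
        return f"Compatibility check failed: {body}"
    return f"Unexpected status {status_code}: {body}"
```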

4

Try it out

To test your agent, you can talk to it using our Web UI at the direct URL https://play.ai/agent/{{yourAgentId}}, or you can leverage our Websocket API.

Agents created via the API also appear on the Agents list page in the Web UI. You can click an agent there to access its direct URL and have conversations.