# Get Agent Stats

GET /api/v1/agent-stats/{agentId}

Returns all available statistics for a specific agent.

# Get Agent Conversations

GET /api/v1/agents/{agentId}/conversations

Retrieve all information about an agent's conversations.

### Response Headers for Pagination

| Header Name          | Type    | Description                                   |
| -------------------- | ------- | --------------------------------------------- |
| `X-Page-Size`        | integer | The number of items per page.                 |
| `X-Start-After`      | string  | The ID of the last item on the previous page. |
| `X-Next-Start-After` | string  | The ID of the last item on the current page.  |
| `X-Total-Count`      | integer | The total number of items.                    |

These headers are included in the response to help manage pagination when retrieving conversations for a specific agent.

# Get Agent Conversation

GET /api/v1/agents/{agentId}/conversations/{conversationId}

Retrieve all information about a specific agent conversation.

# Get Agent Conversation Transcript

GET /api/v1/agents/{agentId}/conversations/{conversationId}/transcript

Retrieve the transcript of a specific agent conversation.

### Response Headers for Pagination

| Header Name          | Type    | Description                                   |
| -------------------- | ------- | --------------------------------------------- |
| `X-Page-Size`        | integer | The number of items per page.                 |
| `X-Start-After`      | string  | The ID of the last item on the previous page. |
| `X-Next-Start-After` | string  | The ID of the last item on the current page.  |
| `X-Total-Count`      | integer | The total number of items.                    |

These headers are included in the response to help manage pagination when retrieving the transcript of a specific agent conversation.

# Get Agent

GET /api/v1/agents/{agentId}

Retrieve all information about an agent.

# Update Agent

PATCH /api/v1/agents/{agentId}

Updates a Play.ai Agent.
Updates the properties of the agent with the specified ID.

# Create Agent

POST /api/v1/agents

Creates a new Play.ai Agent. Required parameters include the agent's name and the agent's prompt. After you create your agent, you can start a conversation using our [Websocket API](/api-reference/websocket), or try it out through our web interface at `https://play.ai/agent/`. To update an agent, see the [Update Agent](/api-reference/endpoints/v1/agents/patch) endpoint.

# Delete External Function

DELETE /api/v1/external-functions/{functionId}

Deletes the external function with the specified ID.

# Get All External Functions

GET /api/v1/external-functions

Retrieve all information about all external functions that you have created.

# Get External Function

GET /api/v1/external-functions/{functionId}

Retrieve all information about the external function with the specified ID.

# Update External Function

PATCH /api/v1/external-functions/{functionId}

Updates the properties of the external function with the specified ID.

# Create External Function

POST /api/v1/external-functions

Creates a new external function. Required parameters include the external function's name and description. After you create your external function, you can attach it to an agent. To update an external function, see the [Update External Function](/api-reference/endpoints/v1/external-functions/patch) endpoint.

# Introduction

HTTP API endpoints

Play.ai provides a simple, easy-to-use HTTP API to create and manage AI Agents. After you create your agent, you can start a conversation using our [Websocket API](/api-reference/websocket), or try it out through our web interface at `https://play.ai/agent/`.
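As a sketch of how a Create Agent call might be assembled with plain `fetch`: note that the header names (`AUTHORIZATION`, `X-USER-ID`) and the body field names below are illustrative assumptions, not a verified contract — consult the endpoint reference for the exact shapes.

```javascript
// Assemble a Create Agent request. Illustrative sketch only:
// header names and body field names are assumptions.
function buildCreateAgentRequest({ userId, apiKey, name, prompt }) {
  if (!name || !prompt) {
    throw new Error('The agent name and prompt are both required.');
  }
  return {
    url: 'https://api.play.ai/api/v1/agents',
    options: {
      method: 'POST',
      headers: {
        AUTHORIZATION: apiKey, // assumed header name
        'X-USER-ID': userId, // assumed header name
        'Content-Type': 'application/json',
      },
      // 'displayName' is an assumed field name
      body: JSON.stringify({ displayName: name, prompt }),
    },
  };
}

// Usage:
// const { url, options } = buildCreateAgentRequest({
//   userId: 'yourUserId',
//   apiKey: 'yourApiKey',
//   name: 'Support Agent',
//   prompt: 'You help customers with billing questions.',
// });
// const agent = await fetch(url, options).then((res) => res.json());
```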
## Authentication

All API endpoints are authenticated using a User ID and API Key. After you have created an account and logged in, you can get your API Key from the [For Developers](https://play.ai/developers) page.

# Websocket API

Enhance your app with our audio-in, audio-out API, enabling seamless, natural conversations with your PlayAI agent. Transform your user experience with the power of voice.

To use our WebSocket API, you will need:

* A [Play.ai account](https://play.ai/pricing)
* An [API key to authenticate](https://play.ai/developers) with the Play.ai API
* The agent ID of a Play.ai Agent (created via our [Web UI](https://play.ai/my-agents) or our [Create Agent endpoint](/api-reference/endpoints/v1/agents/post))

To fully leverage our WebSocket API, the steps are:

* Connect to our `wss://api.play.ai/v1/talk/{agentId}` URL
* Send a `{"type":"setup","apiKey":"yourKey"}` message as the first message
* Send audio input as a base64 encoded string in `{"type":"audioIn","data":"base64Data"}` messages
* Receive audio output in `{"type":"audioStream","data":"base64Data"}` messages

# Establishing a Connection

To initiate a conversation, establish a WebSocket connection to our `talk` URL, including the `agentId` as a path parameter:

```text
wss://api.play.ai/v1/talk/{agentId}
```

For example, assuming `Agent-XP5tVPa8GDWym6j` is the ID of an agent you have created via our [Web UI](https://play.ai/my-agents) or through our [Create Agent endpoint](/api-reference/endpoints/v1/agents/post), the WebSocket URL would look like:

```js
const myWs = new WebSocket('wss://api.play.ai/v1/talk/Agent-XP5tVPa8GDWym6j');
```

# Initial Setup Message

Before you can start sending and receiving audio data, you must first send a `setup` message to authenticate and configure your session.
```mermaid
graph TB
  subgraph "conversation"
    C --> D[Send 'audioIn' messages containing your user's audio data]
    D --> C
  end
  B --> C[Receive 'audioStream' messages containing Agent's audio data]
  subgraph setup
    A[Establish WebSocket Connection] --> B[Send 'setup' message]
  end
```

The only required field in the setup message is the `apiKey`, assuming you are comfortable with the default values for the audio input and output formats. In that case, your first setup message can be as simple as:

```json
{ "type": "setup", "apiKey": "yourKey" }
```

Get your API Key at our [Developers](https://play.ai/developers) page.

Code example:

```js
const myWs = new WebSocket('wss://api.play.ai/v1/talk/Agent-XP5tVPa8GDWym6j');
myWs.onopen = () => {
  console.log('connected!');
  myWs.send(JSON.stringify({ type: 'setup', apiKey: 'yourApiKey' }));
};
```

## Setup Options

The setup message configures important details of the session, including the format/encoding of the audio that you intend to send us and the format that you expect to receive.

Example setup messages with various options:

```json
// mulaw 16 kHz as input
{ "type": "setup", "apiKey": "...", "inputEncoding": "mulaw", "inputSampleRate": 16000 }

// 24 kHz mp3 output
{ "type": "setup", "apiKey": "...", "outputFormat": "mp3", "outputSampleRate": 24000 }

// mulaw 8 kHz in and out
{ "type": "setup", "apiKey": "...", "inputEncoding": "mulaw", "inputSampleRate": 8000, "outputFormat": "mulaw", "outputSampleRate": 8000 }
```

The following fields are available for configuration:
| Property | Accepted values | Description | Default value |
| -------- | --------------- | ----------- | ------------- |
| `type` (required) | `"setup"` | Specifies that the message is a setup command. | - |
| `apiKey` (required) | `string` | [Your API Key](https://play.ai/developers). | - |
| `outputFormat` (optional) | `"mp3"`, `"raw"`, `"wav"`, `"ogg"`, `"flac"`, `"mulaw"` | The format of audio you want our agent to output in the `audioStream` messages:<br />`mp3` = 128kbps MP3<br />`raw` = PCM_FP32<br />`wav` = 16-bit (uint16) PCM<br />`ogg` = 80kbps OGG Vorbis<br />`flac` = 16-bit (int16) FLAC<br />`mulaw` = 8-bit (uint8) headerless PCM | `"mp3"` |
| `outputSampleRate` (optional) | `number` | The sample rate of the audio you want our agent to output in the `audioStream` messages. | `44100` |
| `inputEncoding` (optional) | For formats with media containers: `"media-container"`.<br />For headerless formats: `"mulaw"`, `"linear16"`, `"flac"`, `"amr-nb"`, `"amr-wb"`, `"opus"`, `"speex"`, `"g729"` | The encoding of the audio you intend to send in the `audioIn` messages. If you are sending audio in formats that use media containers (that is, audio with headers, such as `mp4`, `m4a`, `mp3`, `ogg`, `flac`, `wav`, `mkv`, `webm`, `aiff`), use `"media-container"` (or omit the field, since it is the default); our servers will then process the audio based on its headers. If, on the other hand, you will send audio in a headerless format, you must specify that format explicitly, e.g. by setting `inputEncoding` to `"mulaw"`, `"flac"`, etc. | `"media-container"` |
| `inputSampleRate` (optional) | `number` | The sample rate of the audio you intend to send. Required if you specify an `inputEncoding` other than `"media-container"`; optional otherwise. | - |
| `customGreeting` (optional) | `string` | Your agent will say this message to start every conversation. This overrides the agent's greeting. | - |
| `prompt` (optional) | `string` | Instructions for how your AI should behave and interact in conversation. This is appended to the agent's prompt. | `""` |
| `continueConversation` (optional) | `string` | To continue a conversation from a previous session, pass its `conversationId` here. The agent will pick up the conversation where it left off. | - |


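The table's constraint on `inputSampleRate` (required whenever a headerless `inputEncoding` is chosen) can be enforced client-side before opening the socket. The helper below is a minimal sketch of ours, not part of any Play.ai SDK:

```javascript
// Build and validate a 'setup' message per the options table above.
// 'buildSetupMessage' is an illustrative helper, not a Play.ai API.
function buildSetupMessage(apiKey, options = {}) {
  const { inputEncoding = 'media-container', inputSampleRate, ...rest } = options;
  if (inputEncoding !== 'media-container' && inputSampleRate == null) {
    throw new Error(
      `inputSampleRate is required when inputEncoding is "${inputEncoding}"`
    );
  }
  const message = { type: 'setup', apiKey, inputEncoding, ...rest };
  if (inputSampleRate != null) message.inputSampleRate = inputSampleRate;
  return message;
}

// Example: 8 kHz mulaw in and out, matching the third sample message above.
const setup = buildSetupMessage('yourApiKey', {
  inputEncoding: 'mulaw',
  inputSampleRate: 8000,
  outputFormat: 'mulaw',
  outputSampleRate: 8000,
});
// myWs.send(JSON.stringify(setup));
```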
# `audioIn`: Sending Audio Input

After the setup, you can send audio input in the form of an `audioIn` message. The audio must be sent as a base64 encoded string in the `data` field. The message format is:

```json
{ "type": "audioIn", "data": "<base64-encoded audio>" }
```

The audio you send must match the `inputEncoding` and `inputSampleRate` you configured in the setup options.

## Sample Code for Sending Audio

Assuming `myWs` is a WebSocket connected to our `/v1/talk` endpoint, the sample code below sends audio directly from the browser:

```javascript
const stream = await navigator.mediaDevices.getUserMedia({
  audio: {
    channelCount: 1,
    echoCancellation: true,
    autoGainControl: true,
    noiseSuppression: true,
  },
});

const mediaRecorder = new MediaRecorder(stream);
mediaRecorder.ondataavailable = async (event) => {
  const base64Data = await blobToBase64(event.data);
  // Relevant:
  myWs.send(JSON.stringify({ type: 'audioIn', data: base64Data }));
};

async function blobToBase64(blob) {
  const reader = new FileReader();
  reader.readAsDataURL(blob);
  return new Promise((resolve) => {
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
  });
}
```

# `audioStream`: Receiving Audio Output

Audio output from the server is received in `audioStream` messages. The message format is:

```json
{ "type": "audioStream", "data": "<base64-encoded audio>" }
```

The audio you receive will match the `outputFormat` and `outputSampleRate` you configured in the setup options.

## Sample Code for Receiving Audio

```javascript
myWs.on('message', (message) => {
  const event = JSON.parse(message);
  if (event.type === 'audioStream') {
    // deserialize event.data from a base64 string to binary
    // enqueue/play the binary data at your player
    return;
  }
});
```

# Voice Activity Detection: `voiceActivityStart` and `voiceActivityEnd`

During the conversation, you will receive `voiceActivityStart` and `voiceActivityEnd` messages indicating the detection of speech activity in the audio input.
These messages help in understanding when the user starts and stops speaking.

When our service detects that the user has started to speak, it emits a `voiceActivityStart` event. Such a message has the format:

```json
{ "type": "voiceActivityStart" }
```

It is up to you to decide how to react to this event. We highly recommend you stop playing whatever audio is currently playing, since `voiceActivityStart` generally indicates that the user wants to interrupt the agent.

Similarly, when our service detects that the user has stopped speaking, it emits a `voiceActivityEnd` event:

```json
{ "type": "voiceActivityEnd" }
```

# `newAudioStream`: Handling New Audio Streams

A `newAudioStream` message indicates the start of the audio of a new response. It is recommended to clear your player buffer and start playing the new stream content upon receiving this message. This message contains no additional fields.

# Error Handling

Errors from the server are sent as messages of type `error`, with a numeric code and a message in the following format:

```json
{ "type": "error", "code": <number>, "message": "<string>" }
```

The table below provides a quick reference to the various error codes and their corresponding messages for the Agent Websocket API.

| Error Code | Error Message |
| ---------- | ------------- |
| 1001 | Invalid authorization token. |
| 1002 | Invalid agent id. |
| 1003 | Invalid authorization credentials. |
| 1005 | Not enough credits. |
| 4400 | Invalid parameters. Indicates the message sent to the server failed to match the expected format. Double check the logic and try again. |
| 4401 | Unauthorized. Invalid authorization token or invalid authorization credentials for the specified agent. |
| 4429 | You have reached the maximum number of concurrent connections allowed by your current plan. Please consider upgrading your plan or reducing the number of active connections to continue. |
| 4500 | Generic error code for internal errors, such as failures to generate responses. Generally, the user is not at fault when these happen. An appropriate reaction is to wait a few moments and try again. If the problem persists, contacting support is advised. |

***

This documentation covers the essential aspects of interacting with the PlayAI Websocket API for agent conversations. Ensure that your implementation handles the specified message types and follows the outlined protocols for a seamless integration.

# Crawl a Website

## Introduction

The play.ai web embed can be used to crawl a website, answer questions about its content, and navigate the user to the relevant pages.

## Crawl a website

On the last page of the create/edit agent flow, check the "Crawl website" checkbox and enter the URL of the website you want to crawl.

Stay on the page until the crawling process is complete. This can take a while if the website has many pages.

Your embed should now be able to answer questions about the crawled website and navigate the user to relevant pages on the site.

## Delete a crawled website

Go into the "Knowledge" tab of the create/edit agent flow and click the trash icon next to the website you want to delete under the "Custom Knowledge" section.

# Embedding an Agent on your Website

## Introduction

The play.ai web embed allows you to integrate AI-powered interactions into your web application. This guide walks you through setting up and using the play.ai web embed in your React application.

## Installation

First, install the `@play-ai/agent-web-sdk` package:

```bash
npm install @play-ai/agent-web-sdk
```

## Basic Usage

Here's a step-by-step guide to implementing the play.ai web embed in your React application.

If you haven't already, create an agent on [https://play.ai](https://play.ai).
```typescript
import { useEffect, useState } from 'react';
import { open as openEmbed } from '@play-ai/agent-web-sdk';
```

```typescript
const webEmbedId = 'YOUR_WEB_EMBED_ID';
```

Replace `YOUR_WEB_EMBED_ID` with the actual ID provided by play.ai, which can be found in the Agent editor.

```typescript
const events = [
  {
    name: 'change-text',
    when: 'The user says what they want to change the text on the screen to',
    data: {
      text: { type: 'string', description: 'The text to change to' },
    },
  },
] as const;
```

Custom events are optional, but they allow you to define custom behavior for your agent, letting it execute JavaScript and interact with the page. This example defines a single event called "change-text" that is triggered when the user specifies the text they want to display.

```typescript
const onEvent = (event: any) => {
  console.log('onEvent: ', event);
  if (event.name === 'change-text') {
    setText(event.data.text);
  }
};
```

If you define custom events, you must implement an event handler to handle them. This handler logs the event and updates the text state when a "change-text" event is received.

```typescript
useEffect(() => {
  openEmbed(webEmbedId, { events, onEvent });
}, []);
```

This `useEffect` hook initializes the web embed when the component mounts.

```tsx
return <>{text}</>;
```

This example renders the current text on the page.
## Full Example

Here's the complete example of a React component using the play.ai web embed:

```tsx
'use client';

import { useEffect, useState } from 'react';
import { open as openEmbed } from '@play-ai/agent-web-sdk';

const webEmbedId = 'YOUR_WEB_EMBED_ID';

export default function Home() {
  const [text, setText] = useState('Change this text');

  const events = [
    {
      name: 'change-text',
      when: 'The user says what they want to change the text on the screen to',
      data: {
        text: { type: 'string', description: 'The text to change to' },
      },
    },
  ] as const;

  const onEvent = (event: any) => {
    console.log('onEvent: ', event);
    if (event.name === 'change-text') {
      setText(event.data.text);
    }
  };

  useEffect(() => {
    openEmbed(webEmbedId, { events, onEvent });
  }, []);

  return <>{text}</>;
}
```

View the live example [here](https://play.ai/embed/demo/change-text). View the code [here](https://github.com/playht/web-embed-examples/blob/main/change-text/nextjs/app/page.tsx).

## Customization

You can customize the behavior of your AI agent by modifying the agent greeting and prompt. In this example, the agent is instructed to change the text on the page and to end the call immediately after doing so.

## Other examples

* [View other examples](/documentation/client-sdks/web-embed-examples)

## Conclusion

This guide demonstrated how to integrate the play.ai web embed into your React application. You can extend this functionality by adding more events and customizing the agent's behavior to suit your specific needs.

## Next Steps

* [Learn more about custom events](/documentation/client-sdks/web-embed)

# Web Embed

API for embedding a play.ai agent on your website

This document provides detailed information about the key components of the play.ai web embed API: the events array, the onEvent handler, and the openEmbed function.

## Installation

```bash
npm install @play-ai/agent-web-sdk
```

## `openEmbed` Function

The `openEmbed` function initializes and opens the play.ai web embed on the webpage. It is imported from the `@play-ai/agent-web-sdk` package as follows:

```typescript
import { open as openEmbed } from '@play-ai/agent-web-sdk';
```

It has the following signature:

```typescript
function openEmbed(
  webEmbedId: string,
  options: {
    events?: ReadonlyArray<Event>;
    onEvent?: OnEventHandler;
    prompt?: string;
    customGreeting?: string;
  },
): { setMinimized: (minimize?: boolean) => void };
```

### Parameters

* `webEmbedId`: A string representing your unique web embed ID provided by play.ai.
* `options`: An object containing:
  * `events?`: The array of custom events your application can handle.
  * `onEvent?`: The event handler function.
  * `customGreeting?`: A custom greeting that replaces the default greeting.
  * `prompt?`: A prompt to give the agent in addition to the default prompt. Use this to give context that is page-specific or user-specific, e.g. "The form fields on the current page are X, Y, and Z".

### Return type

* `setMinimized(minimize?: boolean)`: A function that allows you to toggle the minimized state of the web embed. Pass `true` to minimize, `false` to maximize, or `undefined` to toggle the minimized state.

### Example

```javascript
import { open as openEmbed } from '@play-ai/agent-web-sdk';

// ... (events and onEvent definitions) ...

useEffect(() => {
  const { setMinimized } = openEmbed(webEmbedId, {
    events,
    onEvent,
    customGreeting: "Let's fill out this form together!",
    prompt: 'The form fields on the current page are name, email, and shipping address',
  });
}, []);
```

In this example, the `openEmbed` function is called inside a `useEffect` hook to initialize the web embed when the component mounts.

## Events Array

The events array defines the custom events that your application can handle. Each event in the array is an object with the following structure:

```typescript
type Event = {
  name: string;
  when: string;
  data: {
    [key: string]: {
      type: string;
      description?: string;
      values?: string[];
    };
  };
};
```

### Properties

* `name`: A string that uniquely identifies the event.
* `when`: A string describing the condition that triggers this event.
* `data`: An object describing the data that should be passed to the event handler. Each key in this object represents the name of a data field, and its value is an object with:
  * `type`: The data type of the field (currently supports `string`, `number`, `boolean`, and `enum`).
  * `description?`: A brief description of what this data field represents.
  * `values?`: An array of strings representing the possible values for the field if the type is `enum`.
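As an illustration of the `enum` type described above, the hypothetical event below (the event and field names are ours, not part of the SDK) restricts the agent to a fixed set of values:

```typescript
// Hypothetical 'scroll-page' event: the agent may only pass 'up' or 'down'.
const scrollEvents = [
  {
    name: 'scroll-page',
    when: 'The user asks to scroll the page up or down',
    data: {
      direction: {
        type: 'enum',
        description: 'Which way to scroll',
        values: ['up', 'down'],
      },
    },
  },
] as const;
```

The matching `onEvent` handler would branch on `event.name === 'scroll-page'` and read `event.data.direction`.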
### Example

```typescript
const events = [
  {
    name: 'change-text',
    when: 'The user says what they want to change the text on the screen to',
    data: {
      text: { type: 'string', description: 'The text to change to' },
    },
  },
] as const;
```

## onEvent Handler

The onEvent handler is a function that processes events triggered by the play.ai web embed. It has the following signature:

```typescript
type OnEventHandler = (event: { name: string; data: Record<string, any> }) => void;
```

### Parameters

* `event`: An object containing:
  * `name`: The name of the triggered event (matching an event name from the events array).
  * `data`: An object containing the data associated with the event (matching the data structure from the events array).

### Example

```javascript
const onEvent = (event) => {
  console.log('onEvent: ', event);
  if (event.name === 'change-text') {
    setText(event.data.text);
  }
};
```

In this example, the handler logs all events and updates the text state when a "change-text" event is received.

## Putting It All Together

Here's how these components work together:

1. You define your custom events in the events array.
2. You implement the onEvent handler to process these events.
3. You call the openEmbed function, passing your web embed ID, the events array, the onEvent handler, and optionally customGreeting and prompt.
4. When a user interacts with the AI agent, it may trigger one of your defined events.
5. The onEvent handler receives the triggered event and processes it according to your implementation.

This structure allows for flexible, event-driven interactions between your web application and the play.ai web embed.

[Learn more about how to use the web embed API in this guide](/documentation/client-sdks/embedding-an-agent-on-your-website).

# Web Embed Examples

## Form filling

Assists a user in filling out a form. Showcases the ability to pass a custom prompt to the web embed from JavaScript.
[Live example](https://play.ai/embed/demo/form-filling) · [Code](https://github.com/playht/web-embed-examples/blob/main/form-filling/nextjs/app/page.tsx)

## Minimize web embed

Showcases the ability to minimize the web embed from JavaScript.

[Live example](https://play.ai/embed/demo/minimize) · [Code](https://github.com/playht/web-embed-examples/blob/main/minimize-embed/nextjs/app/page.tsx)

## Image generation

Generates an image based on the user's description.

[Live example](https://play.ai/embed/demo/image-gen) · [Code](https://github.com/playht/web-embed-examples/blob/main/image-gen/nextjs/app/page.tsx)

## Music mood

Plays music based on the user's mood.

[Live example](https://play.ai/embed/demo/music-mood)

## Color math

Game where the user has to guess the color that is created by mixing two other colors.

[Live example](https://play.ai/embed/demo/color-math)

## Color painter

Changes the background color of the webpage based on the user's description.

[Live example](https://play.ai/embed/demo/color-painter)

# Web SDK

Integrate Play.ai Agent into your web application

## Overview

The **Agent Web SDK** is a TypeScript SDK that facilitates real-time, bi-directional audio conversations with your Play.ai Agent via the [WebSocket API](/api-reference/websocket). It takes care of the following:

* WebSocket connection management
* Microphone capture and voice activity detection (VAD)
* Sending user audio to the Agent
* Receiving Agent audio and playing it back in the browser
* Managing event listeners such as user or agent transcripts
* Muting/unmuting the user's microphone
* Hanging up (ending) the agent conversation
* Error handling

This SDK is designed for modern web browsers that support the [Web Audio API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API) and [WebSockets](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket). If you want to integrate Play.ai Agent into a mobile app or a different platform, please refer to the [WebSocket API documentation](/api-reference/websocket).

## Installation

```bash npm
npm install @play-ai/agent-web-sdk
```

```bash yarn
yarn add @play-ai/agent-web-sdk
```

```bash pnpm
pnpm add @play-ai/agent-web-sdk
```

## Create agent

To start a conversation with your agent, first create an agent in the [Play.ai app](https://play.ai/my-agents). Once you have an agent, you can find its agent ID in the agent's "Deploy · Web" section; this ID is required to connect to the agent.

## Basic usage

Below is a simple example illustrating how to initiate a conversation with your agent using the `connectAgent` function:

```ts
import { connectAgent } from '@play-ai/agent-web-sdk';

async function startConversation() {
  try {
    const agentController = await connectAgent('YOUR_AGENT_ID');
    console.log('Connected to agent. Conversation ID:', agentController.conversationId);
    // Use agentController to control the conversation...
  } catch (error) {
    console.error('Failed to start conversation:', error);
  }
}

startConversation();
```

The function `connectAgent` returns a Promise:

* If any error occurs during the connection process, the Promise is rejected.
* When the conversation is successfully established, the Promise resolves to an `AgentConnectionController` object.

## Config

You can customize the agent's configuration by passing an optional `ConnectAgentConfig` object as the second parameter to `connectAgent`.
```ts
const agentController = await connectAgent('YOUR_AGENT_ID', {
  debug: true, // Enable debug logging in the console
  customGreeting: 'Hello, and welcome to my custom agent!', // Override the default greeting
  prompt: 'You are an AI that helps with scheduling tasks.', // Append additional instructions to the agent's prompt
  continueConversation: 'PREVIOUS_CONVERSATION_ID', // Continue a previous conversation
});
```

**Config Options**:

* **`debug`:** Enables debug logging for troubleshooting.
* **`customGreeting`:** Overrides the default greeting used by the agent.
* **`prompt`:** Appends additional instructions to the agent's core prompt.
* **`continueConversation`:** An optional conversation ID to continue a previous conversation.
* **`listeners`:** Attach various listener callbacks (see the [Event listeners](#event-listeners) section).

## Event listeners

Event listeners enable you to handle specific moments during the conversation:

```ts
const agentController = await connectAgent('YOUR_AGENT_ID', {
  listeners: {
    onUserTranscript: (transcript) => console.log(`USER said: "${transcript}".`),
    onAgentTranscript: (transcript) => console.log(`AGENT will say: "${transcript}".`),
    onUserStartedSpeaking: () => console.log('USER started speaking...'),
    onUserStoppedSpeaking: () => console.log('USER stopped speaking.'),
    onAgentDecidedToSpeak: () => console.log('AGENT decided to speak... (not speaking yet, just thinking)'),
    onAgentStartedSpeaking: () => console.log('AGENT started speaking...'),
    onAgentStoppedSpeaking: () => console.log('AGENT stopped speaking.'),
    onHangup: (endedBy) => console.log(`Conversation has been ended by ${endedBy}.`),
    onError: (err) => console.error(err),
  },
});
```

## Mute/unmute

Once you have an active `AgentConnectionController` from `connectAgent`, you can mute or unmute the user's microphone:

```ts
const agentController = await connectAgent('YOUR_AGENT_ID');

agentController.mute(); // The agent won't hear any mic data
agentController.unmute(); // The agent hears the mic data again
```

## Hangup

Use `agentController.hangup()` to end the conversation from the user side.

```ts
const agentController = await connectAgent('YOUR_AGENT_ID');

setTimeout(() => {
  // End the conversation after 60 seconds
  agentController.hangup();
}, 60000);
```

When the conversation ends (either by the user or by the agent), the `onHangup` callback (if provided) is triggered.

## Error handling

Errors can occur at different stages of the conversation:

* Starting the conversation. For example:
  * Microphone permissions denied
  * WebSocket fails to connect or closes unexpectedly
  * Invalid agent ID
* During the conversation. For example:
  * Agent fails to generate a response
  * Internal Agent errors
  * Network issues

Errors that occur before the conversation starts are caught by the `connectAgent` Promise. You can handle these errors in the `catch` block. Errors that occur during the conversation are caught by the `onError` listener.
```ts
import { connectAgent } from '@play-ai/agent-web-sdk';

async function startConversation() {
  try {
    const agentController = await connectAgent('YOUR_AGENT_ID', {
      listeners: {
        onError: (error) => {
          console.error('Error occurred:', error.description);
          if (error.isFatal) {
            // Possibly reconnection logic or UI error message
          }
        },
      },
    });
  } catch (err) {
    console.error('Failed to start the conversation:', err);
  }
}
```

**Error object**:

```ts
interface ErrorDuringConversation {
  description: string; // Human-readable message
  isFatal: boolean; // Whether the error ended the conversation
  serverCode?: number; // If the server gave a specific error code
  wsEvent?: Event; // Low-level WebSocket event
  cause?: Error; // JS error cause
}
```

## Code example