Audio - Submit New Audio File (wav/mp3)
POST {{base_url}}/v1/process/audio
Submit Audio File
The Async Audio API allows you to process an audio file.
It can be utilized for any use case where you have access to recorded audio and want to extract insights and other conversational attributes supported by Symbl's Conversation API.
Use this API to upload your file and generate a conversationId. If you want to append additional audio information to the same conversationId, use the Async Audio Append API instead.
Learn more about the Async Audio API.
Request Body
The request body is the binary payload of the audio file. Note that the body type is binary, which lets you select the file you want to upload directly.
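A minimal sketch of submitting a local WAV file with the Python `requests` library. The `base_url`, access token, and file path are placeholders you must supply; the `x-api-key` header matches the HEADERS table below, and the `Content-Type` value is assumed to follow the file type you upload.

```python
import requests

base_url = "https://api.symbl.ai"        # placeholder for {{base_url}}
access_token = "<your-access-token>"     # placeholder

# Read the recorded audio as raw bytes; the request body is the binary payload itself.
with open("meeting-recording.wav", "rb") as audio_file:
    payload = audio_file.read()

response = requests.post(
    f"{base_url}/v1/process/audio",
    headers={
        "x-api-key": access_token,    # auth header listed under HEADERS below
        "Content-Type": "audio/wav",  # match the type of file you upload (assumption)
    },
    data=payload,
)
response.raise_for_status()
print(response.json())  # expected to contain conversationId and jobId
```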
Query Params
Parameters | Required | Description |
---|---|---|
name | No | Your meeting name. The default name is set to the conversationId. |
webhookUrl | No | Webhook URL to which job status updates are sent. This should be a POST endpoint. |
customVocabulary | No | Contains a list of words and phrases that provide hints to the speech recognition task. |
entities | No | Input custom entities which can be detected in your conversation using the Entities API. See the sketch after this table for an illustrative format. |
detectPhrases | No | Accepted values are true and false. When enabled, actionable phrases are shown for each sentence of the conversation. These sentences can be found in the Conversation API's Messages API. |
enableSeparateRecognitionPerChannel | No | Enables speaker-separated channel audio processing. Accepts true or false. |
channelMetadata | No | This object parameter contains two variables, speaker and channel, to specify which speaker corresponds to which channel. This object only works when the enableSeparateRecognitionPerChannel query param is set to true. |
languageCode | No | The language of the audio to be processed. Check the supported language codes and pass the one that matches your recording. |
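A minimal sketch of passing these query parameters alongside the binary upload, again assuming `requests` and the same placeholder `base_url` and token. The entities and channelMetadata structures shown are illustrative assumptions, and JSON-valued parameters are serialized here as JSON strings; verify the exact schemas against the Entities API and speaker-separation documentation.

```python
import json
import requests

base_url = "https://api.symbl.ai"        # placeholder
access_token = "<your-access-token>"     # placeholder

# Illustrative query parameters; JSON-valued params are serialized as strings here.
params = {
    "name": "Weekly sales sync",
    "webhookUrl": "https://example.com/symbl/webhook",
    "customVocabulary": json.dumps(["Symbl", "conversationId"]),
    # Assumed custom-entity shape; verify against the Entities API docs.
    "entities": json.dumps([
        {"customType": "Company Executives", "value": "marketing director", "text": "marketing director"}
    ]),
    "enableSeparateRecognitionPerChannel": "true",
    # Assumed shape: map each audio channel to a speaker.
    "channelMetadata": json.dumps([
        {"channel": 1, "speaker": {"name": "Agent", "email": "agent@example.com"}},
        {"channel": 2, "speaker": {"name": "Customer", "email": "customer@example.com"}},
    ]),
    "languageCode": "en-US",
}

with open("meeting-recording.wav", "rb") as audio_file:
    response = requests.post(
        f"{base_url}/v1/process/audio",
        headers={"x-api-key": access_token, "Content-Type": "audio/wav"},
        params=params,
        data=audio_file.read(),
    )
response.raise_for_status()
print(response.json())
```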
Response
In the response, a conversationId and a jobId are returned.
The jobId can be used with the Job API to get updates on the job status.
The conversationId can be used with the Conversation API to get all the insights, topics, processed messages, and more.
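A minimal polling sketch for the job status, assuming the Job API is exposed at `GET {{base_url}}/v1/job/{jobId}` and returns the `status` field described under Webhook Payload below; the endpoint path and polling interval are assumptions rather than something this page confirms.

```python
import time
import requests

base_url = "https://api.symbl.ai"        # placeholder
access_token = "<your-access-token>"     # placeholder
job_id = "<jobId-from-the-submit-response>"

# Poll until the job reaches a terminal status (completed or failed).
while True:
    job = requests.get(
        f"{base_url}/v1/job/{job_id}",   # assumed Job API path
        headers={"x-api-key": access_token},
    ).json()
    if job.get("status") in ("completed", "failed"):
        break
    time.sleep(5)  # back off between polls

print(job)
# Once completed, query the Conversation API with the conversationId,
# e.g. GET {base_url}/v1/conversations/{conversationId}/messages
```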
Webhook Payload
The webhookUrl is used to send the status of the job created for the uploaded audio. Every time the status of the job changes, a notification is sent to the webhookUrl.
Field | Description |
---|---|
jobId | ID to be used with Job API |
status | Current status of the job. (Valid statuses - [ scheduled, in_progress, completed, failed ]) |
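A minimal sketch of a receiver for these webhook notifications, assuming Flask and a publicly reachable `/symbl/webhook` route; the payload fields read are exactly the jobId and status described in the table above.

```python
from flask import Flask, request

app = Flask(__name__)

# The webhookUrl you pass when submitting the audio should point at this route.
@app.route("/symbl/webhook", methods=["POST"])
def symbl_webhook():
    payload = request.get_json(force=True)
    # Expected fields per the Webhook Payload table: jobId and status.
    print(f"job {payload.get('jobId')} is now {payload.get('status')}")
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```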
Request Params
Key | Datatype | Required | Description |
---|---|---|---|
name | string | | Your meeting name. The default name is set to the conversationId. |
customVocabulary | string | | Contains a list of words and phrases that provide hints to the speech recognition task. |
confidenceThreshold | string | | Minimum confidence required for an insight to be recognized. The range is 0.0 to 1.0. The default value is 0.5. |
detectEntities | boolean | | If not set to true, the Entities API will not return any entities from the conversation. |
detectPhrases | boolean | | Shows actionable phrases in each sentence of the conversation. These sentences can be found in the Conversation API's Messages API. |
languageCode | string | | The language of the audio. Supported language codes are listed here: https://docs.symbl.ai/docs/async-api/overview/async-api-supported-languages |
mode | string | | 'phone' mode is best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate). 'default' mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate). When you don't pass this parameter, 'default' is selected automatically. |
trackers | string | | List of trackers to detect in the conversation. |
startTime | string | | Start time of the meeting. |
features | string | | Features list, for example ['insights', 'callScore']. |
metadata | string | | Metadata for Symbl features. |
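A minimal sketch combining several of the request params above, assuming they are passed as query parameters on the same POST request. The tracker structure (a name plus a vocabulary of phrases), the metadata contents, and the JSON serialization of list- and object-valued params are illustrative assumptions.

```python
import json
import requests

base_url = "https://api.symbl.ai"        # placeholder
access_token = "<your-access-token>"     # placeholder

params = {
    "name": "Support call 1042",
    "confidenceThreshold": 0.6,          # insights below this confidence are dropped
    "detectEntities": "true",
    "detectPhrases": "true",
    "languageCode": "en-US",
    "mode": "phone",                     # 8 kHz phone-call audio
    # Assumed tracker shape: a name plus a vocabulary of phrases to detect.
    "trackers": json.dumps([
        {"name": "Pricing", "vocabulary": ["pricing", "how much does it cost"]}
    ]),
    "features": json.dumps(["insights", "callScore"]),        # assumed serialization
    "metadata": json.dumps({"salesStage": "Prospecting"}),    # illustrative metadata
}

with open("support-call.mp3", "rb") as audio_file:
    response = requests.post(
        f"{base_url}/v1/process/audio",
        headers={"x-api-key": access_token, "Content-Type": "audio/mpeg"},
        params=params,
        data=audio_file.read(),
    )
print(response.json())
```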
HEADERS
Key | Datatype | Required | Description |
---|---|---|---|
x-api-key | string | Yes | Your Symbl access token, used to authenticate the request. |
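A minimal sketch of obtaining the token to send in the x-api-key header, assuming Symbl's OAuth endpoint at https://api.symbl.ai/oauth2/token:generate and application credentials; confirm the exact flow and field names in Symbl's authentication documentation.

```python
import requests

# Assumed authentication endpoint and credential field names; verify against Symbl's auth docs.
auth_response = requests.post(
    "https://api.symbl.ai/oauth2/token:generate",
    json={
        "type": "application",
        "appId": "<your-app-id>",
        "appSecret": "<your-app-secret>",
    },
)
auth_response.raise_for_status()
access_token = auth_response.json()["accessToken"]

# Use this token as the x-api-key header on the audio submission requests above.
print(access_token[:16], "...")
```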