This endpoint allows you to diarize a note based on the provided audio file.
Requests use multipart/form-data; responses are application/json.
curl -X 'POST' \
  'https://api.neurocare.ai/api/v1/note/diarize/web/?user_id=cwfqwfq&email=wqfqwf' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -H 'x-api-key: c0b38d8c-7004-4d16-95b9-98934d3a0775' \
  -F '<field_name>=@<audio_file.webm>;type=audio/webm'
The `x-api-key` provided in the example code above is a generic placeholder. Please substitute it with the specific `x-api-key` provided to you via email.
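For reference, the multipart request the curl command sends can be sketched in Python with only the standard library. The form field name (`file`), the filename, and the placeholder API key below are assumptions, not confirmed by the API reference, so check them against your own deployment:

```python
import uuid

def build_multipart(field_name: str, filename: str, content_type: str, data: bytes):
    """Assemble a multipart/form-data body by hand (stdlib only)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n"
        "\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    headers = {
        "accept": "application/json",
        # The boundary in the header must match the one used in the body.
        "Content-Type": f"multipart/form-data; boundary={boundary}",
        "x-api-key": "<your-api-key>",  # placeholder -- use the key emailed to you
    }
    return headers, body

# A few bytes standing in for real WebM audio data.
headers, body = build_multipart("file", "recording.webm", "audio/webm", b"\x1aE\xdf\xa3")
```

In practice you would hand `headers` and `body` to an HTTP client of your choice; the sketch only shows how the pieces of the request fit together.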
application/json
{
"note_id": "string",
"user_id": "string",
"note_title": "string",
"date_modified": "2024-04-17T15:34:42.190Z",
"is_starred": false,
"is_deleted": false,
"raw_note": {},
"ai_improved_note": "This is the field of interest. The SOAP note will go here. You will need to parse the desired sections (HPI, Subjective, Objective, Assessment and Plan, etc.) out using regex or a simple string-splitting mechanism",
"icd10_codes": [
{
"additionalProp1": "string",
"additionalProp2": "string",
"additionalProp3": "string"
}
],
"previous_soap_notes": []
}
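Since `ai_improved_note` arrives as a single block of text, splitting it into sections might look like the sketch below. The headings in the pattern and the sample note are assumptions; adjust them to the format of the notes you actually receive:

```python
import re

def split_soap_note(note: str) -> dict:
    """Split a SOAP note into sections keyed by heading.

    Assumes headings such as "Subjective:", "Objective:", "Assessment:",
    "Plan:" (or "HPI:") appear at the start of a line.
    """
    pattern = r"^(HPI|Subjective|Objective|Assessment|Plan)\s*:"
    parts = re.split(pattern, note, flags=re.MULTILINE | re.IGNORECASE)
    # re.split with a capture group returns
    # [preamble, heading1, body1, heading2, body2, ...]
    sections = {}
    for heading, body in zip(parts[1::2], parts[2::2]):
        sections[heading.title()] = body.strip()
    return sections

# A made-up note in the assumed format.
note = ("Subjective:\nPatient reports cough.\n"
        "Objective:\nTemp 37.2 C.\n"
        "Assessment:\nViral URI.\n"
        "Plan:\nRest and fluids.")
sections = split_soap_note(note)
```

A regex split keyed on headings is usually more robust than fixed string offsets, since section lengths vary from note to note.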
{"detail": "Not found"}
application/json
{
"detail": [
{
"loc": ["string", 0],
"msg": "string",
"type": "string"
}
]
}
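When a request fails validation, the `detail` array in the schema above can be flattened into readable messages. A minimal sketch (the error shape follows the schema shown; the helper name is ours):

```python
def format_validation_errors(detail: list) -> list:
    """Turn FastAPI-style validation errors into human-readable strings.

    Each entry has a "loc" path (e.g. ["body", "file"]), a "msg", and a
    "type"; we join the path with dots and prepend it to the message.
    """
    return [
        f'{".".join(str(part) for part in err["loc"])}: {err["msg"]}'
        for err in detail
    ]

# A made-up 422 payload in the documented shape.
errors = [
    {"loc": ["body", "file"], "msg": "field required", "type": "value_error.missing"},
]
messages = format_validation_errors(errors)
```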
{"detail": "Not authenticated"}
BitBucket Link:
The detailed descriptions of the web-based Angular and React snippets are as follows:
The AppComponent
in this Angular application serves to control and manage audio recording operations, as well as interact with an external API to handle recorded data. It features reactive UI changes to provide feedback to the user based on the recording state. Below is the detailed documentation of its setup and functionalities.
- Selector: `app-root`
- Imports:
  - `RouterOutlet` from `@angular/router` for routing capabilities.
  - `HttpClientModule` and `HttpClient` from `@angular/common/http` for making HTTP requests.
  - `CommonModule` from `@angular/common` for common Angular directives and pipes.
- Template: `./app.component.html`
- Styles: `./app.component.css`
- `title`: static title string `'recoding-button'`.
- `mediaRecorder`: holds the `MediaRecorder` object used to control audio recording; optional and may be undefined.
- `chunks`: an array that stores chunks of audio data captured during recording.
- `isRecording`: boolean flag that tracks whether recording is in progress.
- `newOnScreen`: boolean flag that manages the initial UI state.
- Constructor: injects `HttpClient` to facilitate HTTP requests.

toggleRecording()

This method toggles the recording state. It handles starting and stopping of the audio recording using the `MediaRecorder` API, and manages UI updates based on the recording state.
- Start: requests microphone access and initializes a new `MediaRecorder`.
- Stop: stops the `MediaRecorder`, combines the captured audio chunks into a `Blob`, then wraps it in a `FormData` object to be sent via HTTP POST.
- Errors: captures and logs errors, particularly when access to the microphone is denied or fails.
The template contains a button that toggles the recording state visually, using dynamic Angular templates (`ngIf`, `ng-template`) to display different icons based on the recording state. The initial UI state is controlled by the `newOnScreen` flag. Below the button, a paragraph dynamically displays the current status ('Active', 'Recording', 'Paused') based on the component's state.
Place this component in any part of your Angular application where you need audio recording functionality, ensuring the necessary assets (images) and endpoints are correctly set up and accessible.
Refer to `app.component.css` for specific styling details. Ensure loader animations and images are styled to match the application's design requirements.
This component serves as a self-contained unit for managing audio interactions within your Angular application, providing robust functionality paired with a reactive and user-friendly interface.
BitBucket Link:
This documentation provides an overview of integrating an audio recording functionality using HTML, CSS, and JavaScript. The application allows users to record audio through their web browser's microphone and then send the recorded audio to a server for processing.
- `index.html` - the HTML file that hosts the recording button and status icon.
- `styles.css` - the stylesheet for styling the recording button and icons.
- `script.js` - the JavaScript file that handles the audio recording logic. It uses the `MediaRecorder` API to capture audio from the user's microphone and stores chunks of the audio data in an array. When recording stops, the chunks are combined into a `Blob` and sent to the server using a `POST` request with the necessary user information.
request with necessary user information.pip3 install twilio
import base64
import json
import logging

import uvicorn
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
HTTP_SERVER_PORT = 5000

# FastAPI has no built-in `app.logger`; use a standard-library logger instead.
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)


@app.websocket("/media")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    logger.info("Connection accepted")
    has_seen_media = False
    message_count = 0
    try:
        while True:
            # receive_text() raises WebSocketDisconnect when the client hangs up.
            message = await websocket.receive_text()
            data = json.loads(message)
            if data['event'] == "connected":
                logger.info(f"Connected Message received: {message}")
            elif data['event'] == "start":
                logger.info(f"Start Message received: {message}")
            elif data['event'] == "media":
                if not has_seen_media:
                    logger.info(f"Media message: {message}")
                    payload = data['media']['payload']
                    logger.info(f"Payload is: {payload}")
                    chunk = base64.b64decode(payload)
                    logger.info(f"That's {len(chunk)} bytes")
                    logger.info("Additional media messages from WebSocket are being suppressed....")
                    has_seen_media = True
            elif data['event'] == "closed":
                logger.info(f"Closed Message received: {message}")
                break
            message_count += 1
        # Only close explicitly when we exited the loop ourselves; after a
        # disconnect the socket is already closed.
        await websocket.close()
    except WebSocketDisconnect:
        logger.info("Connection closed by the client.")
    except Exception as e:
        logger.error(f"Error: {e}")
    finally:
        logger.info(f"Connection closed. Received a total of {message_count} messages")


if __name__ == '__main__':
    uvicorn.run(app, host="0.0.0.0", port=HTTP_SERVER_PORT)
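For reference, this is roughly what the handler above does with a single `media` event. The sample message below is made up, but it follows the shape the handler expects (an `event` field plus a base64-encoded `media.payload`):

```python
import base64
import json

# A made-up media event in the shape the WebSocket handler parses.
raw = json.dumps({
    "event": "media",
    "media": {"payload": base64.b64encode(b"\x00\x7f\xff\x80").decode()},
})

data = json.loads(raw)
if data["event"] == "media":
    # Decoding the payload yields the raw audio bytes for this frame.
    chunk = base64.b64decode(data["media"]["payload"])
```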
<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Start>
<Stream url="wss://yourdomain/media" />
</Start>
<Dial>+15550123456</Dial>
</Response>
You’ll need to update the above sample in two key ways: