The Next.js PromptJob page receives a chatUri prop that points to the Flask aichat route /v1/chat_patient. When a physician submits a question, the page kicks off an agentic PromptJob. The aichat service (via LangChain) calls Azure Functions to fetch each patient's last 12 hours of data from Cosmos DB, plans a podcast script per patient, then synthesizes audio with Gemini text-to-speech. The Functions layer stores podcast metadata in Cosmos DB and the audio in Blob Storage. The page renders an interactive list grouped by patient, showing status/progress chips, transcript excerpts, and an HTML5 audio player, and refreshes as items complete until the job is done.
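For concreteness, here is a minimal sketch of how the page might call the endpoint behind chatUri and type the job handle it gets back. The field names (prompt, window, jobId) follow the diagrams below but are otherwise illustrative, not a confirmed contract.

```ts
// Hypothetical request/response contract for POST /v1/chat_patient;
// field names follow the diagrams below but are otherwise illustrative.
interface ChatPatientRequest {
  prompt: string; // the physician's question
  window: string; // lookback window for patient events, e.g. "12h"
}

interface ChatPatientResponse {
  jobId: string; // PromptJob id the page polls for progress
}

// Kick off the agentic PromptJob via the chatUri prop.
export async function startPromptJob(
  chatUri: string,
  prompt: string
): Promise<ChatPatientResponse> {
  const body: ChatPatientRequest = { prompt, window: "12h" };
  const res = await fetch(chatUri, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`chat_patient failed: ${res.status}`);
  return res.json();
}
```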
```mermaid
flowchart TB
subgraph UI[Next.js PromptJob Page]
Q[Physician Question]
L[List: Patient Podcasts]
P[HTML5 Audio Player]
F[Filters: last 12h]
S[Status/Progress Chips]
end
subgraph AICHAT[Python Flask aichat]
R["/v1/chat_patient"]
AG[LangChain Agent]
T1[Tool:getPatientEvents for 12h]
T2[Tool:planAndDraftScript]
T3[Tool:geminiTTS]
T4[Tool:savePodcastItem]
end
subgraph AFNS[Azure Functions-TypeScript]
FN1[/"GET Patients"/]
FN2[/"GET Events?since=12h"/]
FN3[/"POST PromptJobs"/]
FN4[/"GET PromptJobs/:id"/]
FN5[/"PUT PodcastItems"/]
FN6[/"GET PodcastItems?jobId"/]
end
subgraph DATA[Azure Resources]
COSMOS[(Cosmos DB)]
BLOB[[Blob Storage: audio.mp3]]
end
Q -->|POST prompt, window:12h via chatUri| R
R --> AG
AG -->|invoke| T1 --> FN2 --> COSMOS
AG -->|invoke| T2
AG -->|invoke| T3 --> GEM[Gemini TTS]
T3 -->|store audio| BLOB
AG -->|invoke| T4 --> FN5 --> COSMOS
R -->|create job| FN3 --> COSMOS
UI <-->|poll| FN4
UI -->|fetch list| FN6 --> COSMOS
COSMOS -->|podcast metadata| UI
BLOB -->|audioUrl via SAS| UI
UI -.-> L
UI -.-> P
UI -.-> S
UI -.-> F
```
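As one example of the Functions layer above, here is a hedged sketch of the GET Events?since=12h handler in the Azure Functions v4 TypeScript model. The Cosmos database/container names (clinical, events) and the timestamp/patientId fields are assumptions, not taken from the source.

```ts
import { app, HttpRequest, HttpResponseInit } from "@azure/functions";
import { CosmosClient } from "@azure/cosmos";

// Assumed names: database "clinical", container "events", and the
// timestamp/patientId fields are illustrative, not from the source.
const cosmos = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);
const events = cosmos.database("clinical").container("events");

app.http("getEvents", {
  methods: ["GET"],
  route: "events",
  handler: async (req: HttpRequest): Promise<HttpResponseInit> => {
    // Translate a window like "12h" into an ISO lower bound for the query.
    const hours = parseInt(req.query.get("since") ?? "12h", 10) || 12;
    const cutoff = new Date(Date.now() - hours * 3_600_000).toISOString();

    const { resources } = await events.items
      .query<{ patientId: string }>({
        query: "SELECT * FROM e WHERE e.timestamp >= @cutoff",
        parameters: [{ name: "@cutoff", value: cutoff }],
      })
      .fetchAll();

    // groupBy=patient: bucket the events by patientId before returning.
    const byPatient: Record<string, unknown[]> = {};
    for (const e of resources) (byPatient[e.patientId] ??= []).push(e);
    return { jsonBody: byPatient };
  },
});
```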
```mermaid
flowchart LR
subgraph Page[PromptJobPage.tsx]
HOOK0["useInit(chatUri, window=12h)"]
HOOK1["usePromptJob(chatUri)"]
HOOK2["usePromptJobStatus(jobId)"]
HOOK3["usePatientPodcasts(window=12h)"]
HOOK4[useAudioPlayer]
HOOK5[useFilters]
BAR[FiltersBar]
LIST[PodcastList]
CARD[PatientPodcastCard]
PLAYER[PodcastPlayer]
EMPTY[EmptyState]
end
HOOK0 --> HOOK1
HOOK1 --> HOOK2
HOOK2 --> HOOK3
HOOK5 --> HOOK3
HOOK3 --> LIST
LIST --> CARD
CARD --> PLAYER
PLAYER --> HOOK4
BAR --> HOOK5
subgraph DataLayer
API[apiClient using native fetch]
STATE[useState/useEffect]
end
HOOK1 -->|POST /v1/chat_patient| API
HOOK2 -->|GET /functions/promptjobs/:id| API
HOOK3 -->|GET /functions/podcastItems?since=12h| API
API --> STATE
```
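A sketch of how the usePromptJobStatus hook from the diagram above could be built on the plain useState/useEffect + fetch DataLayer shown. The 3-second interval, route shape, and status values are assumptions.

```ts
import { useEffect, useState } from "react";

// Hypothetical shape of the document behind GET /functions/promptjobs/:id.
interface PromptJobStatus {
  jobId: string;
  status: "queued" | "running" | "complete" | "failed";
  progress: { completed: number; total: number };
}

// Polls the Functions endpoint until the job reaches a terminal state.
export function usePromptJobStatus(jobId: string | null, intervalMs = 3000) {
  const [job, setJob] = useState<PromptJobStatus | null>(null);

  useEffect(() => {
    if (!jobId) return;
    let cancelled = false;
    const timer = setInterval(async () => {
      const res = await fetch(`/functions/promptjobs/${jobId}`);
      if (!res.ok || cancelled) return;
      const next: PromptJobStatus = await res.json();
      setJob(next);
      if (next.status === "complete" || next.status === "failed") {
        clearInterval(timer); // stop polling once the job is terminal
      }
    }, intervalMs);
    return () => {
      cancelled = true;
      clearInterval(timer);
    };
  }, [jobId, intervalMs]);

  return job;
}
```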
```mermaid
sequenceDiagram
autonumber
actor Physician
participant Page as Next.js PromptJobPage
participant AIC as Flask aichat (/v1/chat_patient)
participant AG as LangChain Agent
participant AF as Azure Functions (TS)
participant DB as Cosmos DB
participant BL as Blob Storage
participant GM as Gemini TTS
Physician->>Page: Type question & Submit
Note over Page: chatUri prop points to /v1/chat_patient
Page->>AIC: POST /v1/chat_patient { prompt, window: "12h" }
AIC->>AF: POST /promptjobs (create)
AF->>DB: Insert PromptJob { jobId, status: queued }
AF-->>AIC: 201 { jobId }
AIC->>AG: Start agent(jobId, window=12h)
AG->>AF: GET /events?since=12h&groupBy=patient
AF->>DB: Query events (labs, vitals, notes)
DB-->>AF: Events by patient
AF-->>AG: Events payload
loop For each patient
AG->>AG: Plan outline + draft script
AG->>GM: Synthesize(script, voice)
GM-->>AG: Audio bytes
AG->>BL: PUT /blobs/podcasts/{jobId}/{patientId}.mp3
BL-->>AG: audioUrl (SAS)
AG->>AF: PUT /podcastItems { jobId, patientId, audioUrl, transcript, status: ready }
AF->>DB: Upsert PodcastItem
AG->>AF: PUT /promptjobs/{jobId}/progress { completed, total }
AF->>DB: Update PromptJob
end
AG-->>AIC: Completed
AIC->>AF: PUT /promptjobs/{jobId} { status: complete }
AF->>DB: Mark complete
loop UI refresh cycle (poll until complete)
Page->>AF: GET /promptjobs/{jobId}
AF-->>Page: { status, progress }
Page->>AF: GET /podcastItems?jobId&since=12h
AF->>DB: Query podcast items
DB-->>AF: Results (with audioUrl)
AF-->>Page: PodcastItems[]
end
Physician->>Page: Click play on a patient
Page->>BL: GET audioUrl (SAS)
BL-->>Page: MP3 stream
Page-->>Physician: Audio playback + transcript + controls
```
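The audioUrl (SAS) returned to the agent and later consumed by the player implies that some layer mints a read-scoped SAS per blob. Below is a minimal sketch with @azure/storage-blob, assuming a "podcasts" container, shared-key credentials from environment variables, and a one-hour expiry; none of those specifics come from the diagrams.

```ts
import {
  BlobSASPermissions,
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
} from "@azure/storage-blob";

// Assumed account/key env vars and "podcasts" container; the diagrams only
// show that a SAS-scoped audioUrl is handed to the agent and the page.
const account = process.env.STORAGE_ACCOUNT!;
const credential = new StorageSharedKeyCredential(
  account,
  process.env.STORAGE_KEY!
);

// Mint a read-only SAS URL for one patient's podcast blob, valid for 1 hour.
export function getAudioSasUrl(jobId: string, patientId: string): string {
  const containerName = "podcasts";
  const blobName = `${jobId}/${patientId}.mp3`;
  const sas = generateBlobSASQueryParameters(
    {
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse("r"),
      expiresOn: new Date(Date.now() + 60 * 60 * 1000),
    },
    credential
  ).toString();
  return `https://${account}.blob.core.windows.net/${containerName}/${blobName}?${sas}`;
}
```

Returning a time-boxed, read-only SAS keeps the audio container private while letting the HTML5 player stream the MP3 directly from Blob Storage without proxying bytes through the Functions layer.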