AISummary is implemented as the Next.js route /patientPromptEnum/page.tsx. It renders physician-facing patient summaries by (1) resolving URL parameters (siteId, patientId, userId, promptEnum, chatUri, etc.), (2) fetching Prompt and Category metadata through Refine's DataProvider (which calls Azure Functions backed by Cosmos DB), (3) rendering one or more PatientPrompt widgets, and (4) posting the physician's typed question or selected prompt(s) to the Python Flask AI service at chatUri (/v1/chat_patient). The page accumulates ChatAnswer parts, scores them via the chatAudits endpoint, and displays the content with categories and citations.
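The parameter-resolution step can be sketched as a pure function. This is a minimal, hypothetical sketch of what useAISummaryParams might return; the field names mirror the query parameters listed above, but the real hook's shape and defaults may differ.

```typescript
// Hypothetical shape of the resolved URL parameters (assumption, not the
// actual hook's return type).
interface AISummaryParams {
  siteId: string | null;
  patientId: string | null;
  userId: string | null;
  chatUri: string | null;      // e.g. the Flask /v1/chat_patient endpoint
  promptEnum: string | null;   // preset-prompt mode when present
  interactive: boolean;        // free-text mode when the flag is present
}

// Parse the page's query string into the structure above.
export function parseAISummaryParams(search: string): AISummaryParams {
  const q = new URLSearchParams(search);
  return {
    siteId: q.get("siteId"),
    patientId: q.get("patientId"),
    userId: q.get("userId"),
    chatUri: q.get("chatUri"),
    promptEnum: q.get("promptEnum"),
    interactive: q.has("interactive"),
  };
}
```

In the real page the raw search string would come from Next.js's useSearchParams; a plain string argument keeps the sketch framework-free.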

graph TD
subgraph Browser[Physician Browser]
U[/URL with siteId, patientId, userId, chatUri/]
Page[Next.js /patientPromptEnum]
end
U --> Page
subgraph UI[Next.js App aicopilot]
Page --> Params[useAISummaryParams]
Page --> Init[useAISummaryInit]
Page --> PR[usePromptResources]
Page --> Sub[useSubPrompts]
Page --> Ans[useAnswer]
Page --> Chips[PromptChips]
Page --> Modal[AISummaryModal]
Page --> Renderer[PromptRenderer]
Renderer --> PP[PatientPrompt]
end
subgraph AF[Azure Functions TypeScript]
Clients[/GET v1/clients/]:::fn
Cats[/GET v1/categories/]:::fn
Prompts[/GET v1/prompts/]:::fn
Audits[/GET v1/chatAudits/]:::fn
DB[(Cosmos DB)]:::db
Clients --> DB
Cats --> DB
Prompts --> DB
Audits --> DB
end
subgraph PY[AI Chat Service Flask]
Chat[/POST v1/chat_patient/]:::fn
LC[LangChain + LLM]
Chat --> LC
end
PR -- load prompts --> Prompts
Page -- clientType lookup --> Clients
Page -- category options --> Cats
PP -- POST question/context/data --> Chat
Chat -- ChatAnswer(parts,context) --> PP
PP -- poll score --> Audits
classDef fn fill:#eef,stroke:#335,stroke-width:1px;
classDef db fill:#efe,stroke:#393,stroke-width:1px;

graph TB
A[/patientPromptEnum/page.tsx/]
subgraph Hooks
P1[useAISummaryParams]
P2[useAISummaryInit]
P3[usePromptResources]
P4[useSubPrompts]
P5[useAnswer]
P6[usePromptVisibility]
P7[usePromptTitle]
P8[useModalFocus]
P9[useAuth]
P10[useOrigin]
end
subgraph UIC[UI Components]
C1[PromptRenderer]
C2[PatientPrompt]
C3[AISummaryModal]
C4[ModalTriggerTextField]
C5[PromptChips]
C6[BackToTopFab]
C7[Loading]
C8[SettingsMenu]
end
A --> P1 & P2 & P3 & P4 & P5 & P6 & P7 & P8 & P9 & P10
A --> C1 & C3 & C4 & C5 & C6 & C7 & C8
C1 --> C2
C2 -.->|onPromptFetching/onPromptFetched| A
A -.->|postMessage start/stopLoading| ParentApp

sequenceDiagram
autonumber
actor Physician
participant Parent as Parent App (optional)
participant UI as Next.js /patientPromptEnum
participant Hooks as use* hooks
participant DP as Refine DataProvider
participant AF as Azure Functions (v1/clients, v1/categories, v1/prompts, v1/chatAudits)
participant DB as Cosmos DB
participant PP as PatientPrompt
participant PY as Flask aichat (/v1/chat_patient)
participant LLM as LangChain + LLM
Physician->>UI: Open with ?siteId&patientId&userId&chatUri [&promptEnum | &interactive]
UI->>Hooks: useAISummaryParams() parse flags
Note over UI: If no auth and not localhost → notFound()
UI->>DP: list('clients', filter: siteId)
DP->>AF: GET /v1/clients?siteId=...
AF->>DB: query Clients
DB-->>AF: Client[] (clientType)
AF-->>DP: Client[]
UI->>DP: list('categories', filter: clientType)
DP->>AF: GET /v1/categories?clientType=...
AF->>DB: query Categories
DB-->>AF: Category[]
AF-->>DP: Category[]
UI->>DP: list('prompts', filters: siteId + promptEnum or text)
DP->>AF: GET /v1/prompts?siteId=...&promptEnum=...|text=...
AF->>DB: query Prompts
DB-->>AF: Prompt[]
AF-->>DP: Prompt[]
DP-->>UI: Prompt[]
UI->>UI: useSubPrompts split '<new_question>' → prompts[]
UI->>PP: Render for each prompt (or free-text)
loop For each question
PP->>Parent: postMessage startLoading{module}
PP->>PY: POST { chatUri=/v1/chat_patient }<br/>{ siteID, patientID, userID, queryText, model, category[], context{}, data }
PY->>LLM: Build chain and fetch
LLM-->>PY: ChatAnswer(parts[], context)
PY-->>PP: 200 { answerId, content, parts[], context }
PP->>AF: Poll GET /v1/chatAudits?answerId=… (if guardrails on)
AF->>DB: Find ChatAudit
DB-->>AF: score, valid, reason
AF-->>PP: ChatAudit
alt (score < 7 or !valid) and retries < 3
PP->>PY: retry POST /v1/chat_patient
else success
PP->>UI: onPromptFetched(prompt, parts)
PP->>Parent: postMessage stopLoading{module}
end
end
Physician->>UI: Adjust model/categories/guardrails, then ask again
UI->>PP: Re-run with updated settings
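The guardrail loop in the sequence diagram above (post, poll the audit, retry while the score is low or the answer is invalid, cap at three retries) can be sketched as follows. The score threshold (7) and retry cap (3) come from the diagram; the ChatAudit shape and the helper names here are assumptions for illustration, not the actual implementation.

```typescript
// Assumed shape of the chatAudits response (sketch only).
interface ChatAudit {
  score: number;
  valid: boolean;
  reason?: string;
}

// Guardrail decision from the diagram: retry while the audit fails
// (score < 7 or !valid) and fewer than three retries have been used.
export function shouldRetry(audit: ChatAudit, retries: number): boolean {
  return (audit.score < 7 || !audit.valid) && retries < 3;
}

// Hypothetical driver tying the pieces together: post the question,
// poll the audit for that answer, and stop on success or at the cap.
export async function askWithGuardrails(
  post: () => Promise<{ answerId: string }>,           // POST chatUri
  pollAudit: (answerId: string) => Promise<ChatAudit>, // GET /v1/chatAudits
): Promise<{ answerId: string; audit: ChatAudit }> {
  let retries = 0;
  for (;;) {
    const answer = await post();
    const audit = await pollAudit(answer.answerId);
    if (!shouldRetry(audit, retries)) {
      // Either the audit passed or the retry cap was reached; return
      // whatever we have so the caller can surface it.
      return { answerId: answer.answerId, audit };
    }
    retries += 1;
  }
}
```

Separating the pure shouldRetry predicate from the async driver keeps the guardrail policy unit-testable without mocking the network calls.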