Writing AI-generated SOAP notes (Subjective, Objective, Assessment, Plan) back to Epic via FHIR DocumentReference is the integration pattern that determines whether ambient clinical documentation actually reaches clinicians in their workflow. Production-grade write-back requires: SMART on FHIR launch context to operate within the encounter view; FHIR DocumentReference resource construction with appropriate metadata (encounter linkage, document type, status, content type, security labels); proper integration with Epic’s Hyperspace and Hyperdrive UI surfaces; signature workflow that preserves the clinician’s review-and-sign authority; App Orchard certification for production deployment; and audit logging across the full read-context-write cycle. The same pattern applies to Cerner-Oracle, Athena, and Allscripts with EHR-specific integration variations. Most ambient documentation deployments that fail clinical adoption fail at this layer — the model is good, the architecture is sound, but the integration depth into the EHR’s specific workflow is shallow, so clinicians don’t use it. The integration depth is the differentiator.
The FHIR write-back of AI-generated clinical notes is the most-asked-about integration pattern on Taction’s ambient documentation engagements. The model layer and the architecture layer are increasingly commoditized; the integration layer with Epic specifically — and the other major EHRs by extension — is where engineering depth still matters and where most teams underdeliver.
This guide is the engineering reference Taction Software® uses on Epic-integrated ambient documentation engagements.
The Eight Engineering Components
The reference architecture spans eight required components for production Epic write-back.
Component 1 — SMART on FHIR Launch Context
The AI feature launches inside Epic with the patient and encounter context already established. The clinician doesn’t have to context-switch out of Epic to access the AI; the AI runs inside Hyperspace or Hyperdrive in an embedded context.
Implementation. SMART on FHIR EHR-launch flow. Epic launches the feature with an opaque launch parameter; the app exchanges it during the OAuth flow, and the token response carries the patient ID, encounter ID, clinician identity, and the OAuth scopes the feature is authorized to use.
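As a minimal sketch of the first leg of that flow, the authorization request below forwards the opaque launch parameter back to the authorization server. The endpoint URLs, client ID, and scope string are illustrative assumptions, not Epic-specific values.

```python
from urllib.parse import urlencode

def build_authorize_url(authorize_endpoint, client_id, redirect_uri,
                        launch, state, fhir_base):
    """Build a SMART on FHIR EHR-launch authorization request.

    `launch` is the opaque parameter the EHR passes to the embedded app;
    the scope string here is illustrative, not Epic-specific.
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "launch": launch,  # binds the authorization to this EHR session
        "scope": "launch openid fhirUser patient/*.read",
        "state": state,    # CSRF protection, verified on the redirect back
        "aud": fhir_base,  # the FHIR server the resulting token targets
    }
    return f"{authorize_endpoint}?{urlencode(params)}"
```

Exchanging the returned authorization code then yields the access token plus the patient and encounter context.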
Component 2 — FHIR API Read of Encounter Context
The AI feature reads the relevant encounter context via FHIR — patient demographics, current encounter, problem list, medications, allergies, recent encounters, recent lab and imaging results.
Implementation. FHIR R4 GET requests against the Epic FHIR endpoint with the OAuth bearer token. Read-only scopes typical for ambient documentation; the AI doesn’t need write access to most resources.
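The reads above can be sketched as a map of FHIR R4 search URLs; each is fetched with an `Authorization: Bearer <token>` header and `Accept: application/fhir+json`. The search parameters are standard R4; the base URL and the exact resource coverage are institution-specific assumptions.

```python
def context_queries(fhir_base, patient_id, encounter_id):
    """FHIR R4 reads for the encounter context listed above.

    Each URL is issued as a GET with the OAuth bearer token; read-only
    scopes suffice for all of these.
    """
    return {
        "patient": f"{fhir_base}/Patient/{patient_id}",
        "encounter": f"{fhir_base}/Encounter/{encounter_id}",
        # Problem list entries, not encounter diagnoses
        "problems": f"{fhir_base}/Condition?patient={patient_id}"
                    "&category=problem-list-item",
        "medications": f"{fhir_base}/MedicationRequest?patient={patient_id}"
                       "&status=active",
        "allergies": f"{fhir_base}/AllergyIntolerance?patient={patient_id}",
        # Most recent labs first; count is a tuning choice
        "recent_labs": f"{fhir_base}/Observation?patient={patient_id}"
                       "&category=laboratory&_sort=-date&_count=20",
    }
```

Keeping the query set declarative like this makes the read scope auditable against the OAuth scopes actually granted.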
Component 3 — Audio Capture and ASR Processing
Audio captured during the encounter flows to the BAA-covered ASR service for transcription. The transcript flows to the LLM for note generation.
Implementation. End-to-end encryption from capture device. ASR service has BAA coverage. Transcript stored under the institution’s PHI compliance posture.
Component 4 — LLM Note Generation in SOAP Format
The LLM produces the structured SOAP note in Epic-compatible format. The structure follows the institution's documentation standards; the note body is rendered as markdown or HTML for write-back.
Implementation. Prompt engineering specifies Epic-aligned SOAP format. Note structure includes Subjective (HPI, ROS), Objective (vital signs, exam findings, lab/imaging review), Assessment (diagnosis, problem list update), Plan (medications, orders, follow-up). The note draft is rendered for clinician review before write-back.
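A minimal sketch of the rendering step, assuming the LLM output has already been parsed into the four SOAP sections; the dict keys are illustrative, not an Epic schema.

```python
def render_soap_markdown(note: dict) -> str:
    """Render a structured note as the markdown payload for write-back.

    The four-section layout follows the SOAP structure described above.
    """
    sections = [
        ("Subjective", note["subjective"]),   # HPI, ROS
        ("Objective", note["objective"]),     # vitals, exam, lab/imaging review
        ("Assessment", note["assessment"]),   # diagnosis, problem list update
        ("Plan", note["plan"]),               # medications, orders, follow-up
    ]
    return "\n\n".join(f"## {title}\n\n{body.strip()}"
                       for title, body in sections)
```

Rendering is deliberately separated from generation so the clinician-facing review UI and the write-back payload are built from the same structured draft.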
Component 5 — FHIR DocumentReference Construction
The AI-generated note is constructed as a FHIR DocumentReference resource for write-back to Epic. The resource includes the encounter linkage, document type code (LOINC for the relevant note type), document status (a docStatus of preliminary or final, reflecting clinician review; the resource status itself is current), content type, and security labels.
Implementation.
{
  "resourceType": "DocumentReference",
  "status": "current",
  "docStatus": "preliminary",
  "type": {
    "coding": [{
      "system": "http://loinc.org",
      "code": "11506-3",
      "display": "Subsequent evaluation note"
    }]
  },
  "subject": { "reference": "Patient/{patient-id}" },
  "context": {
    "encounter": [{ "reference": "Encounter/{encounter-id}" }]
  },
  "content": [{
    "attachment": {
      "contentType": "text/markdown",
      "data": "{base64-encoded note content}"
    }
  }],
  "securityLabel": […]
}
The DocumentReference is submitted via FHIR POST. Epic accepts the resource and renders the note in the encounter view.
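A construction sketch follows; the POST itself (Epic endpoint, bearer token, error handling) is omitted, and the LOINC code mirrors the example above. Note that in FHIR R4 the draft/final state lives in docStatus, while status tracks the resource lifecycle (current, superseded, entered-in-error).

```python
import base64

def build_document_reference(patient_id, encounter_id, note_markdown,
                             loinc_code="11506-3",
                             loinc_display="Subsequent evaluation note"):
    """Construct a DocumentReference for write-back.

    docStatus carries the clinical draft/final state; status is the
    resource lifecycle and stays "current" for an active note.
    """
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "docStatus": "preliminary",
        "type": {"coding": [{
            "system": "http://loinc.org",
            "code": loinc_code,
            "display": loinc_display,
        }]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "context": {"encounter": [{"reference": f"Encounter/{encounter_id}"}]},
        "content": [{"attachment": {
            "contentType": "text/markdown",
            # Attachment.data is base64-encoded per the FHIR spec
            "data": base64.b64encode(note_markdown.encode("utf-8")).decode("ascii"),
        }}],
    }
```

Security labels, author references, and institution-specific extensions would be layered on top of this base resource.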
Component 6 — Clinician Review-and-Sign Workflow
The note arrives in Epic with a preliminary docStatus. The clinician reviews, edits if needed, and signs. The signature moves docStatus from preliminary to final and triggers downstream workflows (billing, downstream documentation review, distribution to other clinicians).
Implementation. The AI feature’s UX renders the draft note alongside the original transcript and any cited evidence. The clinician can edit inline. Acceptance triggers the FHIR DocumentReference update from preliminary to final docStatus via FHIR PATCH or PUT. Override patterns (accept, edit, reject) are first-class log events.
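The signature transition can be sketched as a small guarded state change ahead of the FHIR PUT/PATCH; the function name and copy-before-mutate choice are illustrative, not a prescribed Epic pattern.

```python
import copy

def sign_note(draft: dict) -> dict:
    """Clinician signature: move docStatus from preliminary to final.

    Working on a copy keeps the unsigned draft intact for the audit
    trail; the returned resource is what gets PUT back to the EHR.
    """
    if draft.get("docStatus") != "preliminary":
        raise ValueError("only preliminary drafts can be signed")
    signed = copy.deepcopy(draft)
    signed["docStatus"] = "final"
    return signed
```

The guard clause enforces the workflow invariant from the text: only a clinician-reviewed preliminary draft can become final.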
Component 7 — App Orchard Certification
For production deployment in Epic, App Orchard certification is the operational gate. The certification process validates the feature’s integration patterns, security posture, and clinical workflow design against Epic’s standards.
Timeline. App Orchard certification typically takes 8–16 weeks. The certification work runs in parallel with the production deployment scope; certification is achieved before broad institutional rollout.
Engineering implications. Certification has specific requirements around OAuth flows, FHIR scope usage, error handling, accessibility, and clinical documentation patterns. Vendors with prior App Orchard certifications compress the timeline materially because the patterns are reusable across submissions.
Component 8 — Audit Logging
Every read of encounter context, every audio capture, every ASR operation, every LLM inference, every note write-back, every clinician override is logged as a first-class event with the schema described in our HIPAA audit logging reference.
The audit log captures the full read-context-write cycle so an auditor can reconstruct what context the AI used, what note it produced, what the clinician did with it, and what the final note submitted to Epic was.
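A sketch of one such event, assuming a flat JSON-friendly schema; the field names are illustrative stand-ins for the schema in the audit logging reference, not Epic's AuditEvent resource.

```python
import datetime
import uuid

def audit_event(actor, action, resource, outcome, details=None):
    """One first-class audit event per step of the read-context-write cycle.

    actor:    clinician ID, service principal, or model version
    action:   e.g. "fhir.read", "asr.transcribe", "note.write_back"
    resource: e.g. "Encounter/{id}" or "DocumentReference/{id}"
    outcome:  e.g. "success", "failure", "override.edit", "override.reject"
    """
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "resource": resource,
        "outcome": outcome,
        "details": details or {},
    }
```

Emitting the same event shape for reads, inferences, write-backs, and overrides is what lets an auditor replay the full cycle end to end.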
Cross-EHR Variations
The pattern applies to the four major US EHRs with EHR-specific variations.
Epic
The most-deployed target. SMART on FHIR launch context is well-supported. FHIR R4 endpoints are mature. App Orchard certification is the operational gate for production. Hyperdrive (the modern Epic UI) supports embedded AI features cleanly; Hyperspace (the legacy desktop UI) requires additional engineering for embedded UX.
Cerner-Oracle Health
SMART on FHIR launch context is supported. Code Console is the marketplace and certification path. Integration patterns are similar to Epic with platform-specific UI considerations.
Athena
athenaOne marketplace is the certification path. SMART on FHIR support has matured in 2025. Integration is operationally simpler than Epic for many use cases; the marketplace path is well-established.
Allscripts
Allscripts Developer Program (ADP) is the certification path. Integration patterns vary across Allscripts product lines (Sunrise, TouchWorks, Professional). Specialty engagements often address one specific Allscripts product rather than treating Allscripts as a single integration target.
For institutions running multiple EHRs (common at multi-hospital health systems), the architecture supports multi-EHR integration with shared FHIR write-back logic and EHR-specific adapters at the integration boundary. The shared-infrastructure economics improve substantially when the AI feature can deploy across the institution’s full EHR portfolio.
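The adapter boundary can be sketched as a shared interface with one implementation per EHR; the class names and the placeholder return value are hypothetical, and the actual POST to each EHR's FHIR endpoint is omitted.

```python
from abc import ABC, abstractmethod

class EHRAdapter(ABC):
    """EHR-specific boundary; shared write-back logic calls this interface."""
    @abstractmethod
    def submit_note(self, doc_ref: dict) -> str:
        """POST the DocumentReference; return the created resource id."""

class EpicAdapter(EHRAdapter):
    def submit_note(self, doc_ref):
        # Real implementation: POST to the Epic FHIR R4 endpoint with the
        # OAuth bearer token; "epic-123" is a placeholder id.
        return "DocumentReference/epic-123"

def write_back(adapter: EHRAdapter, doc_ref: dict) -> str:
    # Shared validation runs once, regardless of the target EHR
    if doc_ref.get("resourceType") != "DocumentReference":
        raise ValueError("write-back expects a DocumentReference")
    return adapter.submit_note(doc_ref)
```

The shared `write_back` path is where validation, audit logging, and retry policy live; only `submit_note` varies per EHR.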
Common Failures and Resolutions
Five patterns Taction’s engagements catch in production deployments.
Failure 1 — Standalone AI app without EHR integration. The AI runs in a separate web application; clinicians have to switch out of Epic to use it. Adoption is below 20%. Resolution: SMART on FHIR launch context as default scope from week 1.
Failure 2 — FHIR DocumentReference without proper encounter linkage. The note writes to FHIR but doesn’t link to the correct encounter, so it doesn’t appear in the encounter view. Resolution: encounter linkage is part of the DocumentReference construction; testing against real Epic environments validates the linkage works.
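A pre-submission guard for this failure mode can be sketched in a few lines; the function name is illustrative, and it complements (rather than replaces) testing against real Epic environments.

```python
def has_encounter_linkage(doc_ref: dict) -> bool:
    """Refuse to POST a DocumentReference whose context lacks a valid
    encounter reference -- the note would not appear in the encounter view."""
    encounters = doc_ref.get("context", {}).get("encounter", [])
    return any(e.get("reference", "").startswith("Encounter/")
               for e in encounters)
```

Wiring this check into the shared write-back path turns a silent rendering failure into a hard error caught before submission.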
Failure 3 — Final-status notes that bypass clinician signature. The AI writes the note with a final docStatus before the clinician reviews. The clinician’s review-and-sign authority is undermined; the audit trail is incomplete. Resolution: notes write as preliminary; clinician signature moves docStatus to final; the workflow preserves clinician authority.
Failure 4 — App Orchard certification deferred to post-launch. The team builds the AI feature, ships internal pilot, and discovers at general-rollout time that App Orchard certification is required and takes 8–16 weeks. Resolution: certification work runs in parallel with the production build, not after it.
Failure 5 — Audit log gaps on the integration boundary. The application audit log captures user activity; the EHR audit log captures clinician interactions; the integration boundary doesn’t have its own log. Investigation requests can’t reconstruct what happened at the integration. Resolution: audit logging at the integration boundary is first-class scope.
Pricing and Engagement Structure
| Engagement | Duration | Price Range | Scope |
| --- | --- | --- | --- |
| EHR Integration Discovery | 4 weeks | $45,000 | EHR-specific integration scoping, certification pathway planning, FHIR endpoint validation, integration architecture |
| FHIR Write-Back MVP | 8–10 weeks | $95,000–$130,000 | SMART on FHIR launch context, FHIR DocumentReference write-back, signature workflow, audit logging |
| App Orchard Certification | 8–16 weeks parallel | $50,000–$120,000 | Certification preparation, submission, response to Epic feedback, pre-production validation |
| Production Deployment | 12–24 weeks | $150,000–$280,000 | Full multi-specialty deployment, multi-EHR support where applicable, operational support |
Total engagement cost for end-to-end Epic-integrated ambient documentation typically runs $400,000–$700,000 across the discovery, MVP, certification, and production phases. Multi-EHR deployments add proportional scope per additional EHR.
Closing
FHIR write-back of AI-generated clinical notes to Epic and the other major EHRs is the integration pattern that determines whether ambient documentation actually reaches clinical workflow. The engineering depth is substantial; the architecture is well-defined; the certification path has known timelines.
Buyers who scope against this depth produce deployments that survive clinical adoption review. Buyers who treat the integration as “we’ll figure that out later” produce demos that don’t translate to production.
If you are scoping an Epic-integrated ambient documentation deployment, book a 60-minute scoping call. Taction Software has shipped 200+ EHR integrations across Epic, Cerner-Oracle, Athena, and Allscripts since 2013, with active App Orchard, Cerner Code Console, athenaOne marketplace, and Allscripts ADP relationships. We have zero HIPAA findings across that shipped software and active BAA paper trails with every major AI provider. Our healthcare engineering team builds production EHR-integrated ambient documentation with the architecture described above as default scope. Our verified case studies cover the production deployments behind these patterns. For the engineering scope behind the engagement, see our healthcare software development practice and our hospital and health-system practice for the operational context. For the FHIR API patterns this work depends on, see our healthcare data integration practice and our broader FHIR API development work. For an estimate against your specific use case, see the healthcare engineering cost calculator. For deeper context, see our broader generative AI healthcare applications work.
