SMART on FHIR is the open standard for embedding third-party applications — including AI features — inside Epic and other major EHRs. Production-grade embedding requires eight engineering components: app registration and OAuth 2.0 authorization with appropriate scopes, SMART launch context that operates inside Epic’s Hyperspace or Hyperdrive UI, FHIR R4 reads of patient and encounter context, inference and output generation, clinician review UX, FHIR R4 write-back of structured outputs (DocumentReference, Observation, ServiceRequest, QuestionnaireResponse, etc.), App Orchard certification for production deployment, and audit logging across the full launch-read-inference-write cycle. The architecture is well-defined; the engineering depth is substantial; the certification timelines are predictable. Most healthcare AI features that fail clinical adoption fail at this layer — the AI is good, but it lives in a separate tool clinicians don’t switch to. EHR-embedded UX is non-negotiable for production clinical AI in 2026.
The depth of Epic integration determines whether a clinical AI feature actually reaches clinical workflow. Standalone AI applications delivered as separate web tools see adoption rates well under 30% in nearly every documented case; in-EHR embedded features clear 70–90% adoption when the architecture is right. Integration depth is the differentiator.
This guide is the engineering reference Taction Software® uses on Epic SMART on FHIR engagements — for healthtech founders building AI features that have to deploy into hospital EHR environments, hospital innovation teams shipping internal AI, and enterprise health systems building proprietary AI capability across their Epic footprint.
What SMART on FHIR Is and Why It Matters
SMART on FHIR (Substitutable Medical Applications, Reusable Technologies, on Fast Healthcare Interoperability Resources) is the open standard for healthcare app integration. It defines:
Launch context. How an external application is launched from within an EHR with patient and encounter context already established. The clinician doesn’t have to re-establish context; the AI feature receives the context via a launch token.
OAuth 2.0 authorization. How the application authenticates with the EHR’s FHIR server and what scopes (resource permissions) it has been granted.
FHIR R4 API. How the application reads patient data, encounter context, observations, conditions, medications, and other clinical resources, and writes back structured outputs.
Why it matters. SMART on FHIR is the operational pattern that lets clinical AI features run inside Epic without Epic-specific custom integration work for each feature. The AI feature is built once against the SMART on FHIR standard; it deploys across Epic, Cerner-Oracle, Athena, and Allscripts with EHR-specific configuration but no fundamental rewriting.
For Epic specifically, SMART on FHIR is the dominant integration path. Epic’s App Orchard marketplace requires SMART on FHIR for third-party apps; Epic’s own internal apps use the same patterns where they integrate with non-Epic systems.
The Eight Engineering Components
The reference architecture for production Epic SMART on FHIR integration.
Component 1 — App Registration and OAuth Setup
The AI feature is registered with Epic as a SMART on FHIR app. Registration includes the launch URL, redirect URLs, scopes the app requires, and the integration patterns it uses. OAuth 2.0 client credentials are issued; the app uses them at launch time to obtain access tokens.
Implementation. The registration is per-Epic-instance. App Orchard apps are registered centrally; institutional apps are registered with the institution’s specific Epic environment.
Component 2 — Launch Context Handling
When the clinician launches the AI feature from inside Epic, Epic redirects to the app’s launch URL with a launch parameter and the FHIR endpoint URL. The app exchanges these for an access token and the patient/encounter context.
Implementation. The OAuth dance happens in the app’s authentication layer. The token includes the patient ID, encounter ID, and clinician ID; the app reads these to establish context for the user’s session.
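The two-step exchange above can be sketched as follows. This is a minimal illustration of the SMART EHR-launch sequence, not Epic’s production flow: the endpoint URLs, client ID, and values are placeholders, and a real implementation would also fetch the endpoints from the server’s `.well-known/smart-configuration` and verify `state` on the callback.

```python
# Sketch of the SMART EHR-launch OAuth exchange. All URLs and IDs below are
# illustrative placeholders, not a real Epic environment.
from urllib.parse import urlencode

def build_authorize_url(authorize_endpoint: str, client_id: str,
                        redirect_uri: str, launch: str, iss: str,
                        scopes: list[str], state: str) -> str:
    """Step 1: redirect the browser to the EHR's authorize endpoint,
    echoing the launch token back and requesting launch-context scopes."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "launch": launch,   # opaque token Epic passed at launch
        "aud": iss,         # the FHIR server the token will be used against
        "scope": " ".join(scopes),
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

def build_token_request(token_endpoint: str, code: str,
                        redirect_uri: str, client_id: str) -> dict:
    """Step 2: exchange the authorization code for an access token. The
    token response carries patient, encounter, and user context."""
    return {
        "url": token_endpoint,
        "data": {
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
            "client_id": client_id,
        },
    }

url = build_authorize_url(
    "https://ehr.example.org/oauth2/authorize", "my-app",
    "https://app.example.org/callback", "xyz123",
    "https://ehr.example.org/fhir/R4",
    ["launch", "openid", "fhirUser", "patient/Patient.read"], "abc",
)
```

The `launch` and `aud` parameters are what distinguish an EHR launch from a plain OAuth flow: they bind the session to the patient and encounter the clinician already has open.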
Component 3 — FHIR R4 Read Operations
The app reads patient context via FHIR R4 GET requests. Common reads:
- Patient/{id} — patient demographics
- Encounter/{id} — current encounter details
- Condition?patient={id} — problem list
- MedicationRequest?patient={id}&status=active — active medications
- AllergyIntolerance?patient={id} — allergies
- Observation?patient={id}&category=vital-signs&_count=10 — recent vitals
- Observation?patient={id}&category=laboratory&_count=20 — recent labs
- DocumentReference?patient={id}&type=… — recent clinical notes
The reads happen in parallel where possible; the app assembles the patient context for the AI inference.
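The parallel-read pattern can be sketched as below. `fetch` is a stand-in for a real authenticated FHIR GET (e.g. `requests.get` with the bearer token); the read set is a subset of the queries listed above.

```python
# Sketch: issue the context reads concurrently and assemble a patient-context
# dict for inference. `fetch` is a placeholder for an authenticated FHIR GET.
from concurrent.futures import ThreadPoolExecutor

READS = {
    "demographics": "Patient/{pid}",
    "problems": "Condition?patient={pid}",
    "medications": "MedicationRequest?patient={pid}&status=active",
    "allergies": "AllergyIntolerance?patient={pid}",
    "vitals": "Observation?patient={pid}&category=vital-signs&_count=10",
}

def fetch(base_url: str, query: str) -> dict:
    # Placeholder for requests.get(f"{base_url}/{query}", headers=auth).json()
    return {"resourceType": "Bundle", "query": query}

def assemble_context(base_url: str, patient_id: str) -> dict:
    """Run all reads in parallel and return name -> Bundle."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {
            name: pool.submit(fetch, base_url, path.format(pid=patient_id))
            for name, path in READS.items()
        }
        return {name: f.result() for name, f in futures.items()}

ctx = assemble_context("https://ehr.example.org/fhir/R4", "123")
```

Parallelizing the reads matters in practice: eight sequential round-trips to a hospital FHIR endpoint can add seconds of perceived latency before the AI feature even starts.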
Component 4 — Inference and Output Generation
The patient context flows to the inference gateway with the use-case-specific prompt. The LLM produces the structured output — a clinical note, a coding suggestion, a triage disposition, a prior-auth letter, etc.
Implementation. The inference gateway pattern handles BAA paper trail, audit logging, structured output validation, and any redaction patterns required.
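A minimal sketch of the gateway wrapper, under stated assumptions: `call_model` is a placeholder for the contracted LLM client behind the BAA, the required output fields are illustrative, and a production gateway would also handle redaction and durable audit storage.

```python
# Sketch of the inference-gateway pattern: every call is audit-logged and the
# model output is validated against a minimal schema before it reaches the UI.
import json
import time

AUDIT_LOG: list[dict] = []  # stand-in for a durable audit sink

def call_model(prompt: str, context: dict) -> str:
    # Placeholder — a real implementation calls the contracted model endpoint.
    return json.dumps({"note_text": "…", "confidence": 0.9})

REQUIRED_FIELDS = {"note_text", "confidence"}  # illustrative schema

def run_inference(use_case: str, prompt: str, context: dict,
                  clinician_id: str) -> dict:
    """Call the model, validate the structured output, and audit the event."""
    raw = call_model(prompt, context)
    output = json.loads(raw)
    missing = REQUIRED_FIELDS - output.keys()
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    AUDIT_LOG.append({
        "event": "inference",
        "use_case": use_case,
        "clinician_id": clinician_id,
        "ts": time.time(),
    })
    return output
```

Validating the structured output at the gateway, before it renders, is what keeps a malformed model response from ever reaching the clinician review UX.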
Component 5 — Clinician Review UX
The AI’s output renders in the app’s UI inside Epic. The clinician reviews, edits, accepts, or rejects. The UX preserves clinician authority — the AI proposes, the clinician disposes.
Implementation. The app’s UI loads in an embedded webview or iframe inside Epic Hyperspace or Hyperdrive. Modern Epic deployments use Hyperdrive (the web-based UI); legacy desktop deployments use Hyperspace with embedded browser components. The UX adapts to the host environment.
Component 6 — FHIR R4 Write-Back
When the clinician accepts (or accepts-with-edit) the AI’s output, the app writes the result back to Epic via FHIR. Common write-backs:
- DocumentReference — clinical notes, AI-generated documentation
- Observation — structured findings, scores, or assessments
- ServiceRequest — orders or recommendations
- QuestionnaireResponse — completed structured forms
- MedicationRequest — prescriptions (with appropriate clinician confirmation)
The write-back uses POST or PUT against the Epic FHIR endpoint. The structured resources include encounter linkage, security labels, and metadata that drive downstream Epic workflow.
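A write-back payload can be sketched as below. The shape follows the FHIR R4 DocumentReference resource; the LOINC type code is an illustrative choice, and the codes and metadata a given Epic environment expects should be confirmed before launch.

```python
# Sketch of a DocumentReference write-back payload with the encounter linkage
# and metadata that drive downstream Epic workflow. The type code is an
# illustrative LOINC value; confirm what your target environment expects.
import base64

def build_document_reference(patient_id: str, encounter_id: str,
                             author_id: str, note_text: str) -> dict:
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "docStatus": "final",
        "type": {"coding": [{"system": "http://loinc.org",
                             "code": "11506-3",  # illustrative: progress note
                             "display": "Progress note"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        # Encounter linkage — without it the note may never surface in the
        # encounter view even though the write succeeds.
        "context": {"encounter": [{"reference": f"Encounter/{encounter_id}"}]},
        "author": [{"reference": f"Practitioner/{author_id}"}],
        "content": [{"attachment": {
            "contentType": "text/plain",
            "data": base64.b64encode(note_text.encode()).decode(),
        }}],
    }
```

The resulting dict is what gets POSTed to `{fhir_base}/DocumentReference` with the session’s bearer token.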
Component 7 — App Orchard Certification
For institutional production deployment, App Orchard certification is the operational gate. The certification process validates the app’s integration patterns, security posture, error handling, accessibility, clinical workflow design, and documentation.
Timeline. Typically 8–16 weeks for first-time submissions; 4–8 weeks for repeat submissions from vendors with prior certifications. The certification work runs in parallel with the production build.
Engineering implications. Certification has specific requirements around:
- OAuth 2.0 implementation correctness
- FHIR scope usage (least-privilege patterns)
- Error handling and retry logic
- Accessibility (WCAG compliance)
- Clinical documentation quality
- Privacy and security posture
- App resilience and failure modes
Component 8 — Audit Logging Across the Full Cycle
Every launch-context retrieval, every FHIR read, every inference, every clinician override, every FHIR write-back is logged as a first-class event. The audit trail allows reconstruction of the full clinician interaction with the AI feature.
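One way to sketch such a first-class event, with illustrative field names; a production trail would also be append-only and tamper-evident rather than a plain log line.

```python
# Sketch of an audit event emitted at each step of the cycle. Field names
# are illustrative, not a fixed schema.
import json
import time
import uuid

def audit_event(event_type: str, clinician_id: str, patient_id: str,
                detail: dict, session_id: str) -> dict:
    event = {
        "event_id": str(uuid.uuid4()),
        "session_id": session_id,   # ties launch, reads, inference, and
                                    # write-back into one reconstructable trail
        "event_type": event_type,   # e.g. launch | fhir_read | inference |
                                    # clinician_override | fhir_write
        "clinician_id": clinician_id,
        "patient_id": patient_id,
        "detail": detail,
        "ts": time.time(),
    }
    # Emit as a structured log line; a real sink would be durable storage.
    print(json.dumps(event))
    return event
```

The shared `session_id` is the key design choice: it is what lets an auditor replay one clinician’s full interaction with the AI feature end to end.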
The Common Failure Modes
Five patterns that produce SMART on FHIR integrations that fail production.
Failure 1 — Standalone Web App Without SMART on FHIR
The team builds the AI feature as a standalone web application accessible from a separate URL. Clinicians have to switch out of Epic to use it, and adoption sits below 25%. Resolution: make SMART on FHIR launch context part of the default scope from week 1.
Failure 2 — FHIR Reads Without Proper Scope Management
The app requests excessive FHIR scopes (“just give me read access to everything”). The institution’s security team rejects the deployment. Resolution: scope analysis at the start — determine exactly which resources the AI needs to read and write, and request scopes on a least-privilege basis.
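The least-privilege derivation can be sketched as below, using SMART v2 granular scope syntax (`.rs` for read+search, `.c` for create). This is an assumption about the target environment — some Epic deployments still expect v1-style `.read`/`.write` scopes, so the suffixes would be swapped accordingly.

```python
# Sketch: derive the minimal SMART v2 scope set from the resources the
# feature actually reads and writes, instead of requesting blanket access.
def minimal_scopes(reads: list[str], writes: list[str]) -> list[str]:
    scopes = ["launch", "openid", "fhirUser"]
    scopes += [f"patient/{r}.rs" for r in sorted(set(reads))]  # read + search
    scopes += [f"patient/{w}.c" for w in sorted(set(writes))]  # create only
    return scopes

scopes = minimal_scopes(
    reads=["Patient", "Encounter", "Condition", "MedicationRequest"],
    writes=["DocumentReference"],
)
```

Driving the scope list from the feature’s actual read/write inventory also produces the exact artifact a hospital security review asks for.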
Failure 3 — DocumentReference Write-Back That Doesn’t Render in Encounter View
The AI writes a clinical note to FHIR but the encounter linkage is wrong, the document type code is wrong, or the security labels prevent the encounter view from displaying it. The clinician can’t find the note. Resolution: encounter linkage and metadata are validated against real Epic environments before launch.
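A pre-flight check along these lines can catch the metadata gaps before the POST ever happens. The required fields here are illustrative; the authoritative validation is still a test against a real Epic environment, as the resolution above says.

```python
# Sketch of a pre-flight check run on a built DocumentReference before it is
# POSTed, catching the metadata gaps that hide notes from the encounter view.
# Required fields are illustrative, not Epic's definitive rule set.
def validate_write_back(doc: dict) -> list[str]:
    problems = []
    if not doc.get("context", {}).get("encounter"):
        problems.append("missing encounter linkage")
    if not doc.get("type", {}).get("coding"):
        problems.append("missing document type code")
    if doc.get("docStatus") not in ("preliminary", "final", "amended"):
        problems.append(f"unexpected docStatus: {doc.get('docStatus')!r}")
    return problems
```

Running this in CI against fixture payloads turns Failure 3 from a post-launch support ticket into a failing test.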
Failure 4 — App Orchard Certification Deferred to Post-Launch
The team builds the AI feature, ships internal pilot, and discovers at general-rollout time that App Orchard certification is required and takes 8–16 weeks. The rollout stalls. Resolution: certification work runs in parallel with the production build, not after it.
Failure 5 — Inadequate Error Handling on FHIR Failures
The Epic FHIR endpoint occasionally returns errors (rate limits, transient failures, scope mismatches). The app handles errors poorly — clinician sees a stack trace, generic error message, or feature failure with no recovery path. Resolution: production-grade error handling with retry logic, graceful degradation, and clinician-readable error messages.
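The retry-and-degrade pattern can be sketched as below: bounded retries with exponential backoff on transient status codes, and a clinician-readable message (never a stack trace) once retries are exhausted. The status-code set and wording are illustrative.

```python
# Sketch of production-grade error handling for FHIR calls: retry transient
# failures with exponential backoff, surface a clinician-readable error
# otherwise. `call` returns (status_code, body).
import time

TRANSIENT = {429, 502, 503}  # rate limits and transient upstream failures

class FhirUnavailable(Exception):
    """Carries a clinician-readable message after retries are exhausted."""

def fhir_call_with_retry(call, max_attempts: int = 3, base_delay: float = 0.5):
    for attempt in range(1, max_attempts + 1):
        status, body = call()
        if status == 200:
            return body
        if status in TRANSIENT and attempt < max_attempts:
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.5s, 1s, 2s, …
            continue
        raise FhirUnavailable(
            "The chart service is temporarily unavailable. Your work is "
            "saved; please retry in a moment.")
```

The graceful-degradation half of the resolution lives in the caller: catch `FhirUnavailable`, show its message, and keep the clinician’s draft intact so nothing is lost to a transient outage.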
Multi-EHR Architecture
For institutions running Epic alongside other EHRs (common at multi-hospital health systems where acquisitions have brought in different platforms), the architecture supports multi-EHR deployment with EHR-specific adapters.
Shared core. Inference gateway, audit log, eval harness, model layer, prompt engineering, RAG corpus — all shared across EHRs.
EHR-specific adapters. Launch context handler, FHIR client (with EHR-specific endpoint URLs and authentication), write-back logic (with EHR-specific resource conventions). Adapters for Epic, Cerner-Oracle, Athena, and Allscripts.
Shared UX with EHR-specific theming. The clinician UX is consistent across EHRs but adapts to the host environment’s visual conventions.
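The adapter seam can be sketched as below: the shared core codes against one interface, and per-EHR adapters supply the endpoints and resource conventions. Class names, URLs, and codings are illustrative placeholders.

```python
# Sketch of the EHR-adapter pattern: shared core logic stays EHR-agnostic,
# adapters carry the EHR-specific configuration. All values are illustrative.
from abc import ABC, abstractmethod

class EhrAdapter(ABC):
    """Interface the shared core codes against."""

    @abstractmethod
    def fhir_base_url(self) -> str:
        """Base FHIR R4 endpoint for this EHR instance."""

    @abstractmethod
    def note_type_coding(self) -> dict:
        """EHR-specific document type coding for note write-back."""

class EpicAdapter(EhrAdapter):
    def fhir_base_url(self) -> str:
        return "https://epic.example.org/api/FHIR/R4"
    def note_type_coding(self) -> dict:
        return {"system": "http://loinc.org", "code": "11506-3"}

class CernerAdapter(EhrAdapter):
    def fhir_base_url(self) -> str:
        return "https://cerner.example.org/r4"
    def note_type_coding(self) -> dict:
        return {"system": "http://loinc.org", "code": "11506-3"}

def write_note_url(adapter: EhrAdapter) -> str:
    # Shared core: same write-back logic regardless of which EHR is behind it.
    return f"{adapter.fhir_base_url()}/DocumentReference"
```

Adding a second EHR then means writing one adapter class, which is the concrete mechanism behind the marginal-cost argument below.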
The shared-infrastructure economics improve substantially when the AI feature deploys across the institution’s full EHR portfolio. The marginal cost of adding a second EHR is much lower than the cost of building the feature from scratch on the second EHR.
Pricing and Engagement Structure
| Engagement | Duration | Price Range | Scope |
| --- | --- | --- | --- |
| EHR Integration Discovery | 4 weeks | $45,000 | Epic-specific scoping, FHIR endpoint validation, scope analysis, certification pathway planning |
| SMART on FHIR MVP | 8–10 weeks | $95,000–$130,000 | Production-grade SMART launch, FHIR read/write, audit logging, clinician override UX |
| App Orchard Certification | 8–16 weeks parallel | $50,000–$120,000 | Certification preparation, submission, response to Epic feedback |
| Production Deployment | 12–24 weeks | $150,000–$280,000 | Full multi-specialty deployment, multi-EHR support where applicable, operational support |
Total Epic-integrated AI feature engagement typically runs $400,000–$700,000 across discovery, MVP, certification, and production phases.
Closing
SMART on FHIR is the operational pattern for clinical AI in Epic in 2026. The engineering depth is substantial; the architecture is well-defined; the certification timelines are predictable. Buyers who scope against this depth produce deployments that survive clinical adoption review. Buyers who treat the integration as “we’ll figure that out later” produce demos that don’t translate to production.
If you are scoping an Epic-integrated clinical AI feature, book a 60-minute scoping call. Taction Software has shipped 200+ EHR integrations across Epic, Cerner-Oracle, Athena, and Allscripts since 2013, with active App Orchard, Cerner Code Console, athenaOne marketplace, and Allscripts ADP relationships. Zero HIPAA findings on shipped software, and active BAA paper trails with every major AI provider. Our healthcare engineering team builds production SMART on FHIR integrations with the architecture described above as default scope. Our verified case studies cover the production deployments behind these patterns. For the engineering scope behind the engagement, see our healthcare software development practice and our hospital and health-system practice for the operational context. For the FHIR API patterns this work depends on, see our healthcare data integration practice and our broader FHIR API development work. For an estimate against your specific use case, see the healthcare engineering cost calculator. For deeper context, see our broader generative AI healthcare applications work.
