Effective RPM deployment requires the application of industry best practices with respect to information technology, data center operations, enterprise service delivery, and robust security and privacy measures. The technical challenges in healthcare are not insurmountable; rather, they can be solved by using well-known solutions and design patterns. What is required, however, is deep health-care domain expertise, a keen sense of customer requirements, and an understanding of the context in which the system will be used. In order to fully realize the potential of RPM, we must arrive at the right combination of technology and process.
Patient Acuity and Mode of Health-care Delivery
Patient acuity and the concomitant mode of health-care delivery are arguably the most important determinants of RPM requirements. Patient acuity determines the level of monitoring and the likelihood that intervention will be required, the frequency of data collection, the criticality of reference ranges and alerting mechanisms, and the relative intolerance for data latency. The mode of health-care delivery -- whether it be a wellness coach, a visiting nurse, community outreach, an assisted living facility, hospital discharge monitoring to avoid readmission, population management, or patient-empowered healthcare -- tends to be matched to patient acuity and provider service level agreements. Contact with the patient, and the attendant need to process the clinical findings, will therefore range from occasional (quarterly or monthly) to daily, hourly, or perhaps even more frequent when a sudden decline in health is detected. The duration of RPM deployments could be measured in weeks in the case of hospital discharge monitoring, months in the case of high-risk pregnancies, or years when monitoring elderly patients with co-morbidities.
Data Latency and Modes of Transmission
As discussed earlier, tolerance for data latency is largely determined by the acuity of the target patient population. Relatively healthy patients working with wellness coaches can tolerate high data latency, with summary reports covering periods of up to a few months at a time. High-acuity patients require more frequent monitoring, with data latency approaching near real-time. High data latency can readily be accommodated by scheduled, file-oriented, store-and-forward processes that perform data integration in batch. In contrast, low data latency requires end-to-end integration via Web services every few minutes, where each incremental transmission is closer to a single patient session -- containing the latest raw measurements and assessment responses -- than to a complete trend analysis of the past period. Alerts and notifications can be generated via real-time, event-driven triggers, whereas batch operations and monthly summary reports can be scheduled during off-hour processing.
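The mapping from acuity to transmission mode can be sketched as follows. This is a minimal illustration in Python; the enum names and the acuity tiers are our own, not part of any cited standard:

```python
from enum import Enum

class Acuity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

class TransmissionMode(Enum):
    # Scheduled, file-oriented, store-and-forward batch integration
    BATCH_STORE_AND_FORWARD = "scheduled batch"
    # Incremental per-session push via Web services every few minutes
    NEAR_REAL_TIME = "web-service push"

def select_transmission_mode(acuity: Acuity) -> TransmissionMode:
    """Map patient acuity to a tolerable data-latency strategy."""
    if acuity is Acuity.HIGH:
        return TransmissionMode.NEAR_REAL_TIME
    return TransmissionMode.BATCH_STORE_AND_FORWARD
```

In a real deployment the decision would also weigh the provider's service level agreement, not acuity alone.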
Volume and Quality of Data
Remote patient monitoring generates significantly more data than is typically anticipated for an EHR from a clinical encounter. Patients may be instructed to take vital sign measurements multiple times per day in addition to responding to various assessment questions. One of the chief areas of ongoing investigation is the optimal level of summarization in the PHMR. Different clinicians will likely want a full range of options, from complete daily, monthly, or quarterly detail to a filtered summary, depending upon patient acuity and the mode of health-care delivery.
Once the report design is optimized, additional adaptations may be necessary to the recipient system in order to fully leverage the additional rich data types, process alerts, and triage patients based on clinical findings. In particular, it is unlikely that recipient systems are prepared to work with patient-specific reference ranges, threshold violations, assessment questionnaires, or weighted scores for industry standard protocols. Some systems are unprepared to process datetime stamps on individual measurements, because of the prior exclusive focus on clinical encounters in office settings. In other words, while a given office medical system might record the date of the office visit, it rarely records the time of an individual measurement. While the concept of an RPM "patient session" can be likened to an office visit, the recipient medical system is unprepared to process the sheer volume of RPM sessions and measurements.
As clinical systems and processes evolve to handle data from RPM, more sophisticated methods of patient triage, alerting, and notification will also be required. For example, a large number of normal readings from a moderately sized patient population will quickly outpace the most efficient of care-manager organizations if workflows require manual acknowledgement of all readings rather than triage based on out-of-range measurements. Conversely, if an organization becomes overly dependent upon direct integration of RPM data without also developing adequate systems monitoring, an undetected outage or transmission failure might inadvertently create a false impression that patient readings are within normal limits. It is critical, therefore, to develop adequate systems monitoring and failsafe methods. For example, reports should be annotated with synchronized datetime stamps indicating both the time of report generation and the time of the last data transmission. All points along the end-to-end data flow should be instrumented and monitored for effective operation and patient safety.
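The failsafe described above -- annotating reports with both a generation timestamp and a last-transmission timestamp -- reduces to a simple staleness check. The 24-hour limit below is a hypothetical service-level threshold, chosen only for illustration:

```python
from datetime import datetime, timedelta, timezone

STALENESS_LIMIT = timedelta(hours=24)  # hypothetical service-level threshold

def feed_is_stale(report_generated: datetime, last_transmission: datetime,
                  limit: timedelta = STALENESS_LIMIT) -> bool:
    """True when the gap between report generation and the last received
    transmission exceeds the limit, so a silent outage is surfaced rather
    than misread as 'all patients within normal limits'."""
    return (report_generated - last_transmission) > limit
```

A stale feed should raise an operational alert in its own right, distinct from any clinical alert.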
Threshold violations, especially life-threatening ones, need to trigger specific workflows that are customizable by practice, by co-morbidities, by target populations, and by individual patients. We have identified the need to define tiers of thresholds to separately drive patient and clinician workflows and modes of intervention, ranging from patient education, to clinician referral, to emergency hospital admission. There is also the need to capture both the trigger event and the clinical intervention as part of analytics. For example, a patient's oxygen saturation falls dangerously low, which triggers an alert and results in some form of patient intervention -- whether a phone call, an SMS text message, a video conference with a clinician, or a house call from a visiting nurse. The clinical intervention may result in a change in protocol, a lab order, a medication change, or hospitalization. Each of these events needs to be associated with a standard measure of outcomes in order to support analytics for evidence-based medicine and drive further improvements to healthcare.
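A tiered threshold scheme of this kind might look like the following sketch. The oxygen-saturation cutoffs are hypothetical placeholders, not clinical guidance; in practice they would be patient-specific and clinician-configured:

```python
def triage_spo2(spo2_percent: float) -> str:
    """Map an oxygen saturation reading to one of three intervention tiers.
    Cutoff values are illustrative placeholders only."""
    if spo2_percent < 88:
        return "emergency admission"
    if spo2_percent < 92:
        return "clinician referral"
    if spo2_percent < 95:
        return "patient education"
    return "within normal limits"
```

For analytics, both the trigger event (the reading and the tier it fell into) and the resulting intervention would be recorded together.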
Clinician Workflow and Reimbursement Model
Another important consideration in the integration of RPM information is clinician workflow and the reimbursement model. Some jurisdictions require that a clinician manually review and accept each and every measurement prior to importing the data into the institutional EHR. This follows the standard clinical practice of signing off on external lab results; with RPM, however, the volume of data is fundamentally different.
Some institutions extend the system boundary of the EHR to encompass any automated data capture but draw the line at information that is patient-reported or otherwise manually entered, such as a PHR containing diet and exercise journal entries. Data that are grandfathered in as automatic data capture might not require the manual approval step, whereas patient-reported data may be reviewed but perhaps not incorporated into the institution's legal record. Annotating the data stream with the source and method of reporting helps to account for these differences in policies.
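Annotating each observation with its source and method, as suggested above, might be modeled as follows. The field names and the sign-off policy shown are illustrative assumptions, not part of any cited specification:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    code: str    # measurement identifier, e.g. a LOINC code
    value: float
    unit: str
    source: str  # "device" or "patient"
    method: str  # "automatic-capture" or "manual-entry"

def auto_importable(obs: Observation) -> bool:
    """Illustrative policy: automatically captured device data may bypass
    the manual approval step, whereas manually entered, patient-reported
    data is reviewed but kept out of the institution's legal record."""
    return obs.source == "device" and obs.method == "automatic-capture"
```

Carrying these provenance attributes in the data stream lets each institution apply its own policy at import time.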
Accommodations are required for both summary-level and raw information in a given report. Streamlined mechanisms are required to process messages through the clinical inbox, along with careful consideration of what level of clinical staff might process what level of data on behalf of the doctor, in order to potentially offload this manual step.
Further, the clinical reimbursement model is frequently called into question with respect to RPM. Some reimbursement models attempt to equate RPM with an office visit, while others only reimburse when the patient establishes and maintains good tolerance of pre-established thresholds. Each of these considerations will have an impact on the rate of adoption of RPM, especially when combined with additional processing overhead on the part of the physicians to periodically review the results.
Trans-Border Data Flow Considerations
Careful attention to detail is required for any deployment in which integration is planned across state/provincial, regional, or national boundaries. Privacy and data protection laws are changing rapidly worldwide, with significant penalties for mishandling of data and breach of privacy. Advanced workflow, transformation, and routing engines will be required to comply with local data protection regulations and policies. Special consideration is due when determining where to locate a primary or alternate data center hosting patient data, since a number of countries require that protected health information (PHI) not cross national boundaries. Managing and tracking patient consent, and negotiating appropriate data use, business associate, and data controller/supplier agreements, are essential regardless of whether information crosses any recognized governmental boundary.
Identifying Systems of Record
Identifying a single system of record (i.e., a single authoritative source for each and every data element) is essential to any successful integration project and frequently overlooked in applications such as RPM. It is typical for even small organizations to already have multiple systems in place for purposes of chronic disease management, population management, a primary EHR data repository, a separate system for lab results, etc. The addition of telehealth data likely represents the addition of one or more systems to an already complex and disorderly mix.
A key area of concern is patient and clinician demographics. While clinical data might be cleanly segregated among different systems of record, it is highly likely that every system maintains its own copy of demographic data. Consideration must be given, from both a systems and a workflow perspective, to demographics synchronization, import, and ongoing maintenance. Older systems likely offer no way to disable manual edits to demographics, yet one must ensure that clinicians are always working from an authoritative source of patient and clinician demographics and contact information.
In an advanced integration deployment, the demographics system of record updates the recipient systems with the latest information, including translation of identity to the recipient system, via an entity identity service (EIS). Provisions are made to disable manual edits in the recipient systems, or at least ensure that processing detects, logs as an exception, and overrides any unauthorized changes.
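The detect, log-as-exception, and override behavior might be sketched as follows. This is illustrative only; a production deployment would translate identities through the EIS and route exceptions into an audit trail:

```python
def reconcile_demographics(authoritative: dict, recipient: dict) -> list:
    """Detect unauthorized local edits in a recipient system, log each as
    an exception, and override with the system-of-record value."""
    exceptions = []
    for field, master_value in authoritative.items():
        local_value = recipient.get(field)
        if local_value != master_value:
            exceptions.append((field, local_value, master_value))
            recipient[field] = master_value  # system of record wins
    return exceptions
```

The returned exception list is what an operations team would review to spot systems where manual edits keep recurring.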
In mixed environments with both legacy and newer systems, a complex scheme of automated demographics integration, along with a carefully designed business process, is required. A central system can be configured to synchronize demographics to each of the recipient systems. When manual edits in each system cannot be disabled, they must be controlled via business process, training, and careful oversight to ensure that demographic changes are entered only into the central system. Even so, common identity-mismatch errors typically require a small staff to resolve on an ongoing basis.
As health-care systems integration becomes more complex, encompassing multiple end points and service providers, each with its own independent systems of record, it becomes paramount to employ an industrial-strength EIS to address the identity-matching problem accurately. Industry leaders in EIS leverage advanced probabilistic algorithms that match against multiple demographic attributes to disambiguate identity. The joint OMG/HL7 Healthcare Services Specification Project (HSSP) is working to define standard Web service interfaces for common capabilities such as EIS, so that different commercial services may be deployed without requiring a change to the implemented interface.
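The attribute-weighted matching that such engines perform can be caricatured in a few lines. The weights below are invented for illustration; real EIS products derive them statistically rather than hard-coding them:

```python
# Hypothetical attribute weights; a production EIS would estimate these
# from data rather than fix them by hand.
MATCH_WEIGHTS = {"last_name": 0.30, "first_name": 0.20,
                 "birth_date": 0.35, "postal_code": 0.15}

def match_score(a: dict, b: dict, weights=MATCH_WEIGHTS) -> float:
    """Toy weighted-agreement score over demographic attributes: sum the
    weights of attributes on which both records agree."""
    return sum(w for attr, w in weights.items()
               if a.get(attr) is not None and a.get(attr) == b.get(attr))
```

A score above a tuned threshold would be treated as a match, scores in a gray zone routed to the small staff mentioned above for manual resolution.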
There are a number of deployment considerations when establishing a remote patient-monitoring solution. The AHIC use cases are foundational to defining routine health-care interactions and processes, required data elements, and terminology constraints to standardize the exchange. It is also important to distinguish the areas of local variation and future development, such as those identified in the RPM sequence and interaction diagrams (Figures 1 and 2, respectively). The level of customization for any given solution should be held to something that delivers real value to the customer, both in the short term and in the foreseeable future, yet remains practical enough to keep maintenance low over time. While anything is technically feasible, it is not practical for a business to develop a system that is all things to all people. It is critical to establish up front the key business drivers, including patient acuity, target mode of health-care delivery, and relative tolerance for data latency. From this baseline, customization mechanisms can be established to allow for local variation, leveraging codeless configuration changes to metadata rather than requiring a code recompile and system overhaul for each deployment.
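Codeless, metadata-driven customization might look like the following sketch, in which per-deployment thresholds and report cadence live in configuration data rather than code. The JSON schema shown is our own invention for illustration:

```python
import json

# Hypothetical per-deployment metadata: cadence and thresholds are data,
# so local variation requires a configuration change, not a recompile.
DEPLOYMENT_CONFIG = json.loads("""
{
  "report_cadence_days": 30,
  "thresholds": { "systolic_bp": { "low": 90, "high": 140 } }
}
""")

def in_range(measure: str, value: float, config=DEPLOYMENT_CONFIG) -> bool:
    """Check a measurement against this deployment's configured range."""
    t = config["thresholds"][measure]
    return t["low"] <= value <= t["high"]
```

Swapping in a new deployment is then a matter of shipping a different configuration document, with the engine itself left untouched.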
Remote patient monitoring represents a critical intersection of health-care information integration, embodying the need for health-care informatics standards, careful consideration of workflow and system deployment tradeoffs, and direct engagement with the patients and clinicians who work with the system. The use of HL7 CDA helps to accelerate adoption of health-care standards by properly constraining the rich schema and vocabulary of the RIM, and lowering the barriers to entry through incremental evolution. The use of SOA design principles enables us to respond to changes in business drivers and adapt to the complexity of health-care integration end points. Terminology standards drive computable information, which in turn enables advanced analytics, clinical research, and transformational health-care delivery. We are actively working with the SDOs in pursuit of future enhancements to the existing health-care informatics standards.
 Bass, Christy and Lee, J. Michael. 2002. "Building a Business Case for EAI." eAI Journal, January 2002, pp. 18-20.
 Booz Allen Hamilton. 2005. "Canada Health Infoway. Pan-Canadian Electronic Health Record: Projected Costs and Benefits." March 2005. At www.infoway-inforoute.ca.
 Continua Health Alliance. 2009. "Continua Design Guidelines." V1.0, June 2009. At www.continuaalliance.org.
 Dolin, R.H., Alschuler, L., Boyer, S., Beebe, C., Behlen, F.M., Biron, P.V., and Shabo, A. (Editors). 2005. "HL7 Clinical Document Architecture, Release 2.0. ANSI-approved HL7 Standard." May 2005. Ann Arbor, Michigan: Health Level Seven, Inc. At www.hl7.org.
 Healthcare Information Technology Standards Panel (HITSP). 2008. "Consultations and Transfers of Care Interoperability Specification." HITSP/IS09. Released for Implementation. 20081218 V1.0. At www.hitsp.org.
 Healthcare Information Technology Standards Panel (HITSP). 2008. "Remote Monitoring Interoperability Specification." HITSP/IS77. Released for Implementation. 20081218 V1.0. At www.hitsp.org.
 Healthcare Information Technology Standards Panel (HITSP). 2009. "Remote Monitoring Observation Document Component." HITSP/C74. Released for Implementation. 20090708 V1.1. At www.hitsp.org.
 Health Level Seven (HL7). 2009. "Implementation Guide for CDA Release 2: CDA Framework for Questionnaire Assessments (Universal Realm) Draft Standard for Trial Use (DSTU) Release 1.0." April 2009. At www.hl7.org.
 Health Level Seven (HL7). 2007. "Implementation Guide: CDA Release 2 – Continuity of Care Document (CCD)." April 01, 2007. At www.hl7.org.
 Health Level Seven (HL7). 2009. "Implementation Guide for CDA Release 2.0 Personal Health Monitoring Report (PHMR) Draft Standard for Trial Use (DSTU) Release 1." At www.hl7.org.
 OMG SOA Consortium, CIO Magazine. 2008. "OMG SOA Consortium and CIO Magazine Announce Winners of SOA Case Study Competition." At www.soa-consortium.org.
 Schindler, Esther. 2008. CIO Magazine. "Service-Oriented Architecture Pays Off for Synovus Financial." September 30, 2008. At www.cio.com.
 U.S. Dept. of Health and Human Services (HHS), Office of the National Coordinator for Health Information Technology (ONC). 2008. "Long Term Care–Assessments. AHIC Extension/Gap." December 31, 2008. At www.hitsp.org.
Special thanks to Dr. Robert H. Dolin, co-chair of the HL7 Structured Documents Committee, who assisted us in developing our initial CDA models.
This article and more on similar subjects may be found in the Intel Technology Journal, September 2009 Edition, "Enabling Healthcare in the Home" (http://intel.com/technology/itj).