Michael McGrath is a Senior Technologist in Intel's Digital Health Group. He is also a co-Principal Investigator in the TRIL Centre, focusing on the development of technologies to support independent living research. Michael McGrath joined Intel in 1999. Terrance (Terry) J. Dishongh, Ph.D., is currently a Senior Principal Engineer in Intel's Digital Health Group. The authors can be contacted at [email protected] and [email protected], respectively. Courtesy Intel Corporation. All rights reserved.
Biomedical and clinical research and development projects have a growing need for technology solutions that are highly flexible, extensible, easy to use, and provide a comprehensive range of capabilities. The availability of such resources allows researchers to focus on the "R" instead of the "D" in "R&D" during the lifecycle of a research project. The technology development overhead that many biomedical researchers must address can detrimentally impact the pace and scope of their research projects. This problem has been compounded in recent years by the growing interest in moving from the laboratory to data collection and observation in home environments. A variety of applications have been reported in the literature, including physiological monitoring [1-3], physical rehabilitation, activities of daily living (ADL) monitoring [5, 6], falls detection [7, 8], cognitive function, and social engagement [10, 11]. These applications typically comprise hardware and software components interconnected to form customized applications that address the needs of a specific research hypothesis.
Many biomedical applications utilize wireless sensing capabilities, in the form of body-worn or non-contact sensing, to acquire data of interest and transmit them to an aggregation device. Fusion of body-worn and ambient sensors has been reported to improve the accuracy of activity and behavior inference [12, 13]. Typically, multiple parameters are of interest, including physiological [14, 15], kinematic [16, 17], ambient [18, 19], and environmental measurements.
To address the need for multisensing capabilities while minimizing the complexity of the hardware and software components, platform-based approaches have emerged in which the sensing capability can be modified by changing the sensing element, normally in the form of a daughterboard that connects to a common baseboard providing the computational and communications capabilities. Chen et al. report a sensor node platform for wireless biomedical sensing. They utilize a flexible expansion connector that supports additional daughterboards, including ECG, with TinyOS firmware. Dubois-Ferrière et al. describe the TinyNode, which features two types of add-on boards. The Standard Extension Board (SEB) includes footprints for two optional sensors: a relative humidity and temperature sensor and a photodiode light sensor. The MamaBoard features a variety of external communications options, including LAN, WLAN, and GPRS, with support for data storage via an SD card. The platform also features an XE1205 radio from Semtech with a reported four- to eight-fold improvement in communications range over the MICA2 and TelosB sensor nodes. Nokia has reported a wearable sensor platform called the Nokia Wrist-Attached Sensor Platform* (NWSP) that is based on a highly flexible Field Programmable Gate Array (FPGA). The platform features accelerometer, gyroscope, and magnetometer sensing capabilities in a wristwatch-like form factor.
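The baseboard/daughterboard split can be illustrated with a short sketch. This is not the API of any of the platforms cited above; the class and method names, and the placeholder sensor readings, are purely illustrative of the idea that a common compute/communications board can be repurposed by swapping the sensing element.

```python
# Illustrative model of the platform approach: one baseboard supplies
# compute and communications, while interchangeable daughterboards
# supply the sensing element. All names and readings are hypothetical.

class Daughterboard:
    """Base class for a swappable sensing element."""
    name = "generic"

    def sample(self):
        raise NotImplementedError


class ECGBoard(Daughterboard):
    name = "ecg"

    def sample(self):
        return {"ecg_mv": 1.02}  # placeholder ECG reading


class HumidityBoard(Daughterboard):
    name = "humidity"

    def sample(self):
        return {"rh_percent": 48.5}  # placeholder humidity reading


class Baseboard:
    """Common compute/radio board; sensing changes by swapping the board."""

    def __init__(self):
        self.daughterboard = None

    def attach(self, board):
        self.daughterboard = board

    def acquire(self):
        # Tag each reading with the sensing element that produced it.
        reading = self.daughterboard.sample()
        return {"sensor": self.daughterboard.name, **reading}


node = Baseboard()
node.attach(ECGBoard())
first = node.acquire()        # ECG reading from the same node...
node.attach(HumidityBoard())  # ...repurposed by swapping the daughterboard
second = node.acquire()
```

The point of the sketch is that `Baseboard` never changes: the research application is retargeted to a new measurement simply by attaching a different sensing element.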
For a body-worn sensor platform, size is a key consideration. IMEC's Human++ research program has developed a highly miniaturized and autonomous sensor system for body sensor network applications. The platform combines ultra-low-power wireless communications, 3D integration, MEMS energy-scavenging techniques, and low-power design techniques. Two-channel electroencephalography (EEG), two-channel electrooculography (EOG), and one-channel electromyography (EMG) for sleep-monitoring applications, as well as wireless ECG monitoring, have been demonstrated.
From a software perspective, platforms for wireless sensor networks have been reported in the literature [14, 25, 26]. Walker et al. describe a platform based on Java and the Java Agent DEvelopment framework (JADE); it is designed to provide sensor network integration and application development capabilities. Other Java platforms include SunSPOT from Sun Microsystems.
The Arduino open-source electronics prototyping platform has become popular, especially for undergraduate teaching purposes. A number of research applications have been reported, including e-Textile applications based on integrated accelerometers for human motion capture, combined with real-time tactile feedback in response to specific postures [29, 30].
A number of options exist for high-level application development, each with associated benefits and difficulties. Typically there is a choice between traditional text-based programming languages such as C/C++, Java, and Python; bespoke SDKs, e.g., Eclipse; and graphical development environments (GDEs). Both traditional programming and, to a lesser extent, the SDK approach are complex and time-consuming. Firstly, they often result in applications that are harder to debug, integrate, and evolve. Secondly, they restrict researcher access to the development process. Graphical programming approaches can deliver a common interface to the sensor/network with which all users can interact. These interfaces generally take the form of drag-and-drop development environments in which functional blocks are selected from a palette of tools and connected on a drawing area to describe the flow of data through a system or application. The environment can cater for users with widely varying levels of technical expertise by enabling design and development work to be carried out at several levels of abstraction. Using a GDE, a developer can build a working system graphically by dragging and dropping blocks onto a diagram resembling a data-flow diagram. Typically, each block features a number of input and output pins, with connections made by clicking and dragging from a pin on one block to a suitable pin on another.
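The block-and-pin dataflow model underlying such environments can be sketched in a few lines of code. This is a minimal illustration of the concept, not the internals of any particular GDE; all class, pin, and block names are hypothetical.

```python
# Minimal sketch of a GDE-style dataflow model: blocks expose named
# input/output pins, and "wires" connect an output pin on one block to
# an input pin on another. Data emitted by a source propagates along
# the wires, just as in the drag-and-drop diagrams described above.

class Block:
    def __init__(self, name, inputs=(), outputs=()):
        self.name = name
        self.inputs = {pin: None for pin in inputs}  # pin -> latest value
        self.outputs = list(outputs)
        self.wires = []  # list of (out_pin, target_block, in_pin)

    def connect(self, out_pin, target, in_pin):
        """Wire one of this block's output pins to another block's input pin."""
        self.wires.append((out_pin, target, in_pin))

    def emit(self, out_pin, value):
        """Push a value downstream along every wire attached to out_pin."""
        for pin, target, in_pin in self.wires:
            if pin == out_pin:
                target.receive(in_pin, value)

    def receive(self, in_pin, value):
        self.inputs[in_pin] = value
        self.process()

    def process(self):
        pass  # subclasses transform inputs into emitted outputs


class Scale(Block):
    """Example processing block: multiplies its input by a constant."""

    def __init__(self, factor):
        super().__init__("scale", inputs=("in",), outputs=("out",))
        self.factor = factor

    def process(self):
        self.emit("out", self.inputs["in"] * self.factor)


class Display(Block):
    """Example sink block: records whatever arrives on its input pin."""

    def __init__(self):
        super().__init__("display", inputs=("in",))
        self.seen = []

    def receive(self, in_pin, value):
        self.seen.append(value)


# "Drag and drop" three blocks, then wire them: sensor -> scale -> display.
source = Block("sensor", outputs=("out",))
scale = Scale(2.0)
display = Display()
source.connect("out", scale, "in")
scale.connect("out", display, "in")

source.emit("out", 3.0)  # one sample flows through the graph
print(display.seen)      # [6.0]
```

A real GDE adds the visual editor, type checking on pin connections, and scheduling, but the essential abstraction is this graph of blocks exchanging values along explicit wires.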
LabVIEW from National Instruments has been popular for many years, particularly for applications that require data acquisition via an analog-to-digital converter (ADC) or device actuation via a digital-to-analog converter (DAC). LabVIEW provides a GDE in which functional blocks are dragged, dropped, and wired together to form a working application. It has been used for a variety of biomedical applications, including interfacing with pulse oximeters, ECG signal analysis, tri-axial accelerometers and galvanic skin response (GSR), and gyroscopes for gait analysis. Other graphical programming environments developed for biomedical applications include Scicos, which can be utilized for signal processing, systems control, and studying physical and biological systems. The BioEra environment provides a graphical block-based environment that can process dozens of channels simultaneously in real time from various devices, including EEG and heart rate monitors. BrainBay is a bio- and neuro-feedback environment, again based on a block-type development environment, designed to work with the OpenEEG hardware platform. Simulink from MathWorks provides an interactive graphical environment that has been applied to a variety of sensor simulation applications [38, 39].
To address the issues associated with technology development, such as rapid prototyping, reuse, extensibility, and distribution, the TRIL Centre has developed a modular, extensible, and reusable technology approach based on an open research platform concept called BioMOBIUS. The TRIL Centre is a collaboration between Intel Ireland, University College Dublin, Trinity College Dublin, and the National University of Ireland, Galway, with support from IDA Ireland. TRIL grounds its research in ethnography and clinical efficacy, uses a common set of research tools based on BioMOBIUS, and takes the research out of the lab and into the homes of older people. BioMOBIUS enables TRIL researchers to carry out their research effectively and efficiently and to share their research solutions in a logistically effective manner. A modular, abstracted approach reduces the learning curve and promotes rapid application prototyping and reuse, thereby lowering development costs and reducing the barriers to technology adoption. The software components of BioMOBIUS are made freely available to the biomedical research community by the TRIL Centre.
The BioMOBIUS research platform, comprising both low-level and high-level software development environments, is fully integrated with the SHIMMER sensor platform [41, 42]. ("SHIMMER" is short for "Sensing Health with Intelligence, Modularity, Mobility and Experimental Reusability".) BioMOBIUS also supports a number of other third-party sensors and off-the-shelf devices. This integration of software and hardware provides a unique set of capabilities that enables developers, engineers, and researchers to rapidly develop applications to investigate a variety of research hypotheses and to rapidly modify those applications based on the evolving needs of the researcher and end user. The key features of the platform that enable these capabilities are as follows:
- Ease of hardware integration.
- Software component reuse.
- High extensibility from both hardware and software perspectives.
- High-quality user interfaces (UIs).
- Data acquisition rates appropriate for kinematic and physiological data capture.
- Real-time data processing and data presentation.
- Data persistence to file and database.
- Stability and reliability.
A typical BioMOBIUS application includes SHIMMER sensors, processing functionality, and a UI. The sensors monitor biomedical indicators such as gait stability, and the processing functionality converts the sensor data into meaningful information. The UI allows clinicians to view the information and adjust application settings, or it allows the home-based participant to view feedback on performance, e.g., attention level measurement via GSR.
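The three-stage structure just described can be sketched as a small pipeline. This is not BioMOBIUS or SHIMMER code; the function names, the hard-coded sample values, and the "attention level" label are illustrative stand-ins for the sensor, processing, and UI stages.

```python
# Illustrative sketch of the sensor -> processing -> UI structure of a
# typical application. Sample values and names are hypothetical.

def sensor_stream():
    """Stand-in for a body-worn sensor: yields raw GSR-like samples."""
    for raw in [0.41, 0.44, 0.52, 0.61, 0.58]:
        yield raw


def moving_average(samples, window=3):
    """Processing stage: smooth raw samples into a stable indicator."""
    buf = []
    for s in samples:
        buf.append(s)
        if len(buf) > window:
            buf.pop(0)
        yield sum(buf) / len(buf)


def render(values):
    """UI stand-in: format each processed value for display or feedback."""
    return [f"attention level: {v:.2f}" for v in values]


# Wire the three stages together, mirroring the application structure:
# sensors produce data, processing converts it into meaningful
# information, and the UI presents it to clinician or participant.
feedback = render(moving_average(sensor_stream()))
for line in feedback:
    print(line)
```

In a real application each stage would be a reusable platform component: the sensor stage an actual wireless device, the processing stage a chain of signal-processing blocks, and the UI a clinician- or participant-facing display.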