System architecture + anthropology = Better sensor algorithms

Mobile devices use sensors to measure more than µT of magnetic field and m/s² of acceleration. Sensor data also reveal user activities, postures, environments, and even attention. Sensors are not merely metrological instruments linking sensor algorithms to hardware. To enable user context awareness, advanced sensor algorithms must be well matched to mobile system architectures on the one hand and informed by how users actually behave on the other.

August 7, 2012 — In 2007, Apple rolled out the iPhone and started a revolution in smart mobile devices. The original iPhone included an accelerometer to sense how a user is holding the device, and orient the image on the display in landscape or portrait accordingly. Today, smartphones from all makers include one or more inertial motion sensors (accelerometers, magnetometers, and gyroscopes). However, application developers and system designers are just beginning to take advantage of their sensing capabilities, including combining sensor inputs — sensor fusion — with advanced algorithms.

Early sensor applications: Motion interfaces

To date, many sensor applications track user gestures and use the results as another input to the user interface. This lets users change screen orientation by rotating the device, erase an email by shaking the phone, or send an incoming call to voicemail with a double tap, for example.

The most significant advancement enabled by a gesture-based user interface so far allows users to navigate available applications by tilting their phones to step through menu selections. This ability, coupled with advances in image processing and speech synthesis, now allows vision-impaired users to browse supermarket aisles using their smartphones [1]. For the average smartphone user, besides controlling screen orientation, motion interfaces have largely gone unappreciated and unnoticed.

Figure. A model for sensor algorithms. SOURCE: Sensor Platforms.

User context

Introducing any new user interface requires the user to learn a new set of behaviors. For the vision-impaired, learning and adopting motion interfaces for their smartphones opens new possibilities [2]. For average users, however, using gestures to control their devices is at best a passing novelty, since they do not see enough benefits to justify learning something new. To be truly successful with the general public, a new generation of smart devices must adapt to their users and not demand that users adapt to them. This takes a combination of sensors, intelligent algorithms, and mobile computing resources.

Sensors in mobile devices capture a lot more than gross user movements, like gestures. Accelerometers and gyroscopes in smartphones record muscle tremors and biomechanical resonances from their users. Magnetometers detect magnetic field emissions from nearby power lines and engines. Such information is generally discarded in motion interfaces, but it does contain user context; that is, information about the user that can improve interaction.

For example, muscle tension and resonance can identify when and how a user is holding the device. When the algorithms determine that the user is holding the phone at his or her side, the smartphone can turn off the display backlight because no one is looking at the screen. Conversely, sensor signals can indicate when the user is reading the display, and so keep the backlight on.
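To make the idea concrete, here is a minimal sketch assuming only raw accelerometer samples: the variance of the acceleration magnitude stands in for hand tremor, and the tilt of the gravity vector stands in for holding posture. The feature choices and thresholds are illustrative assumptions, not a description of any production algorithm.

```python
# Illustrative only: a toy heuristic for deciding whether the phone is being
# viewed or is hanging at the user's side, from accelerometer data alone.
import math

GRAVITY = 9.81  # m/s^2


def is_probably_viewing(accel_samples):
    """accel_samples: list of (x, y, z) accelerometer readings in m/s^2."""
    n = len(accel_samples)
    mean = [sum(s[i] for s in accel_samples) / n for i in range(3)]

    # Tremor proxy: variance of the acceleration magnitude around gravity.
    var = sum((math.sqrt(x * x + y * y + z * z) - GRAVITY) ** 2
              for x, y, z in accel_samples) / n

    # Tilt: angle between measured gravity (mean acceleration) and the
    # screen normal (device z axis).
    mag = math.sqrt(sum(c * c for c in mean)) or 1.0
    tilt_deg = math.degrees(math.acos(max(-1.0, min(1.0, mean[2] / mag))))

    held_toward_face = 20.0 < tilt_deg < 80.0  # screen tilted toward the user
    hand_tremor = 0.01 < var < 0.5             # held in a hand, not swinging at the side
    return held_toward_face and hand_tremor
```

A backlight policy would simply call this on each recent window of samples and dim the display when it returns False.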

Similarly, motion dynamics can identify whether a user is standing, sitting, walking, or running [3], and so control functions such as how often to refresh GPS or WiFi fixes; unless, that is, subtle signatures in the magnetic field suggest the device is in a vehicle that may be about to move. Detecting these characteristics requires more than just clean metrological measurements.
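As a rough sketch of how such a context decision could drive power management, the mapping below picks a GPS refresh interval from a detected context. The context labels and interval values are assumptions for illustration only.

```python
# Illustrative only: map a detected user context to a GPS refresh interval.
GPS_REFRESH_SECONDS = {
    "stationary": 600,  # sitting or standing still: refresh rarely
    "walking":     60,
    "running":     30,
    "in_vehicle":  10,  # position changes quickly: refresh often
}


def gps_refresh_interval(context: str) -> int:
    """Return how many seconds to wait between GPS fixes for a given context."""
    return GPS_REFRESH_SECONDS.get(context, 60)  # default for unknown contexts
```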

To derive user contexts, algorithm developers first collect data containing the specific context, and then create a set of algorithms to recognize it reliably. The data are best collected from subjects who are acting naturally and unaware of the context of interest. Some algorithms can develop an understanding of a user on a personal level, and thus improve reliability by catering to the user’s unique characteristics.
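That workflow might look like the following sketch: extract simple features from labeled accelerometer windows collected from naturally behaving subjects, then train an off-the-shelf classifier. The features and the decision-tree model are assumptions chosen for illustration, not any particular product's method.

```python
# Illustrative sketch of the collect-then-train workflow described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def window_features(accel_window):
    """accel_window: (N, 3) array of accelerometer samples for one time window."""
    mag = np.linalg.norm(accel_window, axis=1)
    return [mag.mean(), mag.std(), mag.max() - mag.min()]


def train_context_classifier(labeled_windows):
    """labeled_windows: list of (accel_window, label) pairs collected from
    subjects acting naturally, as discussed in the text."""
    X = np.array([window_features(w) for w, _ in labeled_windows])
    y = np.array([label for _, label in labeled_windows])
    clf = DecisionTreeClassifier(max_depth=5)
    return clf.fit(X, y)


# Later, on the device:
# context = classifier.predict([window_features(latest_window)])[0]
```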

In this article, I use the term “anthropology” as a broad umbrella to include studies of the characteristics of human physical traits, human behavior, and the variations among humans. These inputs are critical today: in designing the appropriate settings to collect the algorithm training information; for determining if the data collected are sufficiently diverse for the algorithm to work for an average smartphone user; and in understanding which part of the algorithm could benefit from user-specific adaptation.

Low-power system architecture

Besides sensors and intelligent algorithms, designers must consider mobile computing resources, which are always limited by battery life. Context-detection algorithms monitor user activities by running continuously in the background, creating a nonstop demand for power whether the user is interacting with the phone or not.

Of course, cell phone designers are familiar with circuits that must remain continuously active. A phone has to be in constant connection with the cellular network to receive calls and text messages. Over many phone generations, designers have focused on minimizing the standby current, the electricity consumed by cellular connectivity when the phone is otherwise completely idle. To do this, the standby mode of a cell phone consists of repeated cycles of sleep and wakeup. The phone wakes up to check for the presence of a call or text message. In the absence of either, the phone re-enters sleep mode. Designers also work to make the hardware that performs this periodic check as efficient as possible.
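Written out, the sleep/wake pattern is a simple duty cycle; the callbacks below are hypothetical placeholders for platform services, not real APIs, and the wake period is illustrative.

```python
# Illustrative duty cycle mirroring the standby pattern described above:
# wake briefly, check for pending work, then spend most of the time asleep.
import time

WAKE_PERIOD_S = 1.28  # illustrative paging-style wake period


def standby_loop(check_for_events, handle_event, sleep=time.sleep):
    while True:
        event = check_for_events()        # short, cheap check while awake
        if event is not None:
            handle_event(event)           # e.g., incoming call or text
        sleep(WAKE_PERIOD_S)              # re-enter low-power sleep
```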

The same considerations apply to sensor algorithm design. Algorithms should be power-aware and adjust their processing requirements based on the amount of meaningful information contained in each sample. For example, the gyroscope used to track the angular rate of device motion requires significantly more power than the accelerometer and the magnetometer combined. The magnetometer and the accelerometer together form an electronic compass, which measures the angular position of the device. Because angular rate is the first derivative of angular position, an intelligent algorithm can recognize that, when the device is turning slowly in a uniform magnetic field, it can derive angular rate from a high-bandwidth electronic compass acting as a virtual gyroscope. Doing so avoids the higher power use of the gyroscope, as well as the computation needed to process gyroscope samples. As the rotation rate approaches the limit of the electronic compass's tracking ability, the algorithm can switch on the gyroscope and transition seamlessly to its angular rate measurement.
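A minimal sketch of that handoff logic follows, assuming a compass that reports heading in radians and a gyroscope object that can be enabled and disabled; the handoff threshold and the helper interfaces are illustrative assumptions.

```python
# Illustrative sketch of the "virtual gyroscope" idea: at low rotation rates,
# differentiate the compass heading to estimate angular rate; above a
# threshold, switch on the real gyroscope. Interfaces are hypothetical.
import math


class VirtualGyro:
    HANDOFF_RATE = math.radians(60)  # illustrative compass tracking limit, rad/s

    def __init__(self, read_compass_heading, gyro):
        self._heading = read_compass_heading  # returns heading in radians
        self._gyro = gyro                     # has .enable(), .disable(), .rate()
        self._last = None

    def angular_rate(self, dt):
        """Return angular rate (rad/s) over the last dt seconds."""
        h = self._heading()
        if self._last is None:
            self._last = h
            return 0.0
        # Differentiate heading, unwrapping across the +/- pi boundary.
        dh = math.atan2(math.sin(h - self._last), math.cos(h - self._last))
        self._last = h
        rate = dh / dt

        if abs(rate) < self.HANDOFF_RATE:
            self._gyro.disable()   # compass alone suffices: save gyroscope power
            return rate
        self._gyro.enable()        # fast rotation: hand off to the real gyroscope
        return self._gyro.rate()
```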

Sensor hardware agnosticism

Sensor component manufacturers have argued that the best-performing sensor algorithms need to be customized to the proprietary characteristics of each sensor component [4]. Such arguments treat mobile sensing applications as mere measurement instruments, and thus ignore the impact that system design, target use cases, and user variances can have on the performance and usefulness of sensor algorithms.

While targeted optimization is possible with any algorithm, its impact falls far short of the higher-level architectural concerns discussed here. Given the nature of sensor physics, no single sensor manufacturer can offer a breadth of products that satisfies every price/performance objective across a device maker's entire product portfolio. Rather than catering to specific component configurations, good sensor algorithms must be derived from sound usage data, be architected for low power, and work with a wide selection of sensor components to meet a device manufacturer's requirements.

Conclusion

Applications for sensors in mobile devices are still evolving. Instead of treating sensors like a set of measuring instruments, new context-aware devices are using sensor information to learn about their users and adapt to improve interactions. Sensor algorithms for these devices must be founded on power-conscious architecture, and a sound understanding of the behavior of target users.

References

1. Vladimir Kulyukin, “Toward Comprehensive Smartphone Shopping Solutions for Blind and Visually Impaired Individuals,” Computer Science Assistive Technology Laboratory, Department of Computer Science, Utah State University, Logan, UT, Rehab and Community Care Magazine, 2010.

2. H. Shen and J. Coughlan, “Towards A Real-Time System for Finding and Reading Signs for Visually Impaired Users,” 13th International Conference on Computers Helping People with Special Needs (ICCHP ’12), Linz, Austria, July 2012.

3. James Steele, “Understanding Virtual Sensors: From Sensor Fusion to Context-Aware Applications,” Electronic Design Magazine, July 10, 2012, http://electronicdesign.com/article/embedded/understanding-virtual-sensors-sensor-fusion-contextaware-applications-74157.

4. As discussed in “You make MEMS. Should you make sensor fusion software?” Meredith Courtemanche, blog entry, Solid State Technology Magazine, May 25, 2012, www.electroiq.com/blogs/electroiq_blog/2012/05/you-make-mems-should-you-make-sensor-fusion-software.html.

Ian Chen is executive vice president at Sensor Platforms Inc. Contact him at [email protected].
