Back in 2010, Gary Wolf, then the editor of Wired magazine, delivered a TED talk in Cannes called “the quantified self.” It was about what he termed a “new fad” among tech enthusiasts. These early adopters were using gadgets to monitor everything from their physiological data to their mood and even the number of nappies their children used.
Wolf acknowledged that these people were outliers—tech geeks fascinated by data—but their behavior has since permeated mainstream culture.
From the smartwatches that track our steps and heart rate, to the fitness bands that log sleep patterns and calories burned, these gadgets are now ubiquitous. Their popularity is emblematic of a modern obsession with quantification—the idea that if something isn’t logged, it doesn’t count.
At least half the people in any given room are likely wearing a device, such as a fitness tracker, that quantifies some aspect of their lives. Wearables are being adopted at a pace reminiscent of the mobile phone boom of the late 2000s.
However, the quantified self movement still grapples with an important question: Can wearable devices truly measure what they claim to?
Along with my colleagues Maximus Baldwin, Alison Keogh, Brian Caulfield, and Rob Argent, I recently published an umbrella review (a systematic review of systematic reviews) examining the scientific literature on whether consumer wearable devices can accurately measure metrics like heart rate, aerobic capacity, energy expenditure, sleep, and step count.
At a surface level, our results were quite positive. Wearable devices can measure heart rate with an error of plus or minus 3 percent, depending on factors like skin tone, exercise intensity, and activity type. They can also accurately measure heart rate variability, and they show good sensitivity and specificity for detecting arrhythmia, an irregularity in the rate or rhythm of a person's heartbeat.
Additionally, they can accurately estimate what's known as cardiorespiratory fitness, which is how well the circulatory and respiratory systems supply oxygen to the muscles during physical activity. This is quantified by VO2 max, a measure of the maximum rate at which your body can use oxygen during exercise.
Wearables estimate this more accurately when the prediction is generated during exercise rather than at rest. In the realm of physical activity, wearables generally underestimate step counts, by about 9 percent.
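To make error figures like these concrete, here is a minimal Python sketch of how a signed percentage-error comparison of this kind is typically computed. The heart-rate and step-count values are invented for illustration; they do not come from any particular device or from our review.

```python
# Illustrative only: the numbers below are invented, not measurements
# from any specific wearable or from the studies in our review.

def percent_error(device_value: float, reference_value: float) -> float:
    """Signed percentage error of a device reading against a reference
    (e.g., an ECG chest strap for heart rate, a manual tally for steps)."""
    return 100.0 * (device_value - reference_value) / reference_value

# Hypothetical heart-rate readings (beats per minute) during a workout.
watch_hr, ecg_hr = 148, 152
print(f"Heart-rate error: {percent_error(watch_hr, ecg_hr):+.1f}%")    # about -2.6%

# Hypothetical step counts over a day: wearables tend to undercount.
watch_steps, counted_steps = 9100, 10000
print(f"Step-count error: {percent_error(watch_steps, counted_steps):+.1f}%")  # -9.0%
```

Validation studies aggregate errors like these across many participants and activities, which is why a single headline figure such as "plus or minus 3 percent" always conceals considerable variation.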
Challenging endeavor
However, discrepancies were larger for energy expenditure (the number of calories burned during exercise), with error margins ranging from minus 21.27 percent to plus 14.76 percent, depending on the device used and the activity undertaken.
Results weren’t much better for sleep. Wearables tend to overestimate total sleep time and sleep efficiency, typically by more than 10 percent. They also tend to underestimate sleep onset latency (how long it takes to fall asleep) and wakefulness after sleep onset. Errors ranged from 12 percent to 180 percent compared with polysomnography, the gold-standard measurement used in sleep studies.
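For readers unfamiliar with these sleep metrics, the sketch below shows how they are defined and how a wearable's estimates might be compared with polysomnography for a single night. All values are invented for illustration and are not drawn from any particular device or study.

```python
# Illustrative only: invented numbers, not data from our review.
# TST  = total sleep time, SOL = sleep onset latency,
# WASO = wakefulness after sleep onset (all in minutes).

def percent_error(estimate: float, reference: float) -> float:
    return 100.0 * (estimate - reference) / reference

time_in_bed = 480  # minutes spent in bed

# Hypothetical polysomnography (reference) values for one night.
psg = {"TST": 400, "SOL": 25, "WASO": 55}

# Hypothetical wearable estimates: TST overestimated, SOL and WASO underestimated.
watch = {"TST": 445, "SOL": 15, "WASO": 20}

for metric in ("TST", "SOL", "WASO"):
    print(f"{metric}: {percent_error(watch[metric], psg[metric]):+.1f}% error")

# Sleep efficiency is total sleep time as a share of time in bed.
for label, data in (("PSG", psg), ("Watch", watch)):
    print(f"{label} sleep efficiency: {100 * data['TST'] / time_in_bed:.1f}%")
```

In this made-up example, the watch overstates both total sleep time and sleep efficiency while understating how long the wearer took to fall asleep and how long they lay awake afterward, which is the pattern the reviewed studies typically reported.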
The upshot is that, despite the promising capabilities of wearables, we found conducting and synthesizing research in this field to be very challenging. One hurdle we encountered was the inconsistent methodologies employed by different research groups when validating a given device.
This lack of standardization leads to conflicting results and makes it difficult to draw definitive conclusions about a device’s accuracy. A classic example from our research: one study might assess heart rate accuracy during high-intensity interval training, while another focuses on sedentary activities, leading to discrepancies that can’t be easily reconciled.
Other issues include varying sample sizes, participant demographics, and experimental conditions—all of which add layers of complexity to the interpretation of our findings.
What does it mean for me?
Perhaps most importantly, the rapid pace at which new wearable devices are released exacerbates these issues. With most companies following a yearly release cycle, we and other researchers find it challenging to keep up. The timeline for planning a study, obtaining ethical approval, recruiting and testing participants, analyzing results, and publishing can often exceed 12 months.
By the time a study is published, the device under investigation is likely already obsolete, replaced by a newer model with potentially different specifications and performance characteristics. This is borne out by our finding that less than 5 percent of the consumer wearables released to date have been validated for the range of physiological signals they purport to measure.
What do our results mean for you? As wearable technologies continue to permeate various facets of health and lifestyle, it is important to approach manufacturers’ claims with a healthy dose of skepticism. Gaps in research, inconsistent methodologies, and the rapid pace of new device releases underscore the need for a more formalized and standardized approach to the validation of devices.
The goal would be to foster collaboration among formal certification bodies, academic research consortia, popular media influencers, and industry, so that the evaluation of wearable technology gains both depth and reach.
Efforts are already underway to establish a collaborative network that can foster a richer, multifaceted dialogue that resonates with a broad spectrum of stakeholders—ensuring that wearables are not just innovative gadgets but reliable tools for health and wellness.
Cailbhe Doherty, assistant professor in the School of Public Health, Physiotherapy and Sports Science, University College Dublin. This article is republished from The Conversation under a Creative Commons license. Read the original article.