Recognising the value users place on understanding their sleep stages, Google has extended Nest Hub’s Soli-based sleep-wake algorithms to also distinguish between light, deep, and REM (rapid eye movement) sleep.
The gold standard for identifying sleep stages is polysomnography (PSG), which employs an array of wearable sensors to monitor a number of body functions during sleep, such as brain activity, heartbeat, respiration, eye movement, and motion. Trained sleep technologists can then interpret these signals to determine sleep stages.
To help people understand their sleep patterns, Nest Hub displays a hypnogram, a plot of the user’s sleep stages over the course of a sleep session. In addition, the sleep timeline now includes an “Other sounds” category, separating the user’s own coughs and snores from sound disturbances detected from sources in the room outside the calibrated sleeping area.
Image: Google AI
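To illustrate what a hypnogram encodes, here is a minimal plotting sketch in Python. The stage sequence, epoch length, and level ordering are illustrative assumptions, not Nest Hub’s actual output format.

```python
import matplotlib.pyplot as plt

# Hypothetical staged output: one predicted label per 30-second epoch.
stages = ["wake", "light", "deep", "light", "rem", "light", "wake"]
levels = {"wake": 3, "rem": 2, "light": 1, "deep": 0}  # conventional ordering

times = range(len(stages))
plt.step(times, [levels[s] for s in stages], where="post")
plt.yticks(list(levels.values()), list(levels.keys()))
plt.xlabel("Epoch (30 s each)")
plt.ylabel("Sleep stage")
plt.title("Hypnogram: sleep stages over a session")
plt.show()
```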
The team employed a design that is generally similar to Nest Hub’s original sleep detection algorithm: sliding windows of raw radar samples are processed to produce spectrogram features, and these are continuously fed into a TensorFlow Lite model. The key difference is that this new model was trained to predict sleep stages rather than simple sleep-wake status and thus required new data and a more sophisticated training process.
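As a rough sketch of such a pipeline, the snippet below computes log-magnitude spectrogram frames from sliding windows of raw samples and feeds them to a TensorFlow Lite interpreter. The window sizes, feature shapes, and model file name are assumptions for illustration; Google has not published the production parameters or model.

```python
import numpy as np
import tensorflow as tf

WINDOW_SIZE = 256   # radar samples per FFT window (assumed value)
HOP_SIZE = 128      # stride between successive windows (assumed value)

def spectrogram_features(radar_samples):
    """Sliding-window FFT over raw samples -> log-magnitude spectrogram frames."""
    frames = []
    for start in range(0, len(radar_samples) - WINDOW_SIZE + 1, HOP_SIZE):
        window = radar_samples[start:start + WINDOW_SIZE] * np.hanning(WINDOW_SIZE)
        frames.append(np.log1p(np.abs(np.fft.rfft(window))))
    return np.stack(frames).astype(np.float32)

# "sleep_stage_model.tflite" is a placeholder; the production model is not public.
interpreter = tf.lite.Interpreter(model_path="sleep_stage_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

features = spectrogram_features(np.random.randn(60 * 1024))  # stand-in radar stream
features = np.resize(features, inp["shape"])  # crude reshape to the model's input
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
print("Stage probabilities (e.g. wake/light/deep/REM):",
      interpreter.get_tensor(out["index"]))
```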
“To develop our model, we used publicly available data from the Sleep Heart Health Study (SHHS), and Multi-ethnic Study of Atherosclerosis (MESA) studies with over 10,000 sessions of raw PSG sensor data with corresponding sleep staging ground-truth labels, from the National Sleep Research Resource,” Google explained in the blog post.
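A toy training setup under these assumptions might look like the following; the architecture, epoch shape, and four-class stage count are placeholders, and the random arrays stand in for SHHS/MESA-derived features and PSG ground-truth labels.

```python
import numpy as np
import tensorflow as tf

# Hypothetical shapes: spectrogram frames per epoch, FFT bins, sleep stages.
NUM_FRAMES, NUM_BINS, NUM_STAGES = 64, 129, 4

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_BINS)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_STAGES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in arrays in place of real PSG-labelled training data.
x = np.random.randn(128, NUM_FRAMES, NUM_BINS).astype(np.float32)
y = np.random.randint(0, NUM_STAGES, size=128)
model.fit(x, y, epochs=2, batch_size=32)

# Convert the trained model to TensorFlow Lite for on-device inference.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("sleep_stage_model.tflite", "wb").write(tflite_model)
```

The TFLite conversion step at the end mirrors the on-device deployment the blog describes, where the model runs continuously on the Nest Hub itself rather than in the cloud.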
A full account of how the model works is available in Google AI’s blog post.
The blog post concludes: “Based on privacy-preserving radar and audio signals, these improved sleep staging and audio sensing features on Nest Hub provide deeper insights that we hope will help users translate their nighttime wellness into actionable improvements for their overall wellbeing.”