From smartwatches to fitness trackers, wearable devices have become ubiquitous, continuously capturing a rich stream of data about our lives. They record our heart rate, count our steps, and track our fitness and sleep. This flood of data holds great promise for personalized health and wellness. However, while it is easy to see what our bodies are doing (e.g., a heart rate of 150 bpm), the crucial context of why ("a brisk uphill walk" vs. "a stressful public speaking event") is often missing. This gap between raw sensor data and its real-world meaning has been a major barrier to realizing the full potential of these devices.
The main challenge lies in the scarcity of large-scale datasets that pair sensor recordings with rich, descriptive text. Manually annotating millions of hours of data is prohibitively expensive and time consuming. To bridge this gap and let wearable data "speak" for itself, we need a model that can learn the complex connections between sensor signals and human language directly from the data.
"SensorLM: Learning the Language of Wearable Sensors" introduces SensorLM, a family of sensor–language foundation models that fills this gap. Pre-trained on an unprecedented 59.7 million hours of multimodal sensor data from over 103,000 people, SensorLM learns to interpret and generate nuanced, human-readable descriptions from high-dimensional wearable data, setting a new state of the art in sensor data understanding.
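To give a feel for what "learning the connection between sensor signals and language directly from the data" can look like in practice, below is a minimal sketch of a CLIP-style contrastive objective that pulls paired sensor and text embeddings together while pushing mismatched pairs apart. This is an illustrative assumption, not SensorLM's published training recipe; the encoder outputs, function name, and temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F


def sensor_text_contrastive_loss(sensor_emb: torch.Tensor,
                                 text_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss between paired sensor and text embeddings.

    sensor_emb, text_emb: (batch, dim) tensors produced by hypothetical
    sensor and text encoders; row i of each tensor is assumed to describe
    the same underlying activity.
    """
    # Normalize so that the dot product becomes cosine similarity.
    sensor_emb = F.normalize(sensor_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Similarity matrix: entry (i, j) compares sensor i with text j.
    logits = sensor_emb @ text_emb.t() / temperature

    # The matching pair sits on the diagonal, so the "correct class"
    # for row i is index i in both directions.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_sensor_to_text = F.cross_entropy(logits, targets)
    loss_text_to_sensor = F.cross_entropy(logits.t(), targets)
    return (loss_sensor_to_text + loss_text_to_sensor) / 2
```

A loss of this form is one common way to align two modalities without manual labels beyond the pairing itself, which is why large paired corpora matter so much here.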


