RimSense: Enabling Touch-based Interaction on Eyeglass Rim Using Piezoelectric Sensors

Smart eyewear’s interaction modes have attracted significant research attention. While most commercial devices adopt touch panels on the front of the eyeglass temple for interaction, this paper identifies a drawback: the touch panel and the display do not lie on parallel planes, which disrupts the direct mapping between gestures and the manipulated objects on the display. This paper therefore proposes RimSense, a proof-of-concept design for smart eyewear that introduces an alternative interaction modality: touch gestures on the eyewear rim. RimSense leverages piezoelectric (PZT) transducers to convert the eyeglass rim into a touch-sensitive surface. When users touch the rim, the alteration in the eyeglass’s structural signal manifests as a change in the channel frequency response (CFR). This allows RimSense to recognize the executed touch gestures from the collected CFR patterns. Technically, we employ a buffered chirp as the probe signal to meet the sensing-granularity and noise-resistance requirements. Additionally, we present a deep learning-based gesture recognition framework tailored for fine-grained time-sequence prediction, further integrated with a Finite-State Machine (FSM) algorithm for event-level prediction so that gestures of varying durations deliver a consistent interaction experience. We implement a functional eyewear prototype with two commercial PZT transducers. RimSense can recognize eight touch gestures on the eyeglass rim and simultaneously estimate gesture durations, allowing gestures of varying lengths to serve as distinct inputs. We evaluate the performance of RimSense on 30 subjects and show that it can sense eight gestures and an additional negative class with an F1-score of 0.95 and a relative duration estimation error of 11%. We further make the system run in real time and conduct a user study with 14 subjects to assess the practicability of RimSense through interactions with two demo applications. The user study demonstrates RimSense’s good performance, high usability, learnability, and enjoyability. Additionally, we conduct interviews with the subjects, and their comments provide valuable insights for future eyewear design.
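
To make the probing idea concrete, the sketch below shows one plausible way to probe with a buffered (back-to-back repeated) chirp and estimate a per-frame CFR. The sample rate, frequency band, and frame length are illustrative assumptions, not RimSense's actual parameters.

```python
# Minimal sketch (not the authors' code): buffered-chirp probing and per-frame
# channel frequency response (CFR) estimation. Parameters are assumptions.
import numpy as np
from scipy.signal import chirp

FS = 48_000          # assumed audio sample rate (Hz)
FRAME = 2048         # assumed probe frame length in samples
t = np.arange(FRAME) / FS

# One chirp sweep; repeating it back-to-back yields a continuous "buffered" probe.
probe = chirp(t, f0=17_000, f1=21_000, t1=t[-1], method="linear")

def estimate_cfr(rx_frame: np.ndarray, tx_frame: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-frame CFR estimate: ratio of received to transmitted spectra."""
    RX = np.fft.rfft(rx_frame)
    TX = np.fft.rfft(tx_frame)
    return RX / (TX + eps)

# Toy received frame: an attenuated, slightly delayed copy of the probe plus noise.
rx = 0.6 * np.roll(probe, 3) + 0.01 * np.random.randn(FRAME)
cfr = estimate_cfr(rx, probe)
print(cfr.shape)     # one complex CFR vector per frame; a touch alters its pattern
```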

EyeGesener: Eye Gesture Listener for Smart Glasses Interaction Using Acoustic Sensing

The smart glasses market has witnessed significant growth in recent years. Interaction with commercial smart glasses mostly relies on the hands, which is unsuitable for scenarios where both hands are occupied. In this paper, we propose EyeGesener, an eye gesture listener for smart glasses interaction using acoustic sensing. To mitigate the Midas touch problem, we meticulously design eye gestures for interaction as two intentional consecutive saccades in a specific direction without visual dwell. The proposed system is a glass-mounted acoustic sensing system with two pairs of commercial speakers and microphones to sense eye gestures. To capture the subtle movements of the eyelid and surrounding skin induced by eye gestures, we design an Orthogonal Frequency Division Multiplexing (OFDM)-based channel impulse response (CIR) estimation scheme that allows two speakers to transmit at the same time and in the same frequency band without collision. We implement eye gesture filtering to exclude everyday eye movements and adversarial-based eye gesture recognition to identify the gestures intended for interaction. To address differences in eye size and facial structure across users, we employ adversarial training to achieve user-independent eye gesture recognition. We evaluate the performance of our system through experiments on data collected from 16 subjects. The experimental results show that our system can recognize eight eye gestures with an average F1-score of 0.93 and a false alarm rate of 0.03. We develop an interactive real-time audio-video player based on EyeGesener and then conduct a user study. The results demonstrate the high usability of the proposed system.
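
As a rough illustration of collision-free simultaneous transmission, the sketch below assigns the two speakers interleaved OFDM subcarriers and estimates each speaker's CIR from its own subcarriers only. The subcarrier split, FFT size, and pilot design are assumptions for illustration, not the paper's exact schema.

```python
# Minimal sketch (assumptions, not EyeGesener's exact design): OFDM CIR
# estimation with two speakers sharing one band on interleaved subcarriers.
import numpy as np

N = 256                                                      # assumed FFT size
pilots = np.random.choice([-1.0, 1.0], N).astype(complex)    # known pilot symbol

# Assumed split: speaker A on even subcarriers, speaker B on odd subcarriers.
mask_a = np.zeros(N, dtype=bool); mask_a[::2] = True
mask_b = ~mask_a

def tx_symbol(mask):
    """Time-domain OFDM symbol carrying pilots only on this speaker's subcarriers."""
    return np.fft.ifft(np.where(mask, pilots, 0))

def circ_conv(x, h):
    """Circular convolution, standing in for the channel plus cyclic-prefix handling."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x)))

def estimate_cir(rx, mask, cir_len=32):
    """Per-speaker CIR: equalize its own subcarriers, then return to the time domain."""
    H = np.fft.fft(rx)[mask] / pilots[mask]
    return np.fft.ifft(H)[:cir_len]

# Toy channels for the two speaker-microphone paths.
h_a, h_b = np.array([1.0, 0.4]), np.array([0.8, -0.2, 0.1])
rx = circ_conv(tx_symbol(mask_a), h_a) + circ_conv(tx_symbol(mask_b), h_b)
print(np.round(np.abs(estimate_cir(rx, mask_a))[:3], 2))     # ~[1.0, 0.4, 0.0]
print(np.round(np.abs(estimate_cir(rx, mask_b))[:3], 2))     # ~[0.8, 0.2, 0.1]
```

Because each speaker leaves the other's subcarriers empty, the two channel estimates do not interfere even though the speakers transmit simultaneously in the same band.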

AcousAF: Acoustic Sensing-Based Atrial Fibrillation Detection System for Mobile Phones

Atrial fibrillation (AF) is characterized by irregular electrical impulses originating in the atria, which can lead to severe complications and even death. Due to the intermittent nature of AF, early and timely monitoring is critical for patients to prevent further exacerbation of the condition. Although ambulatory ECG Holter monitors provide accurate monitoring, their high cost hinders wider adoption. Current mobile-based AF detection systems offer a portable solution; however, they have various applicability issues, such as being easily affected by environmental factors and requiring significant user effort. To overcome these limitations, we present AcousAF, a novel AF detection system based on the acoustic sensors of smartphones. In particular, we explore the potential of acquiring the pulse wave from the wrist using smartphone speakers and microphones. In addition, we propose a carefully designed framework comprising pulse wave probing, pulse wave extraction, and AF detection to ensure accurate and reliable AF detection. We collect data from 20 participants using our custom data collection application on the smartphone. Extensive experimental results demonstrate the high performance of our system, with 92.8% accuracy, 86.9% precision, 87.4% recall, and an 87.1% F1-score.
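
A simplified stand-in for the final detection stage is to score the irregularity of inter-beat intervals extracted from the recovered pulse wave; the sketch below does exactly that, with an assumed threshold, and is not AcousAF's actual classifier.

```python
# Minimal sketch (a simplified stand-in, not AcousAF's detector): flag AF-like
# irregularity from the extracted pulse wave via inter-beat-interval variability.
import numpy as np
from scipy.signal import find_peaks

def irregularity_score(pulse_wave: np.ndarray, fs: float) -> float:
    """Coefficient of variation of inter-beat intervals (higher = more irregular)."""
    peaks, _ = find_peaks(pulse_wave, distance=int(0.4 * fs))  # beats >= 0.4 s apart
    ibi = np.diff(peaks) / fs
    if len(ibi) < 3:
        return 0.0
    return float(np.std(ibi) / np.mean(ibi))

# Usage: an assumed threshold turns the score into a binary decision.
# is_af = irregularity_score(pulse_wave, fs=100) > 0.15
```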

BLEAR: Practical Wireless Earphone Tracking under the BLE Protocol

Motion tracking is an important aspect of human-computer interaction (HCI), and recent research focuses on motion tracking using earphones’ embedded acoustic sensors. However, these solutions can only be deployed on wired earphones, while most commercial earphones are wireless. This limitation arises because wireless earphones use the Bluetooth Low Energy (BLE) protocol for handling audio data, which blocks the use of existing acoustic sensing solutions. Firstly, the low sampling rate of BLE prevents the system from processing high-frequency ultrasound, yet the sensing signal for earphones must be ultrasonic to avoid disturbing the user. Secondly, BLE applies audio compression with different compression rates in different subbands, which breaks the structure of the wideband signals usually used for acoustic sensing. To overcome these challenges, we present BLEAR, the first earphone-tracking system compatible with the BLE audio recording protocol. To let BLE earphones receive ultrasound, BLEAR uses a specially designed bandwidth conversion scheme in which a mask signal triggers a non-linear effect that converts high-frequency components to low-frequency ones, thereby overcoming BLE’s low audio sampling rate restriction. Additionally, by strategically designing beacon signals to align with BLE’s subband compression pattern, BLEAR mitigates the influence of audio compression and achieves accurate wireless earphone tracking. We implement a wireless earphone prototype for BLEAR and conduct extensive experiments involving 8 subjects to demonstrate its feasibility. The experimental results show that BLEAR achieves a mean distance tracking error of 3.37 cm, an angle tracking error of 5.3 degrees, and an accuracy of 97.14% in recognizing 7 common user activities. This work not only introduces a BLE-compatible earphone tracking solution but also establishes a foundation for broader BLE device tracking applications.
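
The downconversion principle can be illustrated with a toy simulation: a second-order nonlinearity mixes an ultrasonic probe with a mask tone, and the difference-frequency product falls low enough to survive BLE's audio sample rate. The tone frequencies, nonlinearity coefficient, and sample rates below are assumptions for illustration, not BLEAR's actual design.

```python
# Minimal sketch of the bandwidth-conversion idea (assumed parameters, not
# BLEAR's exact scheme): a mild x^2 nonlinearity creates a low-frequency
# intermodulation product from two ultrasonic tones.
import numpy as np
from scipy.signal import decimate

fs_air = 96_000                               # assumed over-the-air simulation rate
t = np.arange(int(0.1 * fs_air)) / fs_air
probe = np.cos(2 * np.pi * 20_000 * t)        # assumed ultrasonic sensing tone
mask  = np.cos(2 * np.pi * 18_000 * t)        # assumed mask tone

# Model the capture front end as mildly nonlinear: y = x + a * x^2.
x = probe + mask
y = x + 0.1 * x ** 2                          # x^2 contains a 20k - 18k = 2 kHz term

# Anti-alias filter and downsample to a BLE-like 16 kHz audio rate.
y_ble = decimate(y, 6)
spectrum = np.abs(np.fft.rfft(y_ble))
freqs = np.fft.rfftfreq(len(y_ble), d=6 / fs_air)
print(freqs[np.argmax(spectrum[1:]) + 1])     # ~2000 Hz: the product fits in BLE's band
```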

EHTrack: Earphone-Based Head Tracking via Only Acoustic Signals

Head tracking is a technique that allows for the measurement and analysis of human focus and attention, thus enhancing the experience of human–computer interaction (HCI). Nevertheless, current solutions relying on vision and motion sensors exhibit limitations in accuracy, user-friendliness, and compatibility with the majority of commercial off-the-shelf (COTS) devices. To overcome these limitations, we present EHTrack, an earphone-based system that achieves head tracking exclusively through acoustic signals. EHTrack employs acoustic sensing to measure the movement of a pair of earphones, subsequently enabling precise head tracking. In particular, a pair of speakers generates a periodically fluctuating sound field, which the user’s two earphones detect. By assessing the distance and angle alterations between the earphones and the speakers, we propose a model to determine the user’s head movement and orientation. Our evaluation results indicate a high degree of accuracy in both head movement tracking, with an average tracking error of 2.98 cm, and head orientation tracking, with an average error of 1.83°. Furthermore, in a deployed exhibition scenario, we attained an accuracy of 89.2% in estimating the user’s focus direction.
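
Once each earphone has been localized relative to the speakers, head position and orientation follow from simple geometry on the pair of earphone coordinates. The sketch below shows that last step in 2-D; the sign convention and coordinates are assumptions, not EHTrack's full model.

```python
# Minimal sketch (assumed geometry, not EHTrack's model): derive head center
# and facing angle from the two localized earphone positions in the plane.
import numpy as np

def head_pose(left_ear: np.ndarray, right_ear: np.ndarray):
    """Return head-center position and facing angle (radians) in 2-D."""
    center = (left_ear + right_ear) / 2
    ear_axis = right_ear - left_ear
    # Facing direction is perpendicular to the inter-ear axis
    # (rotated +90 degrees from left-to-right; sign convention assumed).
    facing = np.arctan2(ear_axis[0], -ear_axis[1])
    return center, facing

center, yaw = head_pose(np.array([-0.08, 0.0]), np.array([0.08, 0.0]))
print(center, np.degrees(yaw))   # head at the origin, facing +y (90 degrees)
```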

Noncontact Respiration Detection Leveraging Music and Broadcast Signals

We design a respiration detection system that derives the respiration rate by continuously estimating the channel impulse response (CIR) using music signals played by smart devices such as smart speakers. Extensive experiments are conducted to demonstrate the feasibility of our system. The results show that our system achieves high respiration detection accuracy, with a mean error of less than 0.5 BPM when different music signals are used.
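
Once a CIR tap affected by the chest is tracked over time, the respiration rate is simply the dominant frequency of that slowly varying series within the typical breathing band. The sketch below shows that final step; the CIR frame rate and breathing band are assumptions, not the paper's exact pipeline.

```python
# Minimal sketch (assumptions, not the paper's pipeline): respiration rate from
# the time series of one CIR tap's magnitude.
import numpy as np

def respiration_rate_bpm(tap_series: np.ndarray, frame_rate: float) -> float:
    """tap_series: magnitude of one CIR tap sampled at frame_rate (frames/s)."""
    x = tap_series - np.mean(tap_series)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / frame_rate)
    band = (freqs >= 0.1) & (freqs <= 0.5)          # 6-30 breaths per minute
    return float(freqs[band][np.argmax(spectrum[band])] * 60)

# Toy check: a 0.25 Hz (15 BPM) breathing motion sampled at 20 CIR frames/s.
t = np.arange(60 * 20) / 20
print(respiration_rate_bpm(1 + 0.1 * np.sin(2 * np.pi * 0.25 * t), 20))   # ~15
```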

Acoustic-based Upper Facial Action Recognition for Smart Eyewear

We propose a novel acoustic-based upper facial action (UFA) recognition system that serves as a hands-free interaction mechanism for smart eyewear. The proposed system is a glass-mounted acoustic sensing system with several pairs of commercial speakers and microphones to sense UFAs. We evaluate the performance of our system through experiments on data collected from 26 subjects. The experimental result shows that our system can recognize the six UFAs with an average F1-score of 0.92.

Acoustic Strength-based Motion Tracking

Accurate device motion tracking enables many applications such as Virtual Reality (VR) and Augmented Reality (AR). To make these applications available in people’s daily lives, low-cost acoustic-based motion tracking methods have been proposed. However, existing acoustic-based methods all rely on distance estimation: they measure the distance between a speaker and a microphone, and with a speaker or microphone array they obtain multiple distance estimates to achieve multidimensional motion tracking. The weakness of distance-based motion tracking methods is that they need a large array to obtain accurate results; some systems even require an array larger than 1 m. This weakness limits the adoption of existing solutions on a single device such as a smart speaker. To solve this problem, we propose the Acoustic Strength-based Angle Tracking (ASAT) system and further implement a motion tracking system based on it. ASAT achieves angle tracking by creating a periodically changing sound field. A device with a microphone senses the periodically changing sound strength in this field; when the device moves, the period of the received sound strength changes, from which we can derive the angle change and achieve angle tracking. The ASAT-based system obtains a localization accuracy of 5 cm when the distance between the speaker and the microphone is within 3 m.
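
The core measurement can be sketched as follows: the received sound strength at a fixed point repeats with a nominal period, and when the receiver moves in angle, consecutive strength peaks arrive early or late, so accumulated period deviations track the angle change. The mapping from fractional period shift to angle below is an assumed convention, not the ASAT formulation.

```python
# Minimal sketch of the idea (assumed mapping, not ASAT's model): accumulate
# angle change from peak-to-peak periods of the received strength envelope.
import numpy as np
from scipy.signal import find_peaks, hilbert

def angle_change_rad(mic_samples: np.ndarray, fs: float, T0: float) -> float:
    """Angle change inferred from deviations of the envelope period from T0 (s)."""
    envelope = np.abs(hilbert(mic_samples))
    peaks, _ = find_peaks(envelope, distance=int(0.5 * T0 * fs))
    periods = np.diff(peaks) / fs
    # Assumed convention: one full field period corresponds to one full
    # revolution of the pattern, so fractional shifts map linearly to angle.
    return float(np.sum((T0 - periods) / T0) * 2 * np.pi)
```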

Single-Frequency Ultrasound-Based Respiration Rate Estimation with Smartphones

Respiration monitoring is helpful for disease prevention and diagnosis. Traditional respiration monitoring requires users to wear devices on their bodies, which is inconvenient. In this paper, we design a noncontact respiration rate detection system using off-the-shelf smartphones. We use single-frequency ultrasound as the medium to detect respiration activity. By analyzing the ultrasound signals received by the smartphone’s built-in microphone, our system derives the user’s respiration rate. The advantage of our method is that the transmitted signal is easy to generate and the signal analysis is simple, which lowers power consumption and makes the system suitable for long-term monitoring in daily life. The experimental results show that our system achieves accurate respiration rate estimation under various scenarios.
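
A standard way to analyze a single-tone reflection is to demodulate the received tone to baseband, where chest motion appears as a slow variation, and then read the respiration rate from its spectrum. The sketch below follows that approach; the sample rate, tone frequency, and lowpass/decimation choices are assumptions, not necessarily the paper's exact processing.

```python
# Minimal sketch (standard single-tone analysis with assumed parameters, not
# necessarily the paper's pipeline): respiration rate from a 20 kHz reflection.
import numpy as np

FS, F0 = 48_000, 20_000               # assumed sample rate and ultrasound frequency

def respiration_rate_bpm(rx: np.ndarray) -> float:
    t = np.arange(len(rx)) / FS
    baseband = rx * np.exp(-2j * np.pi * F0 * t)                   # shift tone to DC
    lp = np.convolve(baseband, np.ones(480) / 480, mode="same")    # crude lowpass
    motion = np.abs(lp[::480])                                     # ~100 Hz amplitude series
    motion = motion - np.mean(motion)
    freqs = np.fft.rfftfreq(len(motion), d=480 / FS)
    spectrum = np.abs(np.fft.rfft(motion))
    band = (freqs >= 0.1) & (freqs <= 0.5)                         # 6-30 breaths/min
    return float(freqs[band][np.argmax(spectrum[band])] * 60)
```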