Episodes

  • ISMAR 2024 Do you read me? (E)motion Legibility of Virtual Reality Character Representations
    Feb 7 2025

    K. Brandstätter, B. J. Congdon and A. Steed, "Do you read me? (E)motion Legibility of Virtual Reality Character Representations," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 299-308, doi: 10.1109/ISMAR62088.2024.00044.

    We compared the body movements of five virtual reality (VR) avatar representations in a user study (N=53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants’ emotion detection accuracies were similar for the IK and AGRoL representations, highest for the full-body motion-capture representation, and lowest for the head-and-hands representation. Our findings suggest that, from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data made specifically for social VR applications.

    https://ieeexplore.ieee.org/document/10765392
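
    The paper's IK setup isn't detailed in the abstract, but a minimal sketch of the kind of analytic two-bone arm solver an upper-body representation might build on can make the idea concrete. The segment lengths, the bend-direction heuristic, and all names below are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def two_bone_ik(shoulder, hand_target, upper_len=0.30, forearm_len=0.27):
        """Analytic two-bone IK: place an elbow so the arm reaches hand_target.

        Segment lengths are placeholders; a real avatar would use calibrated
        bone lengths and tracked wrist orientation to orient the elbow plane.
        """
        shoulder = np.asarray(shoulder, dtype=float)
        hand_target = np.asarray(hand_target, dtype=float)

        to_target = hand_target - shoulder
        dist = np.clip(np.linalg.norm(to_target), 1e-6, upper_len + forearm_len)
        dir_t = to_target / (np.linalg.norm(to_target) + 1e-9)

        # Law of cosines: angle at the shoulder between upper arm and target ray.
        cos_s = (upper_len**2 + dist**2 - forearm_len**2) / (2 * upper_len * dist)
        shoulder_angle = np.arccos(np.clip(cos_s, -1.0, 1.0))

        # Choose the elbow plane with a fixed "elbows back" hint vector.
        hint = np.array([0.0, 0.0, -1.0])
        side = np.cross(dir_t, hint)
        side /= np.linalg.norm(side) + 1e-9
        bend = np.cross(side, dir_t)

        elbow = shoulder + upper_len * (np.cos(shoulder_angle) * dir_t
                                        + np.sin(shoulder_angle) * bend)
        hand = shoulder + dist * dir_t
        return elbow, hand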

    11 mins
  • A Conversation with Thad Starner on Mobile Sign Language Recognition
    Feb 7 2025

    The Oscar Best Picture-winning movie CODA has helped introduce Deaf culture to many in the hearing community. The capital "D" in Deaf is used when referring to Deaf culture, whereas small "d" deaf refers to the medical condition. In the Deaf community, sign language is used to communicate, and sign has a rich history in film, the arts, and education. Learning about Deaf culture in the United States and the importance of American Sign Language in that culture has been key to choosing projects that are useful and usable for the Deaf.

    15 mins
  • ISMAR 2024 Whirling Interface: Hand-based Motion Matching Selection for Small Target on XR Displays
    Feb 5 2025

    J. Lee et al., "Whirling Interface: Hand-based Motion Matching Selection for Small Target on XR Displays," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 319-328, doi: 10.1109/ISMAR62088.2024.00046.

    We introduce “Whirling Interface,” a selection method for XR displays using bare-hand motion matching gestures as an input technique. We extend the motion matching input method by introducing different input states to provide visual feedback and guidance to users. Using the wrist joint as the primary input modality, our technique reduces user fatigue and improves performance when selecting small and distant targets. In a study with 16 participants, we compared the Whirling Interface with a standard ray casting method using hand gestures. The results demonstrate that the Whirling Interface consistently achieves high success rates, especially for distant targets, averaging 95.58% with a completion time of 5.58 seconds. Notably, it requires a smaller camera sensing field of view of only 21.45° horizontally and 24.7° vertically. Participants reported lower workloads in the distant conditions and expressed a higher preference for the Whirling Interface overall. These findings suggest that the Whirling Interface could be a useful alternative input method for XR displays with a small camera sensing FOV or when interacting with small targets.

    https://ieeexplore.ieee.org/abstract/document/10765156
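
    The abstract doesn't give the matching rule, but motion-matching selection is commonly implemented by correlating the user's input trajectory against each candidate target's motion over a sliding window. A hedged sketch of that general pattern (the window length, threshold, and trajectory format are assumptions, not the paper's algorithm):

    import numpy as np

    def select_by_motion_matching(wrist_xy, target_trajs, threshold=0.9):
        """Pick the target whose motion best correlates with the wrist's.

        wrist_xy:     (N, 2) array of recent wrist positions (or angles).
        target_trajs: dict of target_id -> (N, 2) target positions over the
                      same time window. Threshold and window are placeholders.
        """
        best_id, best_score = None, -1.0
        for tid, traj in target_trajs.items():
            # Correlate x and y components separately, then average.
            scores = []
            for axis in range(2):
                a, b = wrist_xy[:, axis], traj[:, axis]
                if a.std() < 1e-6 or b.std() < 1e-6:
                    scores.append(0.0)          # no motion -> no evidence
                else:
                    scores.append(np.corrcoef(a, b)[0, 1])
            score = float(np.mean(scores))
            if score > best_score:
                best_id, best_score = tid, score
        # Only commit the selection once correlation is convincingly high.
        return best_id if best_score >= threshold else None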

    18 mins
  • ISMAR 2024 Perceived Empathy in Mixed Reality: Assessing the Impact of Empathic Agents’ Awareness of User Physiological States
    Feb 3 2025

    Z. Chang et al., "Perceived Empathy in Mixed Reality: Assessing the Impact of Empathic Agents’ Awareness of User Physiological States," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 406-415, doi: 10.1109/ISMAR62088.2024.00055.

    In human-agent interaction, establishing trust and a social bond with the agent is crucial to improving communication quality and performance in collaborative tasks. This paper investigates how a Mixed Reality Agent’s (MiRA) ability to acknowledge a user’s physiological state affects perceptions such as empathy, social connectedness, presence, and trust. In a within-subject study with 24 subjects, we varied the companion agent’s awareness during a mixed-reality first-person shooting game. Three agents provided feedback based on the users’ physiological states: (1) No Awareness Agent (NAA), which did not acknowledge the user’s physiological state; (2) Random Awareness Agent (RAA), offering feedback with varying accuracy; and (3) Accurate Awareness Agent (AAA), which provided consistently accurate feedback. Subjects reported higher scores on perceived empathy, social connectedness, presence, and trust with AAA compared to RAA and NAA. Interestingly, despite exceeding NAA in perception scores, RAA was the least favored as a companion. The findings and implications for the design of MiRA interfaces are discussed, along with the limitations of the study and directions for future work.

    https://ieeexplore.ieee.org/document/10765390
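
    A compact way to picture the three conditions is as a feedback policy over a thresholded physiological reading. This sketch is purely illustrative; the signal, threshold, and phrasing are invented, not the study's stimuli:

    import random

    def agent_feedback(condition, arousal_high):
        """Feedback policy for the study's three agent conditions.

        `arousal_high` stands in for a thresholded physiological reading
        (e.g. elevated heart rate) captured during the game.
        """
        if condition == "NAA":      # never acknowledges the user's state
            return "Keep going!"
        if condition == "RAA":      # acknowledges with varying accuracy
            arousal_high = random.choice([True, False])
        # AAA falls through with the accurate reading.
        return ("You seem tense - take a breath." if arousal_high
                else "You look calm and focused.")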

    15 mins
  • CSCW 2024: Situating Empathy in HCI/CSCW: A Scoping Review
    Dec 2 2024

    Uğur Genç and Himanshu Verma. 2024. Situating Empathy in HCI/CSCW: A Scoping Review. Proc. ACM Hum.-Comput. Interact. 8, CSCW2, Article 513 (November 2024), 37 pages. https://doi.org/10.1145/3687052

    Empathy is considered a crucial construct within HCI and CSCW, yet our understanding of this complex concept remains fragmented and lacks consensus in existing research. In this scoping review of 121 articles from the ACM Digital Library, we synthesize the diverse perspectives on empathy and scrutinize its current conceptualization and operationalization. In particular, we examine the various interpretations and definitions of empathy, its applications, and the methodologies, findings, and trends in the field. Our analysis reveals a lack of consensus on the definitions and theoretical underpinnings of empathy, with interpretations ranging from understanding the experiences of others to an affective response to the other's situation. We observed that despite the variety of methods used to gauge empathy, the predominant approach remains self-assessed instruments, highlighting the lack of novel and rigorously established and validated measures and methods to capture the multifaceted manifestations of empathy. Furthermore, our analysis shows that previous studies have used a variety of approaches to elicit empathy, such as experiential methods and situational awareness. These approaches have demonstrated that shared stressful experiences promote community support and relief, while situational awareness promotes empathy through increased helping behavior. Finally, we discuss a) the potential and drawbacks of leveraging empathy to shape interactions and guide design practices, b) the need to find a balance between the collective focus of empathy and the (existing and dominant) focus on the individual, and c) the careful testing of empathic designs and technologies with real-world applications.

    https://dl.acm.org/doi/10.1145/3687052

    40 mins
  • ICMI 2024 Exploring the Alteration and Masking of Everyday Noise Sounds using Auditory Augmented Reality
    Nov 18 2024

    Isna Alfi Bustoni, Mark McGill, and Stephen Anthony Brewster. 2024. Exploring the Alteration and Masking of Everyday Noise Sounds using Auditory Augmented Reality. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI '24). Association for Computing Machinery, New York, NY, USA, 154–163. https://doi.org/10.1145/3678957.3685750

    While noise-cancelling headphones can block out or mask environmental noise with digital sound, this costs the user situational awareness and information. With the advancement of acoustically transparent personal audio devices (e.g. headphones, open-ear audio frames), Auditory Augmented Reality (AAR), and real-time audio processing, it is feasible to preserve user situational awareness and relevant information whilst diminishing the perception of the noise. Through an online survey (n=124), this research explored users’ attitudes and preferred AAR strategy (keep the noise, make the noise more pleasant, obscure the noise, reduce the noise, remove the noise, and replace the noise) toward different types of noises from a range of categories (living beings, mechanical, and environmental) and varying degrees of relevance. It was discovered that respondents’ degrees of annoyance varied according to the kind of noise and its relevance to them. Additionally, respondents had a strong tendency to reduce irrelevant noise and retain more relevant noise. Based on our findings, we discuss how AAR can assist users in coping with noise whilst retaining relevant information through selectively suppressing or altering the noise, as appropriate.

    https://dl.acm.org/doi/10.1145/3678957.3685750
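
    The six strategies lend themselves to a simple policy keyed on relevance, which matches the tendency the survey found (retain relevant noise, reduce irrelevant noise). The thresholds below are invented for illustration; the paper reports preferences, not rules:

    # The survey's six AAR strategies for a heard noise.
    STRATEGIES = ["keep", "make_pleasant", "obscure", "reduce", "remove", "replace"]

    def pick_strategy(relevance, annoyance):
        """Map a noise's relevance and annoyance (both 0..1) to a strategy."""
        if relevance > 0.7:
            return "keep"            # highly relevant: preserve awareness
        if relevance > 0.4:
            return "reduce" if annoyance > 0.5 else "make_pleasant"
        return "remove" if annoyance > 0.5 else "reduce"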

    14 mins
  • ASSETS 2024: SoundHapticVR: Head-Based Spatial Haptic Feedback for Accessible Sounds in Virtual Reality for Deaf and Hard of Hearing Users
    Nov 9 2024

    Pratheep Kumar Chelladurai, Ziming Li, Maximilian Weber, Tae Oh, and Roshan L Peiris. 2024. SoundHapticVR: Head-Based Spatial Haptic Feedback for Accessible Sounds in Virtual Reality for Deaf and Hard of Hearing Users. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 31, 1–17. https://doi.org/10.1145/3663548.3675639

    Virtual Reality (VR) systems use immersive spatial audio to convey critical information, but these audio cues are often inaccessible to Deaf or Hard-of-Hearing (DHH) individuals. To address this, we developed SoundHapticVR, a head-based haptic system that converts audio signals into haptic feedback using multi-channel acoustic haptic actuators. We evaluated SoundHapticVR through three studies: determining the maximum tactile frequency threshold on different head regions for DHH users, identifying the ideal number and arrangement of transducers for sound localization, and assessing participants’ ability to differentiate sound sources with haptic patterns. Findings indicate that tactile perception thresholds vary across head regions, necessitating consistent frequency equalization. Adding a front transducer significantly improved sound localization, and participants could correlate distinct haptic patterns with specific objects. Overall, this system has the potential to make VR applications more accessible to DHH users.

    https://dl.acm.org/doi/10.1145/3663548.3675639
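
    As a rough picture of the audio-to-haptic pipeline such a system implies: band-limit the audio to the vibrotactile range, take its amplitude envelope, and pan it across head-mounted actuators by source azimuth. The frequency cap, actuator layout, and gains here are illustrative assumptions, not the paper's calibrated values:

    import numpy as np

    def audio_to_haptics(mono, sr, azimuth_deg, n_actuators=4, f_max=250.0):
        """Convert a mono audio buffer into per-actuator drive envelopes."""
        # Crude low-pass: zero all FFT bins above the tactile frequency cap.
        spec = np.fft.rfft(mono)
        freqs = np.fft.rfftfreq(len(mono), 1.0 / sr)
        spec[freqs > f_max] = 0.0
        tactile = np.fft.irfft(spec, n=len(mono))

        # Amplitude envelope via a short moving RMS window (~10 ms).
        win = max(1, sr // 100)
        env = np.sqrt(np.convolve(tactile**2, np.ones(win) / win, mode="same"))

        # Pan across actuators spaced evenly around the head (0 deg = front).
        positions = np.linspace(0.0, 360.0, n_actuators, endpoint=False)
        diff = np.abs((positions - azimuth_deg + 180.0) % 360.0 - 180.0)
        gains = np.clip(1.0 - diff / 180.0, 0.0, 1.0)
        gains /= gains.sum() + 1e-9
        return env[None, :] * gains[:, None]   # shape (n_actuators, n_samples)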

    13 mins
  • ASSETS 2024: SeaHare: An omnidirectional electric wheelchair integrating independent, remote and shared control modalities
    Nov 9 2024

    Giulia Barbareschi, Ando Ryoichi, Midori Kawaguchi, Minato Takeda, and Kouta Minamizawa. 2024. SeaHare: An omnidirectional electric wheelchair integrating independent, remote and shared control modalities. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 9, 1–16. https://doi.org/10.1145/3663548.3675657

    Depending on one’s needs, electric wheelchairs can feature different interfaces and driving paradigms, with control handed to the user, a remote pilot, or shared between them. However, these systems have generally been implemented on separate wheelchairs, making comparison difficult. We present the design of an omnidirectional electric wheelchair that can be controlled using two sensing seats detecting changes in the centre of gravity. One of the sensing seats is used by the person on the wheelchair, whereas the other is used as a remote control by a second person. We explore the use of the wheelchair under different control paradigms (independent, remote, and shared) from both the wheelchair and the remote-control seat with 5 dyads and 1 triad of participants, including wheelchair users and non-users. Results highlight key advantages and disadvantages of the SeaHare in different paradigms, with participants’ perceptions affected by their skills and lived experiences, and reflections on how different control modes might suit different scenarios.

    https://dl.acm.org/doi/10.1145/3663548.3675657
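
    A minimal sketch of how a sensing seat's centre-of-gravity shift could map to an omnidirectional velocity command, with a naive blend for the shared paradigm. The gains, deadzone, and blending rule are guesses for illustration, not the SeaHare controller:

    import numpy as np

    def seat_to_velocity(cop_xy, deadzone=0.02, v_max=0.8):
        """Map a seat's centre-of-pressure offset (metres) to planar velocity.

        Forward lean drives forward motion, lateral lean drives sideways
        motion (the chair is omnidirectional). All constants are invented.
        """
        v = np.asarray(cop_xy, dtype=float)
        mag = np.linalg.norm(v)
        if mag < deadzone:
            return np.zeros(2)                 # ignore small postural sway
        return v / mag * min(v_max, (mag - deadzone) * 8.0)

    def shared_control(rider_cop, pilot_cop, alpha=0.5):
        """Blend rider and remote-pilot commands: alpha=1 is fully
        independent, alpha=0 fully remote, in between is 'shared'."""
        return (alpha * seat_to_velocity(rider_cop)
                + (1.0 - alpha) * seat_to_velocity(pilot_cop))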

    13 mins