Reimer speaks at HMi 2014 Concepts & Systems

Wed, 07/09/2014

AgeLab research scientist Bryan Reimer spoke at HMi 2014: Concepts & Systems on June 26 in Berlin. The event focused on human-machine interfaces (HMIs), and more specifically on HMI concepts for autonomous driving, augmented reality, voice control and speech recognition, and multimodal interface management.

Below is a description of his talk:

“Voice command” interfaces have been proposed as a means to allow drivers to engage with an expanding array of entertainment and connectivity options in the modern automobile while keeping their eyes on the road and hands on the steering wheel. The level of visual demand observed when drivers interact with representative systems suggests that many voice command interfaces are best considered multi-modal / mixed-mode interfaces that draw upon a wide array of resources and types of attention. Aspects of an activity such as the duration of an interaction appear critical to assessing the total demand placed on driver resources and the degree to which a driver can appropriately manage an activity while driving.

Automated driving systems that aim to free the driver from longitudinal, lateral, and managerial (strategic) control responsibility significantly alter the attentional demands of the driving task. The Yerkes–Dodson law of arousal suggests that the now “spare” attentional resources left untapped by other demands will leave drivers in a state of under-arousal (varying levels of inattention). In an effort to engage spare resources, drivers will likely undertake secondary activities and direct attention away from passive oversight of vehicle operation. While the degree to which a driver may be capable of resuming control at a given point remains difficult to predict, the resulting shift in the strategic allocation of resources necessitates a broader view of HMI demands as a component of attention management.

There is a critical need for an empirical HMI evaluation system that emphasizes driver attention management across the various modalities of interaction (visual, auditory, haptic, vocal, manual, etc.) and allows the designer to better optimize the attentional demands of a system's input, processing, and output modalities with respect to the operating conditions. Current research efforts to develop such a system will be described.
