Projects

Abstract: This research project develops methods for estimating and calibrating situation awareness (SA) in teams in order to optimize overall team performance. We hypothesize that there is a desired level of SA for each human agent on the team that creates the team SA, or "common picture," and that this desired level depends on the context or environment in which the team is operating. We treat SA as a dynamic variable that can improve as more information becomes available, degrade gradually over time, or decrease quickly as the environment changes.
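
As a rough illustration of treating SA as a dynamic variable, the sketch below updates an SA estimate that grows with newly available information and decays over time. The update rule, parameter names, and numeric values are illustrative assumptions only, not the project's actual estimation or calibration method.

```python
# Illustrative sketch only: a toy model of situation awareness (SA) as a
# dynamic variable that grows with incoming information and decays over time.
# The update rule and all parameters are assumptions, not the project's model.

def update_sa(sa, info_gain, decay_rate=0.05, dt=1.0):
    """Advance the SA estimate by one time step.

    sa         -- current SA level in [0, 1]
    info_gain  -- SA gained from newly available information this step
    decay_rate -- fraction of SA lost per unit time without new information
    dt         -- length of the time step
    """
    sa = sa * (1.0 - decay_rate * dt) + info_gain
    return min(max(sa, 0.0), 1.0)  # keep SA bounded in [0, 1]


if __name__ == "__main__":
    sa = 0.8
    # The environment changes at step 5: information stops arriving and SA degrades.
    for step in range(10):
        gain = 0.05 if step < 5 else 0.0
        sa = update_sa(sa, gain)
        print(f"step {step}: SA = {sa:.2f}")
```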

Abstract: How can multiple teams composed of soldiers, soldier-operated vehicles, tele-operated vehicles, and autonomous vehicles effectively accomplish unit goals in dynamic environments? Answers to this question speak directly to what the Department of Defense (DoD) calls "revolutionary collaboration," in which soldiers are expected to view machines as valuable and critical teammates collaborating with humans within hierarchical military unit structures. The premise of this project is that the success of manned-unmanned teaming (MUM-T) depends on maintaining shared situation awareness (SA) and repairing trust in a Multiteam System (MTS) consisting of both manned and unmanned agents.

Abstract: In this project, we aim to predict drivers' takeover readiness and design an adaptive alert system. The proposed research will be coordinated under three tasks: (1) Investigate drivers' takeover performance in highly automated driving by conducting a human-subject experiment, leveraging driving simulators at the U-M Transportation Research Institute. Participants' physiological data and data from the driving scenario will be collected and will serve as input to the computational model. (2) Computationally predict drivers' takeover readiness in real time by developing a deep-learning model built on a convolutional neural network architecture. (3) Design and evaluate an adaptive in-vehicle alert system.
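
As a sketch of what a convolutional model for task (2) might look like, the snippet below classifies takeover readiness from windowed time-series inputs (for example, physiological signals plus driving-scenario features). The architecture, layer sizes, input shapes, and class labels are assumptions for illustration, not the project's actual model.

```python
# Illustrative sketch only: a minimal 1-D convolutional network for classifying
# driver takeover readiness from multichannel time-series windows. All shapes
# and hyperparameters are assumptions, not the project's actual architecture.
import torch
import torch.nn as nn

class TakeoverReadinessCNN(nn.Module):
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, channels, time) window of sensor data
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # logits for ready / not ready

# Example: a batch of 4 ten-second windows sampled at 10 Hz (100 time steps).
model = TakeoverReadinessCNN(n_channels=8, n_classes=2)
logits = model(torch.randn(4, 8, 100))
print(logits.shape)  # torch.Size([4, 2])
```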

Abstract: Autonomous and semi-autonomous vehicles have the potential to help drivers get from point A to point B successfully and safely, while also giving them the flexibility to address other pressing tasks that are not possible while driving manually. Drivers need to be comfortable relying on the vehicle's autonomy in order to make appropriate decisions about when or when not to employ it. Unfortunately, drivers have failed to fully leverage a vehicle's capabilities because of an inappropriate degree of trust in the vehicle's autonomy. This manifests itself as misuse (for example, over-reliance) or outright disuse of the vehicle's capabilities, in either case increasing the risk of the driving task. The goal of this project is to develop methods to predict (1) when the human is likely to take control from, or give control to, the vehicle's autonomy, and (2) when the vehicle's autonomy should take control from, or give control back to, the driver.
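
One way to frame the first prediction target is as a binary classification problem over windowed driver and vehicle features. The toy sketch below uses logistic regression over hypothetical features; the feature names, data, and choice of classifier are assumptions for illustration, not the project's actual prediction method.

```python
# Illustrative sketch only: a minimal classifier for predicting whether a
# driver is about to take control back from the vehicle's autonomy. Feature
# names, toy data, and the use of logistic regression are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per time window:
# [self-reported trust, gaze-on-road fraction, time since last alert (s)]
# Label 1 = driver takes control, 0 = driver keeps autonomy engaged.
X = np.array([
    [0.9, 0.2, 120.0],
    [0.8, 0.3,  90.0],
    [0.4, 0.9,  10.0],
    [0.3, 0.8,   5.0],
])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.5, 0.7, 15.0]]))  # [P(keep autonomy), P(take control)]
```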

Abstract: The autonomous features in vehicles have grown steadily, both in number and in functionality, over the past decade. With the development of Advanced Driver Assistance Systems (ADAS), from cruise control and lane-change assist to more advanced autopilot assistance systems, the creation of fully autonomous vehicles is not far away. Even with such technological improvements, the general public remains skeptical of autonomous vehicles. The goal of this project is to identify the factors that significantly affect trust in autonomous vehicles. The project aims to develop methods that predict how the people in an autonomous vehicle's surroundings, mainly pedestrians, cyclists, and human drivers, perceive the vehicle, by modeling the trust interaction between them and the vehicle and their perceived risk in various situations. The model is intended to be general, applicable to any scenario, and extendable to the interactions between autonomous vehicles and their passengers or drivers.
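
As a rough illustration of coupling trust with perceived risk, the sketch below updates a road user's trust in a nearby autonomous vehicle after each observed interaction. The functional form, parameters, and scenario are assumptions for illustration, not the project's actual trust model.

```python
# Illustrative sketch only: a toy update rule for a road user's trust in a
# nearby autonomous vehicle, driven by perceived risk and observed AV
# behavior. The form and parameters are assumptions, not the project's model.

def update_trust(trust, perceived_risk, behavior_ok, gain=0.1, penalty=0.3):
    """
    trust          -- current trust level in [0, 1]
    perceived_risk -- how risky the current situation feels, in [0, 1]
    behavior_ok    -- True if the AV behaved as the road user expected
    """
    if behavior_ok:
        # Expected behavior builds trust, more so in low-risk situations.
        trust += gain * (1.0 - perceived_risk) * (1.0 - trust)
    else:
        # Violations erode trust, more sharply when perceived risk is high.
        trust -= penalty * perceived_risk * trust
    return min(max(trust, 0.0), 1.0)


if __name__ == "__main__":
    trust = 0.5
    # A pedestrian watches an AV yield correctly twice, then brake erratically.
    for risk, ok in [(0.2, True), (0.4, True), (0.8, False)]:
        trust = update_trust(trust, risk, ok)
        print(f"risk={risk:.1f} ok={ok}: trust = {trust:.2f}")
```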

Abstract: This project seeks to understand the set of expectations a driver has for an autonomous vehicle (AV), to develop a set of explanations AVs should provide to promote trust, and to understand how content, timing, and delivery impact the effectiveness of explanations. The research will match explanations and their characteristics (i.e., content, length, timing, and delivery) with the appropriate expectation violation. We will conduct two user studies that manipulate the content, timing, and mode of explanations. The user studies will have 30 participants, each interacting with the U-M Transportation Research Institute simulator.