RESEARCH TOPICS

We investigate how humans can develop healthy, productive, and fun relationships with artificial intelligence.

  • Trust in Human-Robot Teams
  • Anthropomorphism and Trust
  • Politeness and Etiquette
  • Trust Repair in Autonomous Systems
  • Neural Correlates of Trust
  • Trust Cues

Trust in Human-Robot Teams

Robots are becoming increasingly common in settings ranging from medicine to the military to our own homes. Robots can lessen the workload of their human teammates by taking on tasks that are dangerous, dull, or dirty, allowing humans to focus on more complex and exciting tasks. A major challenge in creating good human-robot teams is that individuals must be willing to trust these agents and give them responsibility before they can become effective teammates. Our research investigates the causes and effects of trust between humans and autonomous teammates. Measuring and studying how robots and humans coordinate, communicate, and support one another helps realize the vision of human-robot teams that equal or outperform the best human-human teams.

  • McKendrick, R., Shaw, T., de Visser, E. J., Saqer, H., Kidwell, B., & Parasuraman, R. (2013). Team performance in networked supervisory control of unmanned air vehicles: Effects of automation, working memory, and communication content. Human Factors: The Journal of the Human Factors and Ergonomics Society, 0018720813496269.
  • Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517-527.
  • Freedy, A., de Visser, E. J., Weltman, G., & Coeyman, N. (2007, May). Measurement of trust in human-robot collaboration. In Collaborative Technologies and Systems, 2007. CTS 2007. International Symposium on (pp. 106-114). IEEE.
  • de Visser, E. J., & Parasuraman, R. (2011). Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making, 5(2), 209-231.
  • Parasuraman, R., Cosenzo, K. A., & de Visser, E. (2009). Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload. Military Psychology, 21(2), 270.
  • de Visser, E. J., Parasuraman, R., Freedy, A., Freedy, E., & Weltman, G. (2006, October). A comprehensive methodology for assessing human-robot team performance for use in training and simulation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 50, No. 25, pp. 2639-2643). SAGE Publications.
 

Anthropomorphism and Trust

With advances in computer-generated graphics and biomimetic robotics, many robots and agents of the near future will appear human-like, displaying “anthropomorphic” features. Artificial agents that appear human may be treated differently from robots that are mechanical in appearance; in some instances they may be treated like humans, while in other cases they may be perceived as unsettling and untrustworthy. Anthropomorphic cues range from the obvious, such as eyes on a robot’s face, to the subtle, such as eye movements, gestures, and facial expressions, and they can quickly lead us to believe that an artificial agent is human-like. Such beliefs can change how we trust and act toward artificial agents. Our research seeks to understand how anthropomorphism affects behavior and performance, and to inform the future design of our artificial teammates.

  • de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost Human: Anthropomorphism Increases Trust Resilience in Cognitive Agents. Journal of Experimental Psychology: Applied.
  • Muralidharan, L., de Visser, E. J., & Parasuraman, R. (2014, April). The effects of pitch contour and flanging on trust in speaking cognitive agents. In CHI'14 Extended Abstracts on Human Factors in Computing Systems (pp. 2167-2172). ACM.
  • de Visser, E. J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., & Parasuraman, R. (2012, September). The world is not enough: Trust in cognitive agents. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 56, No. 1, pp. 263-267). SAGE Publications.
 

Politeness and Etiquette

As robots adopt increasingly complex roles in everyday life, they must also adopt increasingly complex social skills to interact in a human world. Some robots already display basic social behaviors: the industrial robot Baxter uses digital eyes to communicate attention and confusion to its human teammates, Lowe’s OSHbots verbally interact with customers as they shop, and Honda’s ASIMO moves out of the path of oncoming foot traffic. In the near future, service robots may be even more ubiquitous, using social skills far beyond those of current agents such as Apple’s Siri and Amazon’s Alexa. If appropriately implemented, social skills such as politeness and etiquette have significant positive effects on robot-to-human interaction; our research has found that proper automation etiquette increases trust, situation awareness, and performance. Etiquette may also lead users to perceive agents as more human, which increases trust resilience and forgiveness even when advice is imperfect.

  • de Visser, E. J., Shaw, T., Rovira, E., & Parasuraman, R. (2009). Could you be a little nicer? Pushing the right buttons with automation etiquette. In Proceedings of the 17th International Ergonomics Association Meeting.
  • de Visser, E. J., & Parasuraman, R. (2010). A Neuroergonomic Perspective on Human-Automation Etiquette and Trust. In T. Marek, W. Karwowski & V. Rice (Eds.), Advances in Understanding Human Performance: Neuroergonomics, Human Factors Design, and Special Populations (211-219). Orlando: CRC.
  • de Visser, E. J., & Parasuraman, R. (2010). The Social Brain: Behavioral, Computational, and Neuroergonomic Perspectives. In C. C. Hayes & C. A. Miller (Eds.), Human-Computer Etiquette (263-288). Boca Raton: Auerbach.
  • de Visser, E. J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., & Parasuraman, R. (2012, September). The world is not enough: Trust in cognitive agents. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 56, No. 1, pp. 263-267). SAGE Publications.
 

Trust Repair in Autonomous Systems

Autonomous systems will inevitably make mistakes just as humans do, but unlike humans, most machines cannot apologize for their mistakes or explain why errors occurred. As a result, mistakes can cause perfectly functional systems to be critically underutilized. Our lab is studying methods to give machines the capability to appropriately regain trust after errors, granting them some of the same interpersonal repair skills that humans have. This research includes studying how different types of errors and repairs match up across a range of interaction contexts. We are particularly focused on autonomous systems such as self-driving cars: this technology has the potential to dramatically change society, but potential passengers are frequently distrustful, and self-driving cars must respond to their errors appropriately before passengers will be comfortable enough for the technology to be widely adopted. Through this work, we hope to understand which responses and repairs are most helpful in each situation, while avoiding excessive trust that can lead to dangerous over-reliance.
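
As a concrete illustration of the kind of question we study, the toy sketch below treats trust as a single number that drops after an error and partially recovers depending on the repair strategy the agent uses. It is a hypothetical example only: the strategy names, penalty, and recovery weights are illustrative assumptions, not parameters estimated in our studies.

    # Toy model of trust loss and repair (Python).
    # All values are illustrative assumptions, not findings from our studies.

    # Hypothetical effectiveness of each repair strategy (0 = no repair, 1 = full repair).
    REPAIR_WEIGHTS = {
        "none": 0.0,
        "apology": 0.4,
        "explanation": 0.6,
        "apology_plus_explanation": 0.8,
    }

    ERROR_PENALTY = 0.3  # assumed trust lost after a single error


    def update_trust(trust: float, error: bool, repair: str = "none") -> float:
        """Return updated trust in [0, 1] after one interaction."""
        if error:
            trust -= ERROR_PENALTY                             # the error damages trust
            trust += ERROR_PENALTY * REPAIR_WEIGHTS[repair]    # the repair recovers part of the loss
        return max(0.0, min(1.0, trust))


    if __name__ == "__main__":
        # Compare how much trust remains after one error under each repair strategy.
        for repair in ("none", "apology", "explanation", "apology_plus_explanation"):
            print(repair, round(update_trust(0.8, error=True, repair=repair), 2))

In practice, which repair works best depends on the error type and the interaction context; the point of the sketch is only to show how error and repair can be modeled as competing influences on a trust estimate.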

  • Marinaccio, K., Kohn, S., Parasuraman, R., & de Visser, E. J. (2015, June). A Framework for Rebuilding Trust in Social Automation Across Health-Care Domains. In Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care (Vol. 4, No. 1, pp. 201-205). SAGE Publications.
 

Neural Correlates of Trust

Humans' tendency to trust computers, robots, or other humans is driven by a complex system of brain structures and neurochemicals. Our research seeks to capture the effects of these systems. Recent topics include mapping the brain areas responsible for trust and understanding how the peptide oxytocin affects interactions between humans and machines. Understanding these neural correlates enables us to better understand and control the factors that influence trust. As automated agents become increasingly social, this work may be applied to create agents that are less likely to be subject to misuse or disuse.

  • de Visser, E. J., & Parasuraman, R. (2010). A Neuroergonomic Perspective on Human-Automation Etiquette and Trust. In T. Marek, W. Karwowski & V. Rice (Eds.), Advances in Understanding Human Performance: Neuroergonomics, Human Factors Design, and Special Populations (211-219). Orlando: CRC.
  • Parasuraman, R., de Visser, E. J., Wiese, E., & Madhavan, P. (2014, September). Human Trust in Other Humans, Automation, Robots, and Cognitive Agents: Neural Correlates and Design Implications. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 58, No. 1, pp. 340-344). SAGE Publications.
  • Krueger, F., Parasuraman, R., Moody, L., Twieg, P., de Visser, E. J., McCabe, K., ... & Lee, M. R. (2013). Oxytocin selectively increases perceptions of harm for victims but not the desire to punish offenders of criminal offenses. Social cognitive and affective neuroscience, 8(5), 494-498.
  • Goodyear, K., Parasuraman, R., Chernyak, S., de Visser, E. J., Madhavan, P., Deshpande, G., & Krueger, F. (2016). An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents. Social Neuroscience.
 

Trust Cues

Our research includes mapping factors that help users understand how the “black box” of automation arrives at complex decisions. Decision support systems can be extremely complex, incorporating multiple data streams, which makes it increasingly difficult for any operator to comprehend how the system reaches a decision. To address this, we developed a design methodology for creating trust cues that help operators calibrate their perceived trust in a system closer to its actual trustworthiness. A trust cue is any information that informs the user about the trustworthiness of an agent, such as what the agent is doing, how it is doing it, and what goals it has. With these cues, users can better calibrate their trust, which can lead toward optimal decision-making with reduced workload; a minimal sketch of the calibration idea follows below.
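
The sketch below is a minimal, hypothetical illustration of the calibration idea: it compares an operator's reported trust with the system's measured reliability and flags over- or under-trust, suggesting the kind of cue that might help. The threshold, function name, and cue suggestions are assumptions made for illustration; they are not part of the published methodology.

    # Minimal illustration of trust calibration (Python): compare perceived trust
    # with measured reliability. The 0.15 threshold is an assumed value for illustration.

    CALIBRATION_THRESHOLD = 0.15


    def calibration_gap(perceived_trust: float, observed_reliability: float) -> str:
        """Classify the operator's trust relative to the system's actual trustworthiness."""
        gap = perceived_trust - observed_reliability
        if gap > CALIBRATION_THRESHOLD:
            return "over-trust: consider cues that expose the system's limits and errors"
        if gap < -CALIBRATION_THRESHOLD:
            return "under-trust: consider cues that explain what the system is doing and why"
        return "calibrated: current cues appear sufficient"


    if __name__ == "__main__":
        # Example: the operator reports 0.9 trust in a system that is correct 70% of the time.
        print(calibration_gap(perceived_trust=0.9, observed_reliability=0.7))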

  • de Visser, E. J., Cohen, M., Freedy, A., & Parasuraman, R. (2014, June). A design methodology for trust cue calibration in cognitive agents. In International Conference on Virtual, Augmented and Mixed Reality (pp. 251-262). Springer International Publishing.
  • de Visser, E. J., Dorfman, A., Cohen, M., Srivastava, N., Eck, C., & Hassell, S. (2015). CyberViz: A Tool for Trustworthiness Visualization of Projected Cyber Threats. In Proceedings of the IEEE VIS VizSec Workshop.