The Trust Lab is leading research on the interaction between humans and humanlike machines

Robots and autonomous agents are being designed with increasingly human traits so that they can collaborate better in human-robot teams. However, we do not yet understand the psychological impact of working with automation that has "humanness" — and even without anthropomorphic characteristics, people often humanize decidedly non-human robots. As robots quickly transition from novelties to workplace collaborators, The Trust Lab seeks to understand the psychological implications of humans working alongside humanlike machines.

Director: Dr. Ewart de Visser

Dr. Ewart de Visser is the Director of Human Factors and UX Research at Perceptronics Solutions, the Director of the Trust Lab, and an expert in human-automation collaboration design and research. For the last 10 years, he has specialized in developing advanced adaptive planning and decision-aiding interfaces to support unmanned vehicle systems. Dr. de Visser has created novel design methodologies for unmanned vehicle interface support and metrics to assess human-robot performance, and has developed new paradigms to support operator trust in complex automated systems. He has also explored human-automation trust from cognitive, social, and neural perspectives. His Ph.D. dissertation dealt directly with the neuroergonomics of trust in automated systems, for which he won an American Psychological Association Research Dissertation Award. Dr. de Visser has published numerous papers on his trust research and his human-automation design work. He received his Ph.D. in applied cognitive psychology from George Mason University and his B.A. in Film Studies from the University of North Carolina at Wilmington. He also received his propedeuse in Cognitive Artificial Intelligence (CKI) from Utrecht University in the Netherlands.

RESEARCH TOPICS

We investigate how humans can develop healthy, productive, and fun relationships with artificial intelligence.

  • Trust in Human-Robot Teams
  • Anthropomorphism and Trust
  • Politeness and Etiquette
  • Trust Repair in Autonomous Systems
  • Neural Correlates of Trust
  • Trust Cues

Trust in Human-Robot Teams

Robots are becoming increasingly common in settings ranging from medicine to the military to our own homes. Robots can take over tasks that are dangerous, dull, or dirty, lessening the workload of their human teammates and allowing humans to focus on more complex and engaging tasks. A major challenge in creating good human-robot teams is that individuals must be willing to trust these agents and give them responsibility before they can become effective teammates. Our research investigates the causes and effects of trust between humans and autonomous teammates. Measuring and studying how robots and humans coordinate, communicate, and support one another helps realize the vision of human-robot teams that equal or outperform the best human-human teams.

  • McKendrick, R., Shaw, T., de Visser, E. J., Saqer, H., Kidwell, B., & Parasuraman, R. (2013). Team performance in networked supervisory control of unmanned air vehicles: Effects of automation, working memory, and communication content. Human Factors: The Journal of the Human Factors and Ergonomics Society, 0018720813496269.
  • Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517-527.
  • Freedy, A., de Visser, E. J., Weltman, G., & Coeyman, N. (2007, May). Measurement of trust in human-robot collaboration. In Collaborative Technologies and Systems, 2007. CTS 2007. International Symposium on (pp. 106-114). IEEE.
  • de Visser, E. J., & Parasuraman, R. (2011). Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making, 5(2), 209-231.
  • Parasuraman, R., Cosenzo, K. A., & de Visser, E. (2009). Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload. Military Psychology, 21(2), 270.
  • de Visser, E. J., Parasuraman, R., Freedy, A., Freedy, E., & Weltman, G. (2006, October). A comprehensive methodology for assessing human-robot team performance for use in training and simulation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 50, No. 25, pp. 2639-2643). SAGE Publications.
 

Anthropomorphism and Trust

Between advances in computer-generated graphics and bio-mimicry robotics, many robots and agents of the near future will appear human-like, or display “anthropomorphic” features. Artificial agents that appear human may be treated differently from robots that are mechanical in appearance: in some instances they may be treated like humans, while in other cases they may be perceived as weird and untrustworthy. Anthropomorphic cues range from the obvious, such as eyes on a robot’s face, to the subtle, such as eye movements, gestures, and facial expressions, and can quickly lead us to believe that an artificial agent is human-like. Such beliefs can change how we trust and act toward artificial agents. Our research seeks to understand how anthropomorphism affects behavior and performance, and to inform the future design of our artificial teammates.

  • de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost Human: Anthropomorphism Increases Trust Resilience in Cognitive Agents. Journal of Experimental Psychology: Applied.
  • Muralidharan, L., de Visser, E. J., & Parasuraman, R. (2014, April). The effects of pitch contour and flanging on trust in speaking cognitive agents. In CHI'14 Extended Abstracts on Human Factors in Computing Systems (pp. 2167-2172). ACM.
  • de Visser, E. J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., & Parasuraman, R. (2012, September). The world is not enough: Trust in cognitive agents. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 56, No. 1, pp. 263-267). SAGE Publications.
 

Politeness and Etiquette

As robots adopt increasingly complex roles in everyday life, they must adopt increasingly complex social skills to interact in a human world. Some robots have already adopted basic social skills: the industrial robot Baxter uses digital eyes to communicate attention and confusion to its human teammates, Lowe’s OSHbots verbally interact with customers as they shop, and Honda’s ASIMO moves out of the path of oncoming foot traffic. In the near future, service robots may be even more ubiquitous, using social skills far beyond those of current agents such as Apple’s Siri and Amazon’s Alexa. If appropriately implemented, social skills such as politeness and etiquette have significant positive effects on robot-to-human interaction: our research has found that proper automation etiquette increases trust, situation awareness, and performance. Etiquette may also lead users to perceive agents as more human, which increases trust resilience and forgiveness even when an agent gives imperfect advice. The sketch after the references below illustrates the kind of etiquette manipulation we study.

  • de Visser, E. J., Shaw, T., Rovira, E., & Parasuraman, R. (2009). Could you be a little nicer? Pushing the right buttons with automation etiquette. In Proceedings of the 17th International Ergonomics Association Meeting.
  • de Visser, E. J., & Parasuraman, R. (2010). A Neuroergonomic Perspective on Human-Automation Etiquette and Trust. In T. Marek, W. Karwowski & V. Rice (Eds.), Advances in Understanding Human Performance: Neuroergonomics, Human Factors Design, and Special Populations (pp. 211-219). Orlando: CRC.
  • de Visser, E. J., & Parasuraman, R. (2010). The Social Brain: Behavioral, Computational, and Neuroergonomic Perspectives. In C. C. Hayes & C. A. Miller (Eds.), Human-Computer Etiquette (pp. 263-288). Boca Raton: Auerbach.
  • de Visser, E. J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., & Parasuraman, R. (2012, September). The world is not enough: Trust in cognitive agents. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 56, No. 1, pp. 263-267). SAGE Publications.
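
To make the idea concrete, here is a minimal, hypothetical sketch of this kind of etiquette manipulation: the same automation recommendation phrased at different etiquette levels. The levels, message templates, and function names are invented for illustration, not an implementation from any of the papers above.

```python
# A hypothetical illustration of an automation-etiquette manipulation.
# The etiquette levels and message templates are invented for this sketch.

RECOMMENDATION_TEMPLATES = {
    # Abrupt, imperative phrasing: interrupts and commands the operator.
    "rude": "Wrong choice. Check sector {sector} immediately.",
    # Neutral, machine-like phrasing: states the output with no social framing.
    "neutral": "Recommendation: check sector {sector}.",
    # Polite phrasing: hedged and non-interruptive, acknowledging the operator.
    "polite": ("When you have a moment, you may want to check sector {sector}; "
               "it currently looks like the most likely match."),
}

def phrase_recommendation(sector: str, etiquette: str = "polite") -> str:
    """Render the same automation recommendation at a given etiquette level."""
    return RECOMMENDATION_TEMPLATES[etiquette].format(sector=sector)

if __name__ == "__main__":
    for level in ("rude", "neutral", "polite"):
        print(f"{level:>8}: {phrase_recommendation('B-7', level)}")
```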
 

Trust Repair in Autonomous Systems

Autonomous systems will inevitably make mistakes just as humans do, but unlike humans, most machines cannot apologize for their mistakes or explain why errors occurred. As a result, mistakes can cause perfectly functional systems to be critically underutilized. Our lab is studying methods to give machines the capability to appropriately regain trust after errors, equipping them with some of the same interpersonal repair skills that humans have. This research includes studying which repair strategies best fit which types of errors across a range of interaction contexts. We are particularly focused on autonomous systems such as self-driving cars: this novel technology has the potential to dramatically change society, but potential passengers are frequently distrustful, and self-driving cars must respond appropriately to their own errors before passengers will be comfortable riding in them. Through this work, we hope to understand which responses and repairs are most helpful in each situation, while avoiding excessive trust that can lead to dangerous over-reliance. A minimal sketch of this error-to-repair matching appears after the reference below.

  • Marinaccio, K., Kohn, S., Parasuraman, R., & de Visser, E. J. (2015, June). A Framework for Rebuilding Trust in Social Automation Across Health-Care Domains. In Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care (Vol. 4, No. 1, pp. 201-205). SAGE Publications.
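
The sketch below is a minimal illustration of the error-to-repair matching idea described above, not the lab's actual model: it maps hypothetical error types to repair utterances that apologize or explain, the two repair capabilities named in this section. All error categories, messages, and thresholds are invented assumptions.

```python
# A minimal sketch of matching repair strategies to error types.
# Error kinds, severities, and utterances are invented assumptions.

from dataclasses import dataclass

@dataclass
class RobotError:
    kind: str        # e.g., "wrong_route", "sensor_dropout", "hard_brake"
    severity: float  # 0.0 (trivial) .. 1.0 (safety-critical)

# Hypothetical mapping from error kind to a repair utterance. An apology
# acknowledges fault; an explanation reveals why the error occurred.
REPAIR_UTTERANCES = {
    "wrong_route": "I'm sorry, I chose a slower route than I should have.",
    "sensor_dropout": ("My camera feed dropped for two seconds, so I slowed "
                       "down until it recovered."),
    "hard_brake": ("I'm sorry for braking sharply; the vehicle ahead stopped "
                   "suddenly and I prioritized keeping a safe distance."),
}

def repair_message(error: RobotError) -> str:
    """Pick a repair utterance, escalating the response for severe errors."""
    base = REPAIR_UTTERANCES.get(
        error.kind, "Something went wrong, and I am reviewing what happened.")
    if error.severity > 0.7:
        base += " I have flagged this event for my operator to review."
    return base

if __name__ == "__main__":
    print(repair_message(RobotError(kind="hard_brake", severity=0.9)))
```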
 

Neural Correlates of Trust

Humans' tendency to trust computers, robots, or other humans is driven by a complex system of brain structures and neurochemicals. Our research seeks to characterize how these systems shape trust. Recent topics include mapping the brain areas responsible for trust and understanding how the peptide oxytocin affects interaction between humans and machines. Understanding these neural correlates enables us to better understand and control the factors that influence trust. As automated agents become increasingly social, this work may be applied to create agents that are less prone to misuse or disuse.

  • de Visser, E. J., & Parasuraman, R. (2010). A Neuroergonomic Perspective on Human-Automation Etiquette and Trust. In T. Marek, W. Karwowski & V. Rice (Eds.), Advances in Understanding Human Performance: Neuroergonomics, Human Factors Design, and Special Populations (pp. 211-219). Orlando: CRC.
  • Parasuraman, R., de Visser, E. J., Wiese, E., & Madhavan, P. (2014, September). Human Trust in Other Humans, Automation, Robots, and Cognitive Agents: Neural Correlates and Design Implications. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 58, No. 1, pp. 340-344). SAGE Publications.
  • Krueger, F., Parasuraman, R., Moody, L., Twieg, P., de Visser, E. J., McCabe, K., ... & Lee, M. R. (2013). Oxytocin selectively increases perceptions of harm for victims but not the desire to punish offenders of criminal offenses. Social cognitive and affective neuroscience, 8(5), 494-498.
  • Goodyear, K., Parasuraman, R., Chernyak, S., de Visser, E. J., Madhavan, P., Deshpande, G., & Krueger, F. (2016). An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents. Social Neuroscience.
 

Trust Cues

Our research includes mapping factors that can help users understand how the “black box” of automation arrives at complex decisions. Decision support systems can be extremely complex, incorporating multiple data streams, which makes it increasingly difficult for any operator to comprehend how the system reaches a decision. To address this, we developed a design methodology for creating trust cues that help operators calibrate their perceived trust in a system closer to its actual trustworthiness. A trust cue is any information that informs the user about the trustworthiness of an agent, such as what the agent is doing, how it is doing it, and what goals it has. With these cues, users can better calibrate trust, which can lead to optimal decision-making with reduced workload. A minimal sketch of this calibration idea follows the references below.

  • de Visser, E. J., Cohen, M., Freedy, A., & Parasuraman, R. (2014, June). A design methodology for trust cue calibration in cognitive agents. In International Conference on Virtual, Augmented and Mixed Reality (pp. 251-262). Springer International Publishing.
  • de Visser, E. J., Dorfman, A., Cohen, M., Srivastava, N., Eck, C., & Hassell, S. (2015). CyberViz: A Tool for Trustworthiness Visualization of Projected Cyber Threats. In Proceedings of the IEEE VIS VizSec Workshop.
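
The following is a minimal sketch, with invented numbers and labels, of the calibration idea behind trust cues: trust is well calibrated when perceived trust tracks a system's actual trustworthiness, while gaps in either direction signal over-trust (risking over-reliance) or under-trust (risking disuse).

```python
# A minimal sketch of trust calibration with invented scales: both perceived
# trust and actual trustworthiness are expressed on a 0..1 scale, and the
# tolerance that counts as "calibrated" is an arbitrary assumption.

def calibration_status(perceived_trust: float,
                       actual_trustworthiness: float,
                       tolerance: float = 0.1) -> str:
    """Classify the gap between perceived trust and actual trustworthiness."""
    gap = perceived_trust - actual_trustworthiness
    if gap > tolerance:
        return "over-trust (risk of dangerous over-reliance)"
    if gap < -tolerance:
        return "under-trust (risk of disuse)"
    return "calibrated"

if __name__ == "__main__":
    # An operator who rates trust at 0.9 in an aid that is correct 70% of the
    # time is over-trusting; a trust cue should nudge that rating downward.
    print(calibration_status(perceived_trust=0.9, actual_trustworthiness=0.7))
```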

PUBLICATIONS

2016

 

de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016, August 8). Almost Human: Anthropomorphism Increases Trust Resilience in Cognitive Agents. Journal of Experimental Psychology: Applied.

de Visser, E. J., Dorfman, A., Chartrand, D., Lamon, J., Freedy, E., & Weltman, G. (2016). Building resilience with the stress resilience training system: Design validation and applications. Work, (Preprint), 1-16.

Walliser, J. C., de Visser, E. J., & Shaw, T. H. (2016). Application of a system-wide trust strategy when supervising multiple autonomous agents. In Proceedings of the 60th Human Factors and Ergonomics Society Annual Meeting.

Goodyear, K., Parasuraman, R., Chernyak, S., de Visser, E. J., Madhavan, P., Deshpande, G., & Krueger, F. (2016). An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents. Social Neuroscience.

2015

 

de Visser, E. J., Freedy, E., Payne, J. J., & Freedy, A. (2015). AREA: A Mobile Application for Rapid Epidemiology Assessment. Procedia Engineering, 107, 357-365.

Monfort, S. S., de Visser, E. J., Denton, D., & Cohen, M. (2015). Size, complexity, and organization: Assessing user error in Bayesian networks and influence diagrams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 59, No. 1, pp. 836-840). SAGE Publications.

Marinaccio, K., Kohn, S., Parasuraman, R., & de Visser, E. J. (2015, June). A Framework for Rebuilding Trust in Social Automation Across Health-Care Domains. In Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care (Vol. 4, No. 1, pp. 201-205). SAGE Publications.

de Visser, E. J., Dorfman, A., Cohen, M., Srivastava, N., Eck, C., & Hassell, S. (2015). CyberViz: A Tool for Trustworthiness Visualization of Projected Cyber Threats. In Proceedings of the IEEE VIS VizSec Workshop.

2014

 

Parasuraman, R., de Visser, E. J., Wiese, E., & Madhavan, P. (2014, September). Human Trust in Other Humans, Automation, Robots, and Cognitive Agents: Neural Correlates and Design Implications. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 58, No. 1, pp. 340-344). SAGE Publications.

de Visser, E. J., Cohen, M., Freedy, A., & Parasuraman, R. (2014). A design methodology for trust cue calibration in cognitive agents. In Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments (pp. 251-262). Springer International Publishing.

Muralidharan, L., de Visser, E. J., & Parasuraman, R. (2014, April). The effects of pitch contour and flanging on trust in speaking cognitive agents. In CHI'14 Extended Abstracts on Human Factors in Computing Systems (pp. 2167-2172). ACM.

Ahmed, N., de Visser, E., Shaw, T., Parasuraman, R., Mohammed-Amin, A., & Campbell, M. (2014, March). A Look at Probabilistic Gaussian Process, Bayes Net, and Classifier Models for Prediction and Verification of Human Supervisory Performance. In 2014 AAAI Spring Symposium Series.

Ahmed, N., de Visser, E. J., Shaw, T., Mohamed-Ameen, A., Campbell, M., & Parasuraman, R. (2014). Statistical modelling of networked human-automation performance using working memory capacity. Ergonomics, 57(3), 295-318.

2013

 

Smith, M. A., Woo, H. J., Parker, J. P., Youmans, R. J., LeGoullon, M., Weltman, G., & de Visser, E. J. (2013, September). Using Iterative Design and Testing Towards the Development of SRTS®: A Mobile, Game-Based Stress Resilience Training System. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 57, No. 1, pp. 2076-2080). SAGE Publications.

de Visser, E. J., Kidwell, B., Payne, J., Lu, L., Parker, J., Brooks, N., ... & Parasuraman, R. (2013, September). Best of Both Worlds: Design and Evaluation of an Adaptive Delegation Interface. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 57, No. 1, pp. 255-259). SAGE Publications.

McKendrick, R., Shaw, T., de Visser, E. J., Saqer, H., Kidwell, B., & Parasuraman, R. (2013). Team performance in networked supervisory control of unmanned air vehicles: Effects of automation, working memory, and communication content. Human Factors: The Journal of the Human Factors and Ergonomics Society, 0018720813496269.

Krueger, F., Parasuraman, R., Moody, L., Twieg, P., de Visser, E. J., McCabe, K., ... & Lee, M. R. (2013). Oxytocin selectively increases perceptions of harm for victims but not the desire to punish offenders of criminal offenses. Social cognitive and affective neuroscience, 8(5), 494-498.

Brooks, N., de Visser, E. J., Chabuk, T., Freedy, E., & Scerri, P. (2013, May). An approach to team programming with markup for operator interaction. In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems (pp. 1355-1356). International Foundation for Autonomous Agents and Multiagent Systems.

2012

 

de Visser, E. J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., & Parasuraman, R. (2012, September). The world is not enough: Trust in cognitive agents. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 56, No. 1, pp. 263-267). SAGE Publications.

de Visser, E. J., & Krueger, F. (2012). Interpersonal trust as a dynamic belief? The Neural Basis of Human Belief Systems, 95-110.

Parasuraman, R., de Visser, E. J., Lin, M. K., & Greenwood, P. M. (2012). Dopamine beta hydroxylase genotype identifies individuals less susceptible to bias in computer-assisted decision making. PloS one, 7(6), e39675.

de Visser, E. J., Jacobs, R., Chabuk, T., Freedy, A., & Scerri, P. (2012). Design and evaluation of the Adaptive Interface Management System (AIMS) for collaborative mission planning with unmanned vehicles. In Infotech@Aerospace 2012 (p. 2528).

Saqer, H., de Visser, E. J., Strohl, J., & Parasuraman, R. (2012). Distractions N’ Driving: video game simulation educates young drivers on the dangers of texting while driving. Work, 41(Supplement 1), 5877-5879.

2011

 

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517-527.

McKendrick, R., Shaw, T., Saqer, H., de Visser, E. J., & Parasuraman, R. (2011, September). Team performance and communication within networked supervisory control human-machine systems. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 55, No. 1, pp. 262-266). SAGE Publications.

Saqer, H., de Visser, E. J., Emfield, A., Shaw, T., & Parasuraman, R. (2011, September). Adaptive Automation to Improve Human Performance in Supervision of Multiple Uninhabited Aerial Vehicles: Individual Markers of Performance. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 55, No. 1, pp. 890-893). SAGE Publications.

Miller, C. A., Shaw, T. H., Hamell, J. D., Emfield, A., Musliner, D. J., de Visser, E. J., & Parasuraman, R. (2011). Delegation to automation: performance and implications in non-optimal situations. In Engineering Psychology and Cognitive Ergonomics (pp. 322-331). Springer Berlin Heidelberg.

de Visser, E. J., & Parasuraman, R. (2011). Adaptive aiding of human-robot teaming effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making, 5(2), 209-231.

2010

 

Shaw, T., Parasuraman, R., Guagliardo, L., & de Visser, E. J. (2010). Towards Adaptive Automation: A Neuroergonomic Approach to Measuring Workload During a Command and Control Task. In T. Marek, W. Karwowski & V. Rice (Eds.), Advances in Understanding Human Performance: Neuroergonomics, Human Factors Design, and Special Populations (pp. 52-61). Orlando: CRC.

de Visser, E. J., & Parasuraman, R. (2010). A Neuroergonomic Perspective on Human-Automation Etiquette and Trust. In T. Marek, W. Karwowski & V. Rice (Eds.), Advances in Understanding Human Performance: Neuroergonomics, Human Factors Design, and Special Populations (pp. 211-219). Orlando: CRC.

de Visser, E. J., & Parasuraman, R. (2010). The Social Brain: Behavioral, Computational, and Neuroergonomic Perspectives. In C. C. Hayes & C. A. Miller (Eds.), Human-Computer Etiquette (pp. 263-288). Boca Raton: Auerbach.

Shaw, T. H., Guagliardo, L., de Visser, E. J., & Parasuraman, R. (2010, September). Using transcranial Doppler sonography to measure cognitive load in a command and control task. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 54, No. 3, pp. 249-253). SAGE Publications.

de Visser, E. J., Shaw, T., Mohamed-Ameen, A., & Parasuraman, R. (2010, September). Modeling human-automation team performance in networked systems: Individual differences in working memory count. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 54, No. 14, pp. 1087-1091). SAGE Publications.

Shaw, T., Emfield, A., Garcia, A., de Visser, E. J., Miller, C., Parasuraman, R., & Fern, L. (2010, September). Evaluating the benefits and potential costs of automation delegation for supervisory control of multiple UAVs. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 54, No. 19, pp. 1498-1502). SAGE Publications.

de Visser, E. J., LeGoullon, M., Horvath, D., Weltman, G., Freedy, A., Durlach, P., & Parasuraman, R. (2010, March). TECRA: C2 application of adaptive automation theory. In Aerospace Conference, 2010 IEEE (pp. 1-12). IEEE.

Cosenzo, K., Parasuraman, R., & de Visser, E. J. (2010). Automation strategies for facilitating human interaction with military unmanned vehicles. Human-robot interactions in future military operations, 103-124.

Jacobs, B., de Visser, E. J., Freedy, A., & Scerri, P. (2010). Application of Intelligent Aiding to Enable Single Operator Multiple UAV Supervisory Control.

2009

 

Parasuraman, R., Cosenzo, K. A., & de Visser, E. J. (2009). Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload. Military Psychology, 21(2), 270.

Parasuraman, R., de Visser, E. J., Clarke, E., McGarry, W. R., Hussey, E., Shaw, T., & Thompson, J. C. (2009). Detecting threat-related intentional actions of others: effects of image quality, response mode, and target cuing on vigilance. Journal of Experimental Psychology: Applied, 15(4), 275.

de Visser, E. J., Shaw, T., Rovira, E., & Parasuraman, R. (2009). Could you be a little nicer? Pushing the right buttons with automation etiquette. In Proceedings of the 17th International Ergonomics Association Meeting.

2008

 

de Visser, E. J., LeGoullon, M., Freedy, A., Freedy, E., Weltman, G., & Parasuraman, R. (2008, September). Designing an adaptive automation system for human supervision of unmanned vehicles: A bridge from theory to practice. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 52, No. 4, pp. 221-225). SAGE Publications.

Cohen, M. S., Sert, O., LeGoullon, M., de Visser, E. J., Freedy, A., Weltman, G., & Cummings, M. (2008, May). Cognition and game theory in the design of a collaboration manager for shared control of multiple UV assets. In Collaborative Technologies and Systems, 2008. CTS 2008. International Symposium on (pp. 513-523). IEEE.

de Visser, E. J., Cohen, M. S., Le Goullon, M., Sert, O., Freedy, A., Freedy, E., ... & Parasuraman, R. (2008). A Design Methodology for Controlling, Monitoring, and Allocating Unmanned Vehicles. In Third International Conference on Human Centered Processes (HCP-2008). Netherlands: Delft.

2007

 

de Visser, E. J., Parasuraman, R., Freedy, A., Freedy, E., & Weltman, G. (2007, October). Evaluating Situation Awareness in Human-Robot Teams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 51, No. 18, pp. 1061-1064). SAGE Publications.

de Visser, E. J., & Parasuraman, R. (2007, October). Effects of imperfect automation and task load on human supervision of multiple uninhabited vehicles. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 51, No. 18, pp. 1081-1085). SAGE Publications.

Freedy, A., de Visser, E. J., Weltman, G., & Coeyman, N. (2007, May). Measurement of trust in human-robot collaboration. In Collaborative Technologies and Systems, 2007. CTS 2007. International Symposium on (pp. 106-114). IEEE.

Freedy, A., de Visser, E. J., Weltman, G., & Coeyman, N. (2007). Mixed Initiative Team Performance Assessment System (MITPAS) for Training and Operation. In Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) (Vol. 7398, pp. 1-10).

2006

 

de Visser, E. J., Parasuraman, R., Freedy, A., Freedy, E., & Weltman, G. (2006, October). A comprehensive methodology for assessing human-robot team performance for use in training and simulation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 50, No. 25, pp. 2639-2643). SAGE Publications.