
Mittu / Sofge / Wagner: Robust Intelligence and Trust in Autonomous Systems


1st edition, 2016
ISBN: 978-1-4899-7668-0
Publisher: Springer
Format: PDF
Copy protection: PDF watermark

E-book, English, 277 pages



This volume explores the intersection of robust intelligence (RI) and trust in autonomous systems across multiple contexts, among them autonomous hybrid systems, where hybrids are arbitrary combinations of humans, machines, and robots. To better understand the relationships between artificial intelligence (AI) and RI in a way that promotes trust between autonomous systems and human users, the book examines the underlying theory, mathematics, computational models, and field applications. It is unique in unifying the fields of RI and trust and in framing them in a broader context, namely the effective integration of human-autonomous systems. A description of the current state of the art in RI and trust introduces the research work in this area. Building on that foundation, the chapters elaborate on key research areas and gaps at the heart of effective human-systems integration, including workload management, human-computer interfaces, team integration and performance, advanced analytics, behavior modeling, training, and, lastly, test and evaluation. Written by leading international researchers from across the field of autonomous systems research, Robust Intelligence and Trust in Autonomous Systems dedicates itself to thoroughly examining the challenges and trends of systems that exhibit RI, the fundamental implications of RI for developing trusted relationships with present and future autonomous systems, and the effective human-systems integration that must result for trust to be sustained.

Contributing authors: David W. Aha, Jenny Burke, Joseph Coyne, M.L. Cummings, Munjal Desai, Michael Drinkwater, Jill L. Drury, Michael W. Floyd, Fei Gao, Vladimir Gontar, Ayanna M. Howard, Mo Jamshidi, W.F. Lawless, Kapil Madathil, Ranjeev Mittu, Arezou Moussavi, Gari Palmer, Paul Robinette, Behzad Sadrfaridpour, Hamed Saeidi, Kristin E. Schaefer, Anne Selwyn, Ciara Sibley, Donald A. Sofge, Erin Solovey, Aaron Steinfeld, Barney Tannahill, Gavin Taylor, Alan R. Wagner, Yue Wang, Holly A. Yanco, Dan Zwillinger.


Further Information & Material


Preface (p. 6)
  AAAI-2014 Spring Symposium Organizers (p. 7)
  AAAI-2014 Spring Symposium: Keynote Speakers (p. 7)
  Symposium Program Committee (p. 8)
Contents (p. 12)
1 Introduction (p. 14)
  1.1 The Intersection of Robust Intelligence (RI) and Trust in Autonomous Systems (p. 14)
  1.2 Background of the 2014 Symposium (p. 15)
  1.3 Contributed Chapters (p. 17)
  References (p. 22)
2 Towards Modeling the Behavior of Autonomous Systems and Humans for Trusted Operations (p. 24)
  2.1 Introduction (p. 24)
  2.2 Understanding the Value of Context (p. 26)
  2.3 Context and the Complexity of Anomaly Detection (p. 26)
    2.3.1 Manifolds for Anomaly Detection (p. 27)
  2.4 Reinforcement Learning for Anomaly Detection (p. 28)
    2.4.1 Reinforcement Learning (p. 29)
    2.4.2 Supervised Autonomy (p. 30)
    2.4.3 Feature Identification and Selection (p. 31)
    2.4.4 Approximation Error for Alarming and Analysis (p. 32)
    2.4.5 Illustration (p. 33)
      2.4.5.1 Synthetic Domain (p. 33)
      2.4.5.2 Real-World Domain (p. 35)
  2.5 Predictive and Prescriptive Analytics (p. 39)
  2.6 Capturing User Interactions and Inference (p. 39)
  2.7 Challenges and Opportunities (p. 41)
  2.8 Summary (p. 42)
  References (p. 43)
3 Learning Trustworthy Behaviors Using an Inverse Trust Metric (p. 45)
  3.1 Introduction (p. 45)
  3.2 Related Work (p. 47)
    3.2.1 Human-Robot Trust (p. 47)
    3.2.2 Behavior Adaptation (p. 47)
  3.3 Agent Behavior (p. 49)
  3.4 Inverse Trust Estimate (p. 49)
  3.5 Trust-Guided Behavior Adaptation (p. 51)
    3.5.1 Evaluated Behaviors (p. 52)
    3.5.2 Behavior Adaptation (p. 53)
  3.6 Evaluation (p. 54)
    3.6.1 eBotworks Simulator (p. 55)
    3.6.2 Experimental Conditions (p. 55)
    3.6.3 Evaluation Scenarios (p. 56)
      3.6.3.1 Movement Scenario (p. 56)
      3.6.3.2 Patrolling Scenario (p. 58)
    3.6.4 Trustworthy Behaviors (p. 59)
    3.6.5 Efficiency (p. 62)
    3.6.6 Discussion (p. 63)
  3.7 Conclusions (p. 63)
  References (p. 64)
4 The "Trust V": Building and Measuring Trust in Autonomous Systems (p. 66)
  4.1 Introduction (p. 66)
  4.2 Autonomy, Automation, and Trust (p. 68)
  4.3 Dimensions of Trust (p. 73)
    4.3.1 Trust Dimensions Arising from Automated Systems Attributes (p. 73)
    4.3.2 Trust Dimensions Arising from Autonomous Systems Attributes (p. 74)
    4.3.3 Another Trust Dimension: SoS (p. 74)
  4.4 Creating Trust (p. 75)
    4.4.1 Building Trust In (p. 76)
  4.5 The Systems Engineering V-Model (p. 77)
  4.6 The Trust V-Model (p. 78)
    4.6.1 The Trust V Representation: Graphic (p. 79)
    4.6.2 The Trust V Representation: Array (p. 80)
    4.6.3 Trust V "Toolbox" (p. 81)
  4.7 Specific Trust Example: Chatter (p. 83)
  4.8 Measures of Effectiveness (p. 84)
  4.9 Conclusions and Next Steps (p. 86)
  A.1 Appendix (p. 87)
  References (p. 87)
5 Big Data Analytic Paradigms: From Principle Component Analysis to Deep Learning (p. 89)
  5.1 Introduction (p. 89)
  5.2 Wind Data Description (p. 90)
  5.3 Wind Power Forecasting Via Nonparametric Models (p. 90)
    5.3.1 Advanced Neural Network Architectures Application (p. 91)
    5.3.2 Wind Speed Results (p. 93)
  5.4 Introduction to Deep Architectures (p. 94)
    5.4.1 Training Deep Architectures (p. 100)
    5.4.2 Training Restricted Boltzmann Machines (p. 100)
    5.4.3 Training Autoencoders (p. 102)
  5.5 Conclusions (p. 104)
  References (p. 105)
6 Artificial Brain Systems Based on Neural Network Discrete Chaotic Dynamics. Toward the Development of Conscious and Rational Robots (p. 106)
  6.1 Introduction (p. 106)
  6.2 Background (p. 108)
  6.3 Numerical Simulations (p. 114)
  6.4 Conclusion (p. 121)
  References (p. 122)
7 Modeling and Control of Trust in Human-Robot Collaborative Manufacturing (p. 123)
  7.1 Introduction (p. 123)
  7.2 Trust Model (p. 126)
    7.2.1 Time-Series Trust Model for Dynamic HRC Manufacturing (p. 126)
    7.2.2 Robot Performance Model (p. 127)
    7.2.3 Human Performance Model (p. 127)
  7.3 Neural Network Based Robust Intelligent Controller (p. 129)
  7.4 Control Approaches: Intersection of Trust and Robust Intelligence (p. 130)
    7.4.1 Manual Mode (p. 131)
    7.4.2 Autonomous Mode (p. 131)
    7.4.3 Collaborative Mode (p. 132)
  7.5 Simulation (p. 132)
    7.5.1 Manual Mode (p. 133)
    7.5.2 Autonomous Mode (p. 135)
    7.5.3 Collaborative Mode (p. 135)
    7.5.4 Comparison of Control Schemes (p. 135)
  7.6 Experimental Validation (p. 136)
    7.6.1 Experimental Test Bed (p. 136)
    7.6.2 Experimental Design (p. 136)
      7.6.2.1 Experiment Scenario (p. 137)
      7.6.2.2 Controlled Behavioral Study (p. 139)
      7.6.2.3 Imposing Fatigue (p. 139)
      7.6.2.4 Experiment Procedure (p. 141)
      7.6.2.5 Measurements and Scales (p. 141)
    7.6.3 Experimental Results (p. 142)
      7.6.3.1 Trust Model Identification Procedure (p. 142)
      7.6.3.2 Manual Mode (p. 142)
      7.6.3.3 Autonomous Mode (p. 143)
      7.6.3.4 Collaborative Mode (p. 144)
    7.6.4 Comparison and Conclusion (p. 145)
  7.7 Conclusion (p. 147)
  References (p. 147)
8 Investigating Human-Robot Trust in Emergency Scenarios: Methodological Lessons Learned (p. 150)
  8.1 Introduction (p. 150)
  8.2 Conceptualizing Trust (p. 151)
    8.2.1 Conditions for Situational Trust (p. 153)
  8.3 Related Work on Trust and Robots (p. 155)
  8.4 Crowdsourced Narratives in Trust Research (p. 155)
    8.4.1 Iterative Development of Narrative Phrasing (p. 157)
  8.5 Crowdsourced Robot Evacuation (p. 162)
    8.5.1 Single Round Experimental Setup (p. 162)
    8.5.2 Multi-Round Experimental Setup (p. 163)
    8.5.3 Asking About Trust (p. 164)
    8.5.4 Measuring Trust (p. 165)
    8.5.5 Incentives to Participants (p. 165)
    8.5.6 Communicating Failed Robot Behavior (p. 168)
  8.6 Conclusion (p. 170)
  References (p. 171)
9 Designing for Robust and Effective Teamwork in Human-Agent Teams (p. 174)
  9.1 Introduction (p. 174)
  9.2 Related Work (p. 175)
    9.2.1 Team Structure (p. 175)
    9.2.2 Shared Mental Model and Team Situation Awareness (p. 176)
    9.2.3 Communication (p. 177)
  9.3 Experiment 1: Team Structure and Robustness (p. 178)
    9.3.1 Testbed (p. 178)
    9.3.2 Experiment Design (p. 180)
    9.3.3 Results (p. 181)
      9.3.3.1 Duplicated Work (p. 181)
      9.3.3.2 Under-Utilization of Vehicles (p. 183)
      9.3.3.3 Infrequent Communication (p. 184)
  9.4 Experiment 2: Information-Sharing (p. 185)
    9.4.1 Independent Variables (p. 185)
    9.4.2 Dependent Variables (p. 187)
    9.4.3 Participants (p. 187)
    9.4.4 Procedure (p. 188)
    9.4.5 Results (p. 188)
      9.4.5.1 Team Performance (p. 188)
      9.4.5.2 Team Coordination (p. 190)
      9.4.5.3 Workload (p. 193)
      9.4.5.4 User Preference and Comments (p. 194)
  9.5 Discussion (p. 195)
  9.6 Conclusion (p. 195)
  References (p. 196)
10 Measuring Trust in Human Robot Interactions: Development of the "Trust Perception Scale-HRI" (p. 198)
  10.1 Introduction (p. 198)
  10.2 Creation of an Item Pool (p. 200)
  10.3 Initial Item Pool Reduction (p. 202)
    10.3.1 Experimental Method (p. 203)
    10.3.2 Experimental Results (p. 204)
    10.3.3 Key Findings and Changes (p. 205)
  10.4 Content Validation (p. 205)
    10.4.1 Experimental Method (p. 206)
    10.4.2 Experimental Results (p. 207)
  10.5 Task-Based Validity Testing: Does the Score Change Over Time with an Intervention? (p. 210)
    10.5.1 Experimental Method (p. 211)
    10.5.2 Experimental Results (p. 212)
      10.5.2.1 Individual Item Analysis (p. 212)
      10.5.2.2 Trust Score Validation (p. 212)
      10.5.2.3 40 Items Versus 14 Items (p. 214)
  10.6 Task-Based Validity Testing: Does the Scale Measure Trust? (p. 215)
    10.6.1 Experimental Method (p. 215)
    10.6.2 Experimental Results (p. 216)
      10.6.2.1 Correlation Analysis of the Three Scales (p. 216)
      10.6.2.2 Pre-post Interaction Analysis (p. 217)
      10.6.2.3 Differences Across Scales and Conditions (p. 218)
    10.6.3 Experimental Discussion (p. 219)
  10.7 Conclusion (p. 219)
    10.7.1 The Trust Perception Scale-HRI (p. 219)
    10.7.2 Instruction for Use (p. 221)
    10.7.3 Current and Future Applications (p. 222)
  References (p. 223)
11 Methods for Developing Trust Models for Intelligent Systems (p. 226)
  11.1 Introduction (p. 226)
  11.2 Prior Work in the Development of Trust Models (p. 228)
    11.2.1 Trust Models (p. 230)
    11.2.2 Trust in Human-Robot Interaction (HRI) (p. 231)
  11.3 The Use of Surveys as a Method for Developing Trust Models (p. 233)
    11.3.1 Methodology (p. 234)
    11.3.2 Results and Discussion (p. 235)
    11.3.3 Modeling Trust (p. 242)
  11.4 Robot Studies as a Method for Developing Trust Models (p. 243)
    11.4.1 Methodology (p. 243)
    11.4.2 Results and Discussion (p. 250)
      11.4.2.1 Reducing Situation Awareness (SA) (p. 250)
      11.4.2.2 Providing Feedback (p. 251)
      11.4.2.3 Reducing Task Difficulty (p. 253)
      11.4.2.4 Long-Term Interaction (p. 254)
      11.4.2.5 Impact of Timing of Periods of Low Reliability (p. 256)
      11.4.2.6 Impact of Age (p. 256)
    11.4.3 Modeling Trust (p. 257)
  11.5 Conclusions and Future Work (p. 258)
  References (p. 259)
12 The Intersection of Robust Intelligence and Trust: Hybrid Teams, Firms and Systems (p. 262)
  12.1 Introduction (p. 262)
    12.1.1 Background (p. 263)
  12.2 Theory (p. 265)
  12.3 Outline of the Mathematics (p. 267)
    12.3.1 Field Model (p. 267)
    12.3.2 Interdependence (p. 269)
    12.3.3 Incompleteness and Uncertainty (p. 269)
  12.4 Evidence of Incompleteness for Groups (p. 270)
    12.4.1 The Evidence from Studies of Organizations (p. 271)
    12.4.2 Modeling Competing Groups with Limit Cycles (p. 271)
  12.5 Gaps (p. 273)
  12.6 Conclusions (p. 274)
  References (p. 275)


