
Book, English, 240 pages, format (W × H): 155 mm × 235 mm

Series: Lecture Notes in Artificial Intelligence

Calvaresi / Najjar / Omicini

Explainable and Transparent AI and Multi-Agent Systems

6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, Revised Selected Papers


ISBN: 978-3-031-70073-6
Publisher: Springer


This volume constitutes the revised selected papers of the 6th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2024, held in Auckland, New Zealand, during May 6–10, 2024.

The 13 full papers presented in this book were carefully reviewed and selected from 25 submissions. The papers are organized in the following topical sections: User-centric XAI; XAI and Reinforcement Learning; Neuro-symbolic AI and Explainable Machine Learning; and XAI & Ethics.

Further Information & Material


User-centric XAI

- Effect of Agent Explanations Using Warm and Cold Language on User Adoption of Recommendations for Bandit Problem
- Evaluation of the User-centric Explanation Strategies for Interactive Recommenders
- Can Interpretability Layouts Influence Human Perception of Offensive Sentences?
- A Framework for Explainable Multi-purpose Virtual Assistants: A Nutrition-Focused Case Study

XAI and Reinforcement Learning

- Learning Temporal Task Specifications From Demonstrations
- Temporal Explanations for Deep Reinforcement Learning Agents
- An Adaptive Interpretable Safe-RL Approach for Addressing Smart Grid Supply-side Uncertainties
- Model-Agnostic Policy Explanations: Biased Sampling for Surrogate Models

Neuro-symbolic AI and Explainable Machine Learning

- Explanation of Deep Learning Models via Logic Rules Enhanced by Embeddings Analysis, and Probabilistic Models
- py ciu image: a Python library for Explaining Image Classification with Contextual Importance and Utility
- Towards interactive and social explainable artificial intelligence for digital history

XAI & Ethics

- Explainability and Transparency in Practice: A Comparison Between Corporate and National AI Ethics Guidelines in Germany and China
- The Wildcard XAI: from a Necessity, to a Resource, to a Dangerous Decoy

