Human-Centered Artificial Intelligence: Research and Applications

About this ebook

Human-Centered Artificial Intelligence: Research and Applications presents current theories, fundamentals, techniques and diverse applications of human-centered AI. Sections address the question "Are AI models explainable, interpretable and understandable?", introduce readers to the design and development process, including mind perception and human interfaces, explore various applications of human-centered AI, including human-robot interaction, healthcare and decision-making, and more. As human-centered AI aims to push the boundaries of previously limited AI solutions to bridge the gap between machine and human, this book is an ideal update on the latest advances.
  • Presents extensive research on human-centered AI technology
  • Provides different methods and techniques used to investigate human-AI interaction
  • Discusses open questions and challenges in trust within human-centered AI
  • Explores how human-centered AI changes and operates in human-machine interactions
Language: English
Release date: May 15, 2022
ISBN: 9780323856492



    Human-Centered Artificial Intelligence

    Research and Applications

    Editors

    Chang S. Nam

    Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, United States

    Jae-Yoon Jung

    Department of Big Data Analytics / Department of Industrial and Management Systems Engineering, Kyung Hee University, Yongin, Gyeonggi, South Korea

    Sangwon Lee

    Department of Interaction Science / Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, South Korea

    Table of Contents

    Cover image

    Title page

    Copyright

    Contributors

    Foreword

    Preface

    Part I. Frameworks of explainable AI

    Chapter 1. Are AI models explainable, interpretable, and understandable?

    1.1. Artificial intelligence: human and thinking machine

    1.2. Explainability, interpretability, and understandability of AI

    1.3. Why is XAI needed?

    1.4. Categorization of XAI

    Chapter 2. Explanation using model-agnostic methods

    2.1. Introduction

    2.2. Marginal effect of input feature

    2.3. Contribution of each feature

    2.4. Surrogate models

    Appendix

    Chapter 3. Explanation using examples

    3.1. Introduction

    3.2. Category of example-based explanations

    3.3. Similarity-based methods

    3.4. Influence-based methods

    3.5. Case studies

    3.6. Summary

    Chapter 4. Explanation of ensemble models

    4.1. Introduction

    4.2. Ensemble models

    4.3. Challenges of explaining ensemble models

    4.4. Methods for interpreting ensemble models

    4.5. Conclusions

    Chapter 5. Explanation of deep learning models

    5.1. Introduction

    5.2. Activation-based models

    5.3. Backpropagation-based models

    Part II. User-centered AI design and development process

    Chapter 6. AI as an explanation agent and user-centered explanation interfaces for trust in AI-based systems

    6.1. Communication with computers: HCI and UX

    6.2. Being with friends: new rationality and trust in companion AI

    6.3. Explanation for trust: trustworthy AI and explainable AI

    6.4. Explanation for results: AI as an explanation agent and explanation interfaces

    Chapter 7. Anthropomorphism in human-centered AI: Determinants and consequences of applying human knowledge to AI agents

    7.1. Introduction

    7.2. Anthropomorphism: Using human knowledge for nonhuman targets

    7.3. Anthropomorphism in human–AI interaction

    7.4. Conclusion

    Chapter 8. Designing a pragmatic explanation for the XAI system based on the user's context and background knowledge

    8.1. Introduction

    8.2. Explanation for the XAI system

    8.3. Pragmatic explanation of van Fraassen

    8.4. Summary and conclusion

    Chapter 9. Interactive reinforcement learning and error-related potential classification for implicit feedback

    9.1. Introduction

    9.2. ErrP classification methods for implicit human feedback in RL

    9.3. Interactive reinforcement learning

    9.4. Discussion

    9.5. Conclusion

    Chapter 10. Reinforcement learning in EEG-based human-robot interaction

    10.1. Introduction

    10.2. The reinforcement learning problem

    10.3. Reinforcement learning in EEG classification

    10.4. Reinforcement learning using EEG in robot learning

    10.5. Conclusions

    Part III. Applications in human–AI interaction

    Chapter 11. Shopping with AI: Consumers' perceived autonomy in the age of AI

    11.1. Application of AI in advertising—its influence on consumers

    11.2. Prospects and concerns for AI-based advertising

    11.3. Challenges in AI-driven ads: a way to garner consumers' trust

    11.4. Implications

    Chapter 12. Use of deep learning techniques in EEG-based BCI applications

    12.1. The electroencephalogram and brain–computer interfaces

    12.2. Deep learning and EEGNet

    12.3. Preparing the environment

    12.4. Building and running the model

    12.5. Understanding the model

    12.6. Conclusions

    Chapter 13. AI in human behavior analysis

    13.1. Introduction

    13.2. Human behavior analysis using AI

    13.3. Sitting posture analysis using AI algorithms

    13.4. Conclusion

    Chapter 14. AI in nondestructive condition assessment of concrete structures: Detecting internal defects and improving prediction performance using prediction integration and data proliferation techniques

    14.1. Introduction

    14.2. Machine learning algorithms and applications

    14.3. Discussion and conclusions

    Part IV. Ethics, privacy, and policy in human–AI interaction

    Chapter 15. Ethics of AI in organizations

    15.1. Introduction

    15.2. What are the principles of ethical AI?

    15.3. Existing organizational theory

    15.4. Integrating ethical principles of AI with organizational theory

    15.5. Conclusion

    Chapter 16. Designing XAI from policy perspectives

    16.1. Introduction

    16.2. Two psychological concerns in AI

    16.3. Explainable AI

    16.4. Remaining technical and political issues

    16.5. Conclusion

    Chapter 17. Responsible AI and algorithm governance: An institutional perspective

    17.1. Introduction

    17.2. Fair machine learning

    17.3. Explainable machine learning

    17.4. Conclusion

    Author Index

    Subject Index

    Copyright

    Academic Press is an imprint of Elsevier

    125 London Wall, London EC2Y 5AS, United Kingdom

    525 B Street, Suite 1650, San Diego, CA 92101, United States

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

    Copyright © 2022 Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    ISBN: 978-0-323-85648-5

    For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

    Publisher: Nikki Levy

    Acquisitions Editor: Joslyn Chaiprasert-Paguio

    Editorial Project Manager: Barbara Makinster

    Production Project Manager: Swapna Srinivasan

    Cover Designer: Matthew Limbert

    Typeset by TNQ Technologies

    Contributors

    Miriam Tawiah Alabi,     Department of Industrial and Systems Engineering, North Carolina Agricultural and Technical State University, Greensboro, NC, United States

    Sanghyun Choo,     Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, United States

    Emmanuel Akyirefi Dadzie,     Department of Industrial and Systems Engineering, North Carolina Agricultural and Technical State University, Greensboro, NC, United States

    Veljko Dubljević,     Department of Philosophy and Religious Studies, North Carolina State University, Raleigh, NC, United States

    Taehyun Ha,     Future Technology Analysis Center, Korea Institute of Science and Technology Information, Seoul, South Korea

    Jiali Huang,     Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, United States

    Insil Huh,     Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, South Korea

    Jae-Yoon Jung,     Department of Big Data Analytics, Department of Industrial and Management Systems Engineering, Kyung Hee University, Yongin, Gyeonggi, South Korea

    Sangyeon Kim,     Department of Interaction Science, Sungkyunkwan University, Seoul, South Korea

    Wonkyung Kim,     Department of Public Relations and Advertising, BNU-HKBU United International College, Zhuhai, China

    Wonjoon Kim,     Division of Future Convergence (HCI Science Major), Dongduk Women's University, Seoul, South Korea

    Yong Min Kim,     Department of Big Data and AI, Hoseo University, Asan, South Korea

    Younghoon Lee,     Department of Data Science, Department of Industrial Engineering, Seoul National University of Science and Technology, Seoul, South Korea

    Sangwon Lee,     Department of Interaction Science, Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, South Korea

    Yushin Lee,     Major in Industrial Data Science & Engineering, Department of Industrial and Data Engineering, Pukyong National University, Busan, South Korea

    Chang S. Nam,     Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, United States

    Sean M. Noble,     Department of Psychology, North Carolina State University, Raleigh, NC, United States

    Josue Obregon,     Department of Industrial and Management Systems Engineering, Kyung Hee University, Yongin, Gyeonggi, South Korea

    Chankyung Pak,     Department of Media and Communication, Kyungpook National University, Daegu, South Korea

    Hoonseok Park,     Department of Big Data Analytics, Kyung Hee University, Yongin, Gyeonggi, South Korea

    Donghyun Park,     Department of Big Data Analytics, Kyung Hee University, Yongin, Gyeonggi, South Korea

    Yujin Park,     Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, South Korea

    Ilsun Rhiu,     Division of Future Convergence (HCI Science Major), Dongduk Women's University, Seoul, South Korea

    Young June Sah,     Department of Media & Entertainment, Sogang University, Seoul, South Korea

    Younho Seong,     Department of Industrial and Systems Engineering, North Carolina Agricultural and Technical State University, Greensboro, NC, United States

    Min K. Sim,     Department of Data Science, Department of Industrial Engineering, Seoul National University of Science and Technology, Seoul, South Korea

    Zachary Traylor,     Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, United States

    Sun Yi,     Department of Mechanical Engineering, North Carolina Agricultural and Technical State University, Greensboro, NC, United States

    Myung Hwan Yun,     Department of Industrial Engineering, Seoul National University, Seoul, South Korea

    Foreword

    Artificial Intelligence (AI) remains a dominant topic within the contemporary scientific literature, and its relevance spans the gamut of social sciences, robotics, human–computer interaction, and computer science. Yet despite this burgeoning attention within the international research community, several challenges remain, including the essential element of human-centered AI. Commensurate with computational abilities and Machine Learning (ML), the human-centered aspects of AI design, testing, use, and implementation are without a doubt some of the greatest challenges facing humanity in the 21st century. How we design, test, use, and implement AI will impact humanity for generations to come. Our success depends, in part, on how well we can integrate human-centered principles into this domain.

    Humans play a critical role across the lifecycle of AI through design, testing, evaluation, use, and broader societal acceptance/rejection. Humans design new AI algorithms and provide the inspiration for where AI can add value to humanity—or exploit it. Humans decide which training datasets are appropriate for ML applications as well as deciding the bounds for evaluating how much and what type of training is acceptable. Given the number of recent concerns regarding topics such as bias in AI, the training and testing of AI is certainly an area that can be improved. The successful use of AI will be driven, in part, by the degree to which the AI has been integrated into human task domains as either a tool or partner, and how well the AI can coordinate with the human users. Much of this involves how well the AI can communicate and coordinate with humans, the affordances for understanding the AI, and the interfaces through which humans and AI systems interact. Finally, humans hold the keys to acceptance or ultimate rejection of AI based on the values held by broader society.

    The editors should be congratulated for compiling this diverse and international set of chapters. The topic of human-centered AI is interesting, necessary, and pertinent to the future of humanity. We face epic challenges across the world in global climate change, instability, inequity, and limited resources. AI may help humanity address some (or all) of these challenges; yet, this success depends on how well human-centered principles are adopted by the broader AI community.

    Joseph B. Lyons, Ph.D.

    Principal Research Psychologist

    United States Air Force Research Laboratory

    Preface

    With the spread of artificial intelligence (AI), AI-based systems are increasingly pervading our lives. It is no longer difficult to find AI all around us, in smartphones, social media and shopping, home appliances, intelligent personal assistants, and autonomous driving. For humans to work together with AI-based systems in such environments, it is imperative to develop human-centered and human-friendly AI technologies. It is time to consider how well we understand the operation and results of AI, and how well AI technology cares for humans. From this perspective, this book introduces the technology and applications of human-centered AI and aims to discuss its future direction. The interaction between users and computers is necessarily connected to information processing. AI mimics the human brain and intelligent behaviors, which can be explained in terms of human information processing, and it is based on computing power and abilities that make computers more intelligent. In interacting with AI-based systems, the criteria that people consider important are now heading toward subjective and affective values beyond objective and rational values. Accordingly, the transition from computer-centered to user-centered perspectives accounts for an important portion of AI development. The impact of human-centered AI is therefore large; however, there are currently few books that provide students, practitioners, and researchers, including those outside academia, with a single, go-to source of state-of-the-science information on human-centered AI. This book provides up-to-date coverage for researchers, students, and practitioners, including those with no formal training in human-centered AI, offering a synopsis of key findings and of theoretical and technical advances from human-centered AI fields.

    This book is organized into four main parts. Part I, consisting of five chapters, provides a useful perspective on the paramount challenge of making AI models more explainable, interpretable, and understandable for human consumption (Chapter 1). If AI is to be used effectively in society, it must be interpretable and understandable by humans. The first section deals with AI explanation using model-agnostic methods (Chapter 2), example-based methods (Chapter 3), and model-specific methods for tree ensembles (Chapter 4) and deep learning (Chapter 5). Part II details methods for coordination between humans and AI through interfaces that can unlock the teaming potential between humans and AI-based systems, such as explanation interfaces for trust (Chapter 6), anthropomorphism in human-centered AI (Chapter 7), pragmatic explanation for XAI systems based on the user's context and background knowledge (Chapter 8), use of implicit feedback with interactive reinforcement learning (Chapter 9), and reinforcement learning in EEG-based human–robot interaction (Chapter 10). Part III presents four chapters that focus on near-term tangible applications of AI, which can help the reader understand where and how AI might be applied: consumers (Chapter 11), brain–computer interfaces (Chapter 12), human behavior analysis (Chapter 13), and human decision aids (Chapter 14). Finally, the book considers the broader societal issues that will certainly shape the acceptance or rejection of AI, such as ethics of AI in organizations (Chapter 15), designing XAI from policy perspectives (Chapter 16), and responsible AI and algorithm governance (Chapter 17). Ethical considerations in the development and use of AI are both timely and necessary if this topic is to advance the opportunity for implementing AI.

    The current book offers readers an intellectually stimulating collection of chapters focused on some of the critical human-centered AI issues. Thus, we hope that our readers find the information presented in this book timely and useful in guiding their research. On behalf of the editorial team, we would sincerely like to thank the contributing authors for their professionalism as well as commitment to the success of this book.

    Chang S. Nam

    Jae-Yoon Jung

    Sangwon Lee

    Part I

    Frameworks of explainable AI

    Outline

    Chapter 1. Are AI models explainable, interpretable, and understandable?

    Chapter 2. Explanation using model-agnostic methods

    Chapter 3. Explanation using examples

    Chapter 4. Explanation of ensemble models

    Chapter 5. Explanation of deep learning models

    Chapter 1: Are AI models explainable, interpretable, and understandable?

    Jae-Yoon Jung ¹ , ² , and Donghyun Park ¹       ¹ Department of Big Data Analytics, Kyung Hee University, Yongin, Gyeonggi, South Korea      ² Department of Industrial and Management Systems Engineering, Kyung Hee University, Yongin, Gyeonggi, South Korea

    Abstract

    Artificial intelligence (AI) has evolved over the last 50 years. Recent AI techniques, namely machine learning (ML) and deep learning, have been successfully established in many fields such as recommendation, computer vision, machine translation, social media, and system diagnostics. High-performing AI models, such as ensembles and neural networks, have far surpassed the prediction performance of traditional AI techniques such as symbolic and logic-based expert systems, but in turn they have become increasingly difficult for humans to interpret, making it hard to understand model behavior. Yet the explainability of AI and the interpretability of ML are requisite when AI-based systems are to be adopted for critical decision making in the real world. How can you trust them for important decisions if you cannot understand how the AI works? Unfortunately, systematic methods for explainable AI (XAI) are not yet mature, even in academia. This chapter introduces the meaning of XAI, as well as categories of AI explanation and interpretation methods. In addition, applications of XAI and related important issues are discussed.

    Keywords

    Artificial intelligence; Deep learning; Explainable AI (XAI); Machine learning

    1.1. Artificial intelligence: human and thinking machine

    Artificial intelligence (AI) has recently been recognized as one of the most promising technologies for revolutionizing human life. AI can be defined as a computing technology that mimics or transcends human intelligence, such as sensing, inference, learning, and cognition, to solve problems in fields such as recommendation, translation, autonomous driving, robotics, drawing and composing, and games. With the birth of computers in the mid-20th century, the concept of a thinking machine began to be explored, and the term artificial intelligence (AI) was first used at the Dartmouth Workshop in 1956. In the early days of AI technology, AI was based on logic-oriented symbolism that humans could understand, rather than the black boxes of machine learning and deep learning that we face today. Because of low computational capacity and basic logic, its performance was much lower than it is now, but it was a more human-friendly technology in which the logic could be easily understood and controlled by humans.

    AI-based systems, similarly to humans, can be composed of three major parts that (1) process sensory input such as visual, auditory, and tactile data, (2) perform thinking, such as cognition, inference, and learning, and (3) actuate motion through robots or actuators, or generate natural language such as speech or writing, as depicted in Fig. 1.1.

    Figure 1.1  Concepts of artificial intelligence systems.

    Recent advances in AI have improved all three parts of the AI-based system. In the processing of input data, whereas traditional machine learning required hand-crafted feature extraction, deep learning can learn features directly from data without prior domain knowledge through representation learning. Thinking has evolved from symbolic AI, such as logic, rules, and expert systems, to machine learning and deep learning, which can learn complex patterns from data. AI can not only solve typical problems such as regression, classification, clustering, and association rule learning, but also generate new data similar to the original data in fields such as drawing, writing, and composition (e.g., AI art). In the output part, human natural language is learned and imitated (e.g., chatbot services and deepfake video), and even human control is mimicked to control real-world systems (e.g., the acceleration and brake systems of autonomous driving), as presented in Fig. 1.2.

    Figure 1.2  Relationship among artificial intelligence, machine learning, and deep learning.

    As machine learning and deep learning techniques, which correspond to the human brain, gradually develop, the performance of AI is improving rapidly. For example, in question answering, IBM Watson already defeated human champions on the Jeopardy! quiz show in 2011, and in image processing, the accuracy of deep learning in image classification has already surpassed human accuracy: ResNet, the winner of the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) 2015, achieved a classification error of 3.6% on the ImageNet database, lower than the human classification error of about 5%. In natural language processing, Google Translate lowers the barrier between languages by providing acceptable machine translation between numerous languages (e.g., English, French, Chinese) far faster than any single human could. Google DeepMind's AlphaGo achieved an overwhelming victory over the 9-dan professional Go player Lee Sedol in 2016 in Go, known as one of the most difficult games that humans play. Google DeepMind's AlphaFold outperformed existing bioengineering approaches to the protein structure prediction problem by a large margin, a problem that had been a grand challenge in biology for the past 50 years.

    Despite these great recent performance advances on large-scale complex problems, deep learning models with very many network layers have become much more complex and less understandable and interpretable. For high-performing AI models, it is getting harder for humans to understand whether the AI's internal algorithm has been properly trained, whether there are partial malfunctions or misjudgments, and in which cases the judgment result will change. In AI-based systems, all three parts have become increasingly complex; for instance, in deep learning models, this applies to (1) the task of processing input data (e.g., feature representation), (2) the mechanism of judgment in the thinking part, and (3) the task of generating outputs (e.g., deepfake speech).

    The reason is that advanced machine learning techniques such as deep learning and tree ensembles use extremely large numbers of parameters in more complex structures to learn accurate and reliable results. Fig. 1.3 shows the relationship between the performance and complexity of representative machine learning techniques. The higher the performance, the more complex the model structure and the larger the number of model parameters to be learned, making it difficult for humans to interpret. For example, an ensemble tree such as a random forest often summarizes the results of 1000 decision trees, in which each tree may contain hundreds of decision rules. Table 1.1 shows the complexity increase and error decrease of the convolutional neural network (CNN) models that ranked high in the ILSVRC challenges since 2012.

    Figure 1.3  Trade-off between accuracy and interpretability of machine learning algorithms.

    Table 1.1  Complexity and classification error of top-ranked CNN models in the ILSVRC challenges since 2012.
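    To make this scale of complexity concrete, the following minimal sketch, assuming the scikit-learn library and an arbitrary placeholder dataset, trains a random forest of 1000 trees and counts how many decision rules a person would have to read to trace its logic by hand.

        # Minimal sketch (assumes scikit-learn): count the trees and decision rules
        # of a random forest to illustrate why such models resist manual inspection.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier

        X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

        # 1000 trees, as in the example above; each tree may hold hundreds of rules.
        forest = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X, y)

        n_trees = len(forest.estimators_)
        # Each internal (non-leaf) node of a tree corresponds to one decision rule.
        n_rules = sum(t.tree_.node_count - t.tree_.n_leaves for t in forest.estimators_)
        print(f"{n_trees} trees, {n_rules} decision rules in total")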

    In order to trust AI and apply it to crucial decision making in reality, it is requisite to understand the decision criteria of the trained machine learning and deep learning models, or at least to be able to confirm that there is no serious flaw. Unfortunately, complex machine learning and deep learning techniques have different model structures, perform learning in various ways, and use different types of parameters, so it is not easy to develop a general way to understand a trained model. Furthermore, explaining the operation of highly trained machine learning and deep learning models to nonprofessionals is a challenge that requires further research from the viewpoint of cognitive science.

    In this book, XAI refers to the processes and techniques of explaining learning-based AI models in various ways so that people can understand them. Designing smooth two-way interaction, so that AI learns well in the direction desired by humans and humans understand the behavior and results learned by AI, is called human–AI interaction. The remaining chapters of Part I introduce several methods for XAI in its narrow meaning: techniques that show humans the AI model learned by an AI algorithm.

    1.2. Explainability, interpretability, and understandability of AI

    With the rapid growth of AI technology, the concept of XAI has been considered recently in broad fields. In this section, the concept and definition of XAI are first introduced, and the conceptual difference between similar terms such as explainability, interpretability, and understandability is then discussed.

    In the early era of AI research, XAI did not attract great interest, because improving the performance of AI was more attractive than explaining it when applying AI to practical problem solving. Moreover, symbolic AI, such as expert systems, is relatively more comprehensible than modern AI because of its innate design and its mechanism of expressing human knowledge as logic and rules.

    The concept of explainability was initially considered in intelligent tutoring systems and expert systems (Mueller et al., 2019). Back then, however, the concept meant only telling the rules of the system. The systems were not properly explained or interpreted merely by the rules describing how inputs are converted into outputs; the rules only justified how the systems produced their results, rather than providing explanation or interpretation. In computer science, when the input–output process is clearly and correctly described in a logical or statistical way, the process is considered explainable. However, today's much more complicated systems, such as machine learning models and deep neural networks, are hard to explain in such a simple logical or statistical way.

    The concept of XAI was first introduced by van Lent et al. (2004), referring to the ability of a system to describe the behavior of AI-controlled entities in simulation game applications (Adadi & Berrada, 2018; van Lent et al., 2004). The spread of XAI, however, began when DARPA (the Defense Advanced Research Projects Agency), the military research agency of the United States, led XAI research with massive government funding. DARPA stated that the goal of XAI is to develop explainable models that retain high learning performance (Gunning et al., 2019). In general, higher explainability comes with lower model performance, so researchers are trying to create novel XAI methods that achieve high explainability without sacrificing model performance.

    Many studies have suggested definitions of XAI; however, there is not yet a generally accepted one. Adadi and Berrada (2018) argued that XAI is not just a technical term, but is closer to a movement, an initiative, and an effort toward the transparency of AI and toward addressing concerns about trustworthy AI technology. In this definition, an interpretable system is regarded as explainable when the behavior of the system is understandable to humans. Although there are several similar terms, such as understandable, comprehensible, or intelligible, these terms are used differently according to the user's expertise or background knowledge.

    Montavon et al. (2018) distinguish the concepts of interpretation and explanation. Interpretation is a process of mapping abstract concepts into a domain that humans can understand, while an explanation is the set of features that affect the result for a single example. In other words, an explanation can tell which features affected the result of the AI model with respect to one instance of a sample dataset.
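    As an illustration of this notion of a per-example explanation, the following minimal sketch, assuming the third-party shap package together with a placeholder dataset and model, computes one contribution value per input feature for a single prediction.

        # Minimal sketch (assumes the `shap` package and scikit-learn):
        # per-example feature attributions, i.e., an "explanation" for one instance.
        import shap
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import RandomForestRegressor

        X, y = load_diabetes(return_X_y=True)  # placeholder dataset
        model = RandomForestRegressor(random_state=0).fit(X, y)

        explainer = shap.TreeExplainer(model)
        # One contribution per input feature for the first example in the data:
        # this vector is the explanation of that single prediction.
        shap_values = explainer.shap_values(X[:1])
        print(shap_values[0])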

    In addition, Gilpin et al. (2018) introduced the concept of completeness alongside explainability and interpretability. According to that study, an explanation of an AI model can summarize the reasons why a neural network model gives particular results; this helps users trust their models and gain insight into the causes of the decisions made by the AI models. They suggested that the explanation of AI models be evaluated in two ways: interpretability and completeness. Interpretability measures how well the system is described in a way that humans can understand, while completeness measures how accurately the system is described. It is therefore important to balance these two trade-off concepts, because the explanation becomes less accurate as the system is described in a more interpretable way.

    XAI helps to manage the development of an intelligent partner, to make it understandable, and to build trust in it. XAI also answers the questions of what data led to the results and how users can reproduce the same results when exactly the same data and situation are given (Shickel et al., 2019).

    1.3. Why is XAI needed?

    The explanation and interpretation of AI models can be leveraged for many purposes. Adadi and Berrada (2018) clearly summarize four ways in which XAI can help, whether for commercial benefit, for ethical concerns, or for regulatory considerations. They say AI models should be explained in order to justify, control, improve, and discover: (1) XAI can be used to explain the reason for a particular outcome to justify the AI model, (2) XAI helps to identify and correct errors to control the model, (3) users can utilize XAI to improve the AI model, and (4) users can discover new facts and knowledge from the AI model.

    In the real world, humans explain things for a variety of reasons. In this section, however, we assume that AI models are a kind of problem solver for humans, and then what we want to ask of XAI can be summarized in the following six questions:

    Is it solving the problem correctly? Many data-driven inductive approaches have difficulty finding solutions in the right way, because no correct guidance is given at the initial stages of learning. For example, a saliency map, an XAI method that highlights the important areas of an image, helped reveal that a Fisher Vector classifier had classified many horse pictures incorrectly by detecting copyright tags rather than learning the contours of horses (Lapuschkin et al., 2016). Likewise, when CNN models are trained on EEG signals for brain–computer interface tasks, the brain areas that are activated for a specific motor imagery can be compared with prior knowledge from brain science to check whether the CNN model is working correctly.
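    As a rough illustration of the saliency idea above (not the specific method of Lapuschkin et al.), the following minimal sketch, assuming PyTorch/torchvision and using a random tensor as a stand-in for a real image, computes a vanilla gradient saliency map for a pretrained classifier.

        # Minimal sketch (assumes PyTorch and torchvision): vanilla gradient saliency.
        import torch
        from torchvision.models import resnet18

        model = resnet18(weights="IMAGENET1K_V1").eval()

        image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image
        scores = model(image)
        top_class = scores.argmax(dim=1).item()

        # Gradient of the top-class score with respect to the input pixels
        scores[0, top_class].backward()

        # Saliency = maximum absolute gradient across color channels, one value per pixel;
        # large values mark pixels that most influence the predicted class.
        saliency = image.grad.abs().max(dim=1).values.squeeze(0)
        print(saliency.shape)  # torch.Size([224, 224])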

    How will the answer to that question change? People want to know how the prediction results change when major factors change. Basic XAI methods such as Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE) plots can graphically show the change in results according to the change of one or two factors. Moreover, one can check whether the corresponding classification value changes by changing the value of a specific column to another test value, or by applying perturbation or permutation to the column values.
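    The sketch below, assuming scikit-learn (version 1.0 or later) and a placeholder dataset and model, shows how such PDP and ICE curves might be drawn for a single feature.

        # Minimal sketch (assumes scikit-learn >= 1.0 and matplotlib):
        # partial dependence (average) and ICE (per-instance) curves for one feature.
        import matplotlib.pyplot as plt
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.inspection import PartialDependenceDisplay

        X, y = load_diabetes(return_X_y=True)  # placeholder dataset
        model = GradientBoostingRegressor(random_state=0).fit(X, y)

        # kind="both" overlays the individual ICE curves on the average PDP curve
        # for feature index 2 (body mass index in this dataset).
        PartialDependenceDisplay.from_estimator(model, X, features=[2], kind="both")
        plt.show()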

    Is it solving the problem trustworthily? Data-based techniques are very vulnerable to errors or misuse of data, and XAI can give trust in the results of AI systems. There are specific circumstances where strong assurances about AI systems are necessary, for instance, life-critical events such as medical or military decisions. When doctors use an AI system to decide whether to perform high-risk treatments for cancer patients, they
