Pedagogy
Some developers of XAI systems have recognized the need for a pedagogical foundation (e.g., the Raytheon/BBN and Rutgers projects). However, most XAI programs do not base their explanations on an explicit model of the instructional process, involving structured methods of interaction, that is in turn grounded in a theory of learning. By analogy to ITS, an XAI program should incorporate a model for evaluating and teaching proper use of the associated AI program.
Early ITSs that incorporated pedagogical models include Guidon [6, 9] and Meno-Tutor [15, 26]. RadTutor [2], which teaches diagnostic interpretation of mammogram images, is based on instructional principles (multiplicity, activeness, accommodation and adaptation, and authenticity) and methods (including modelling, coaching, fading of assistance, structured problem solving, and situated learning).
The designers of MR Tutor formulated the following requirements for a computer system to train people in image processing (quoted from [22], p. 4):
Also applicable to image categorization in general is the idea of a domain-specific description language. In MR Tutor, the Image Description Language (IDL) included functional descriptors (e.g., lesion homogeneity, lesion grouping, interior patterning) and image features (e.g., visibility, location, shape, size, intensity).
We hypothesize that analogous feature categories and feature descriptions are used by people for interpreting images in general, either formally as standards within a community of practice, or informally by individuals developing their own conscious method for interpreting and classifying images. The use of such feature languages in a variety of domains suggests that comprehending and trusting AI program interpretations, a primary objective of XAI systems, requires an image description language that conforms to the natural language used in the domain.
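To make this concrete, the following is a minimal sketch, assuming a small hypothetical feature vocabulary loosely inspired by the IDL descriptors listed above; the class, function, and descriptor names are our own illustrations, not part of MR Tutor or any existing XAI system.

```python
from dataclasses import dataclass

# Hypothetical feature language: descriptor name -> allowed values.
# The vocabulary below is illustrative, not MR Tutor's actual IDL.
IDL_VOCABULARY = {
    "homogeneity": ["homogeneous", "heterogeneous"],
    "grouping": ["solitary", "clustered", "diffuse"],
    "location": ["periventricular", "cortical", "subcortical"],
    "intensity": ["hypointense", "isointense", "hyperintense"],
}

@dataclass
class ImageDescription:
    """A case described in the domain's feature language."""
    features: dict  # descriptor name -> value drawn from IDL_VOCABULARY

    def validate(self):
        for name, value in self.features.items():
            allowed = IDL_VOCABULARY.get(name)
            if allowed is None or value not in allowed:
                raise ValueError(f"'{name}={value}' is not in the feature language")

def explain_in_domain_terms(description: ImageDescription, label: str) -> str:
    """Phrase a classifier's output in the shared feature vocabulary,
    rather than in pixel-level terms alone (illustrative only)."""
    description.validate()
    phrased = ", ".join(f"{k}: {v}" for k, v in description.features.items())
    return f"Classified as '{label}' because the lesion appears {phrased}."

# Example usage with an invented case.
case = ImageDescription({"homogeneity": "heterogeneous", "grouping": "clustered",
                         "location": "periventricular", "intensity": "hyperintense"})
print(explain_in_domain_terms(case, "demyelinating pattern"))
```

The point of the sketch is only that explanations are composed from descriptors the practitioner already uses, so the system and the user share one vocabulary for discussing cases.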
Furthermore, instructional research based in cognitive studies suggests that the chain model:
XAI generates explanations → User comprehends the explanations → User performance improves
is far too simple: it ignores the active aspect of learning, especially self-explanation. Self-explanation improves learning whether it is prompted or self-motivated [4, 5, 20]. In general, XAI programs do not facilitate self-explanation. The initial instructions given to participants provide explanatory material and may support the self-explanation process, but not all XAI projects provide such instructions. Although some projects present examples and tasks that reveal boundary conditions (e.g., what the AI gets wrong, such as false positives), thereby placing the user in a self-explanation mode, XAI methods have not generally exploited the user's active efforts to construct an explanation of the AI system.
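As one hypothetical illustration of what facilitating self-explanation could look like, the sketch below asks the user to predict and justify each case before the AI's answer and explanation are revealed; the function names, prompts, and flow are our own assumptions, not a design drawn from any cited project.

```python
def self_explanation_session(cases, ai_classify, ai_explain):
    """Hypothetical interaction loop: elicit the user's own label and
    rationale before showing the AI's, so the user's explanation-building
    effort is engaged (a sketch, not a cited XAI or ITS design)."""
    log = []
    for case in cases:
        user_label = input(f"Case {case['id']}: what label would you assign? ")
        user_reason = input("In the domain's feature language, why? ")
        ai_label = ai_classify(case)
        ai_reason = ai_explain(case)
        agree = user_label.strip().lower() == ai_label.lower()
        print(f"AI label: {ai_label} ({'agrees' if agree else 'disagrees'} with you)")
        print(f"AI explanation: {ai_reason}")
        if not agree:
            # Disagreements and boundary cases are the moments most likely
            # to prompt productive self-explanation.
            input("How would you reconcile your reasoning with the AI's? ")
        log.append({"case": case["id"], "user": user_label,
                    "user_reason": user_reason, "ai": ai_label})
    return log
```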

CONCLUSION

Some scientific contributions are common to XAI and ITS research. Both seek to promote people's learning through automated interaction and explanation. Both represent processes as formal models and algorithms in a computer program, in application domains relevant to DoD concerns. Both have found that explanations are more productive when people can respond to them interactively (e.g., by asking follow-up questions), a finding that involves theories about when and what kinds of explanations facilitate understanding. Researchers in both areas also recognize the need for pilot studies to evaluate instructional methods and procedures for assessing user understanding.
There have also been contributions of XAI that were not incorporated in the ITS work. Through the use of a symbolic problem-solving model (the embedded expert system), many ITS programs can solve new cases, but for pedagogical effectiveness, most use a curriculum of solved problems curated and organized by specialists (i.e., a “case library”), based on an ontology that has been established within the technical domain (e.g., MR Tutor [22]). It would be advantageous to couple the MR Tutor’s ability to relate cases with the ability of neural network systems to add solved cases to the library.
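A minimal sketch of that coupling is given below, assuming a hypothetical case-library interface and an already-trained classifier; the class and function names are illustrative and do not model MR Tutor's actual case-relating machinery.

```python
class CaseLibrary:
    """Hypothetical curated library of solved cases, indexed by feature
    descriptions so a tutor can relate a new case to existing ones."""

    def __init__(self):
        self.cases = []  # each entry: {"features": dict, "label": str, "curated": bool}

    def add_case(self, features: dict, label: str, curated: bool = True):
        self.cases.append({"features": features, "label": label, "curated": curated})

    def most_similar(self, features: dict, k: int = 3):
        """Rank library cases by the number of shared feature values
        (a crude stand-in for a tutor's case-relating mechanism)."""
        def overlap(case):
            return sum(1 for f, v in features.items() if case["features"].get(f) == v)
        return sorted(self.cases, key=overlap, reverse=True)[:k]

def incorporate_model_solution(library: CaseLibrary, features: dict, model_predict):
    """Let a trained classifier solve a new case, then add it to the library
    flagged as machine-solved so curators can review it before instructional use."""
    label = model_predict(features)
    library.add_case(features, label, curated=False)
    return label
```

The design choice worth noting is the `curated` flag: machine-solved cases enter the library for relating and review, but curation by specialists remains the gate to the instructional curriculum.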
Another advance is XAI research's concern with developing appropriate trust and reliance. Research has demonstrated, for instance, that global explanations alone do not promote trust [19]. ITS research has usually focused on teaching people to solve problems themselves, rather than on teaching them how to use an AI program that assists them in carrying out complicated technical activities.
In conclusion, the objective of the XAI research program, namely to develop computational aids that promote practical use of an AI tool, including a user's understanding of the system's capabilities and vulnerabilities in practical situations, is inseparable from the objectives of ITS research involving domains of professional expertise, such as medicine, electronics troubleshooting, and engineering. We described the principles of ITS design, in which an explicit pedagogical strategy is based on a cognitive theory of learning in the domain of interest, which is expressed in a model of the subject material. That is, in ITS the design of “explanation systems” is guided by a well-developed scientific framework, formalized in process models of problem solving, learning, and communication. We conclude that it will be productive for XAI researchers to view “explanation” as an aspect of an instructional process in which the user is a learner and the program is a tutor, with many of the attendant issues of developing a shared language and understanding of problem-solving methods that ITS research has considered over the past 50 years.