User (Student) Models
The student model represents the student’s knowledge relative to the
domain model, in both general and situation-specific forms [8]. The
student model may be constructed using the overlay method (the student
model is a subset of the expert domain model), the misconception/bug
method (student behavior is matched against variant or deliberately
incorrect versions of the domain model), or a machine learning method [17].
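As a concrete illustration of the overlay idea, the sketch below represents the domain model as a set of knowledge components and the student model as the subset the tutor currently credits to the student. The component names and the simple update rule are hypothetical, not drawn from any particular system.

```python
# Hypothetical sketch of an overlay student model: the domain model is a set
# of knowledge components, and the student model is the subset the tutor
# currently believes the student has mastered.

DOMAIN_MODEL = {
    "gram_stain_interpretation",
    "organism_morphology",
    "antibiotic_selection",
    "culture_site_significance",
}

class OverlayStudentModel:
    def __init__(self):
        self.mastered = set()  # overlay: a subset of DOMAIN_MODEL

    def record_evidence(self, component, correct):
        """Update the overlay from one observed student action."""
        if component not in DOMAIN_MODEL:
            raise ValueError(f"unknown knowledge component: {component}")
        if correct:
            self.mastered.add(component)
        else:
            self.mastered.discard(component)

    def gaps(self):
        """Knowledge components not yet credited to the student."""
        return DOMAIN_MODEL - self.mastered

model = OverlayStudentModel()
model.record_evidence("organism_morphology", correct=True)
print(model.gaps())
```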
Some systems can also infer the student’s ongoing approach to solving a
problem, such as a diagnostic strategy. ITS programs infer the student’s
model by asking the student directly (e.g., “Do you believe X causes Y?”)
or by interpreting the student’s reasoning steps. Misconceptions can also
be pre-enumerated in the program and made available for matching against
student behavior
[23]. The creation of models of student knowledge and reasoning
remains an ongoing concern in ITS research.
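The misconception/bug approach can likewise be sketched in a few lines: the tutor runs each pre-enumerated buggy procedure on the same problem and treats any procedure that reproduces the student’s answer as a candidate diagnosis. The subtraction bug below is a standard illustrative example, not one taken from the systems cited here.

```python
# Hypothetical bug-library matching: each entry pairs a named misconception
# with a procedure; buggy procedures whose output matches the student's
# answer are candidate diagnoses.

def correct_subtract(a, b):
    return a - b

def smaller_from_larger_bug(a, b):
    # Classic illustrative bug: always subtract the smaller number from the
    # larger, ignoring sign.
    return abs(a - b)

BUG_LIBRARY = {
    "correct": correct_subtract,
    "smaller-from-larger": smaller_from_larger_bug,
}

def diagnose(a, b, student_answer):
    """Return the names of procedures that reproduce the student's answer."""
    return [name for name, proc in BUG_LIBRARY.items()
            if proc(a, b) == student_answer]

print(diagnose(3, 7, 4))    # ['smaller-from-larger']
print(diagnose(3, 7, -4))   # ['correct']
```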
Given the focus and challenge of adding an explanation capability to the
machine learning programs, an early assumption in the XAI program was
that requiring researchers to also incorporate a student model was a
bridge too far. But XAI research conducted to date has been a reminder
that explanations need to be tailored—somehow—to the knowledge and
goals of the user. It is certainly unacceptable to assume that the
user’s understanding of the task is the same as that of the researchers
[19]. Furthermore, only the XAI research that has used post-experimental
cognitive interviews demonstrates the kind of awareness of the user that
this work requires.
On the other hand, the neural network learning method addresses
perceptual cognition, which symbolic AI, the representational foundation
of ITS research, finessed. When images are involved, they are usually
presented to the student as text, in terms of already abstracted
categories (e.g., the morphology of a cultured organism is a “rod”).
When the focus is image interpretation itself (e.g., x-ray
interpretation, [18]), manually annotated images are presented to
the student (e.g., [14]). MR Tutor [22] is a relevant exception
in the domain of Magnetic Resonance Imaging (MRI). Experts used a
predefined ontology of features to label images, and neural network
learning was used to relate patient cases. The resulting “typicality”
model enabled the student to view the distribution of disease features
across cases and enabled the tutoring program to select appropriate
problems and examples from the library ([22], pp. 5–8). However, MR
Tutor explanations are limited to relating cases, rather than
explicating the underlying causal processes that give rise to the
observed morphologies—a capability required by specialists for
recognizing and discriminating atypical manifestations of a disease.
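The cited source does not describe MR Tutor’s internal representation in detail, so the following is only a rough sketch of what a typicality model affords: assuming cases have already been mapped to feature vectors (here written by hand rather than produced by neural network learning), a case’s typicality can be scored by its distance from the centroid of its disease class and used to relate and order cases.

```python
# Rough sketch of a "typicality" ordering over cases. The feature vectors
# and case names are invented; in MR Tutor the relationships between cases
# were derived with neural network learning over expert-labelled features.
import math

CASES = {
    # case_id: (disease, feature vector)
    "case_01": ("lesion_A", [0.9, 0.1, 0.2]),
    "case_02": ("lesion_A", [0.8, 0.2, 0.1]),
    "case_03": ("lesion_A", [0.3, 0.7, 0.9]),   # atypical for lesion_A
}

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_by_typicality(disease):
    """Order a disease's cases from most to least typical."""
    vecs = [v for d, v in CASES.values() if d == disease]
    c = centroid(vecs)
    scored = [(cid, distance(v, c))
              for cid, (d, v) in CASES.items() if d == disease]
    return sorted(scored, key=lambda item: item[1])  # small distance = typical

print(rank_by_typicality("lesion_A"))
```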
Studies of radiological expertise revealed an ability to rapidly and
automatically recognize “varied normal anatomy” coupled with an
ability to describe “abnormal appearance” ([22], pp. 3–4). By
extension, the use of neural network tools for practical applications, a
primary objective of XAI research, may require users to have similar
capabilities to distinguish discrepant features of interest from normal
variation in appearance. Training—and by implication
explanation—could be oriented accordingly by presenting normal and
abnormal examples and ordering them within a cognitively justified
instructional strategy, that is, a pedagogical method.
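One plausible, purely hypothetical realization of such a pedagogical ordering is sketched below: present typical cases before atypical ones, and within each band present normal anatomy before abnormal appearances, so the student first anchors on varied normal anatomy and then learns to discriminate discrepant features from normal variation.

```python
# Hypothetical sequencing of training examples: typical normal anatomy
# first, then clearly abnormal cases, then atypical variants that require
# distinguishing discrepant features from normal variation.
EXAMPLES = [
    {"id": "n1", "label": "normal",   "typicality": 0.95},
    {"id": "a1", "label": "abnormal", "typicality": 0.90},
    {"id": "n2", "label": "normal",   "typicality": 0.40},  # unusual but normal
    {"id": "a2", "label": "abnormal", "typicality": 0.35},  # atypical presentation
]

def instructional_order(examples):
    """Typical cases before atypical; within each band, normal before abnormal."""
    def key(ex):
        band = 0 if ex["typicality"] >= 0.5 else 1
        return (band, 0 if ex["label"] == "normal" else 1, -ex["typicality"])
    return sorted(examples, key=key)

for ex in instructional_order(EXAMPLES):
    print(ex["id"], ex["label"], ex["typicality"])
```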