Publication Details

Title: Explanation and Connectionist Systems
Author: J. Diederich
Group: ICSI Technical Reports
Date: April 1989
PDF: http://www.icsi.berkeley.edu/pubs/techreports/tr-89-16.pdf

Overview:
Explanation is an important function in symbolic artificial intelligence (AI). For example, explanation is used in machine learning and for the interpretation of prediction failures in case-based reasoning. Furthermore, the explanation of the results of a reasoning process to a user who is not a domain expert must be a component of any inference system. Experience with expert systems has shown that the ability to generate explanations is crucial for the user acceptance of AI systems (Davis, Buchanan & Shortliffe 1977). In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulty generating explanation structures: their knowledge is encoded in numeric parameters (weights) and distributed throughout the system. The intention of this paper is to discuss the ability of connectionist systems to generate explanations. It is shown that connectionist systems benefit from the explicit encoding of relations and from the use of highly structured networks when realizing explanation and explanation components. Furthermore, structured connectionist systems using spreading activation have the advantage that any intermediate state of processing is semantically meaningful and can be used for explanation. The paper describes several successful applications of explanation components in connectionist systems that use highly structured networks, and it discusses possible future realizations of explanation in neural networks.
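
Illustration (not part of the original report): the claim that intermediate states of a structured, spreading-activation network are semantically meaningful can be sketched in a few lines of Python. In the hypothetical fragment below, every unit stands for a named concept; the concept labels, link weights, step count, and threshold are made-up assumptions, and the record of which units were activated, and over which links, reads directly as an explanation trace.

# Minimal sketch, assuming a localist ("structured") network whose units
# denote named concepts, so intermediate states of spreading activation
# can be read off as an explanation trace. All labels and weights below
# are illustrative assumptions, not taken from the report.

links = {
    "rain":        [("wet-streets", 0.9)],
    "wet-streets": [("slippery-road", 0.8), ("traffic-delay", 0.6)],
}

def spread(seed, steps=2, threshold=0.5):
    """Spread activation outward from a seed concept and record the trace."""
    activation = {seed: 1.0}
    frontier = {seed}
    trace = []                      # (source, target, resulting activation)
    for _ in range(steps):
        next_frontier = set()
        for src in frontier:
            for tgt, weight in links.get(src, []):
                new_act = activation[src] * weight
                if new_act > activation.get(tgt, 0.0):
                    activation[tgt] = new_act
                    trace.append((src, tgt, new_act))
                    next_frontier.add(tgt)
        frontier = next_frontier
    # Because every unit is a named concept, the trace itself is readable
    # as an explanation of how the final state was reached.
    explanation = ["%s -> %s (activation %.2f)" % (src, tgt, act)
                   for src, tgt, act in trace if act >= threshold]
    return activation, explanation

if __name__ == "__main__":
    _, explanation = spread("rain")
    for line in explanation:
        print(line)

Run with the seed "rain", the trace lists each concept reached, the link over which it was reached, and its activation level; an explanation component could verbalize such a trace directly, which is the advantage the abstract attributes to structured connectionist systems.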

Bibliographic Information:
ICSI Technical Report TR-89-016

Bibliographic Reference:
J. Diederich. Explanation and Connectionist Systems. ICSI Technical Report TR-89-016, April 1989.