Explanation and Connectionist Systems

Title: Explanation and Connectionist Systems
Publication Type: Technical Report
Year of Publication: 1989
Authors: Diederich, J.
Other Numbers: 515
Abstract

Explanation is an important function in symbolic artificial intelligence (AI). For example, explanation is used in machine learning and for the interpretation of prediction failures in case-based reasoning. Furthermore, the explanation of the results of a reasoning process to a user who is not a domain expert must be a component of any inference system. Experience with expert systems has shown that the ability to generate explanations is crucial for user acceptance of AI systems (Davis, Buchanan & Shortliffe 1977). In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulty generating explanation structures: knowledge is encoded in numeric parameters (weights) and distributed across the system.

This paper discusses the ability of connectionist systems to generate explanations. It shows that connectionist systems benefit from the explicit encoding of relations and from the use of highly structured networks when realizing explanation components. Furthermore, structured connectionist systems that use spreading activation have the advantage that every intermediate processing state is semantically meaningful and can be used for explanation. The paper describes several successful applications of explanation components in connectionist systems built on highly structured networks, and discusses possible future realizations of explanation in neural networks.
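As an illustration of that last point, the Python sketch below runs spreading activation over a small localist ("structured") network in which every unit carries a concept label, so each intermediate activation snapshot reads directly as an explanation trace. The network, node labels, decay value, and step count are invented for illustration and are not taken from the report.

from collections import defaultdict

# Weighted directed edges between labeled concept nodes.
# Because each unit stands for exactly one concept, the
# activation state at any step is semantically meaningful.
edges = {
    "penguin": [("bird", 0.9), ("cannot-fly", 0.95)],
    "bird":    [("can-fly", 0.9), ("has-feathers", 0.8)],
    "can-fly": [],
    "cannot-fly": [],
    "has-feathers": [],
}

def spread(sources, steps=3, decay=0.7):
    """Propagate activation from source nodes, recording each step."""
    activation = defaultdict(float)
    for node in sources:
        activation[node] = 1.0
    trace = [dict(activation)]  # intermediate states form the explanation
    for _ in range(steps):
        new = defaultdict(float, activation)
        for node, act in activation.items():
            for neighbor, weight in edges.get(node, []):
                new[neighbor] = max(new[neighbor], act * weight * decay)
        activation = new
        trace.append(dict(activation))
    return trace

# Print each snapshot: which concepts are active, and how strongly.
for step, state in enumerate(spread(["penguin"])):
    active = {n: round(a, 2) for n, a in state.items() if a > 0.1}
    print(f"step {step}: {active}")

Reading the trace top to bottom names the concepts that mediated the result (e.g. "penguin" activated "bird" and "cannot-fly"), which is exactly the kind of explanation structure a distributed weight matrix cannot expose directly.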

URL: http://www.icsi.berkeley.edu/pubs/techreports/tr-89-16.pdf
Bibliographic Notes

ICSI Technical Report TR-89-016

Abbreviated Authors

J. Diederich

ICSI Publication Type

Technical Report