Publication Details
Title: Learning Feature-Based Semantics with Simple Recurrent Networks
Author: A. Stolcke
Group: ICSI Technical Reports
Date: April 1990
PDF: ftp://ftp.icsi.berkeley.edu/pub/techreports/1990/tr-90-015.pdf
Overview:
The paper investigates the use of simple recurrent networks as transducers that map sequential natural language input onto non-sequential, feature-based semantics. The networks perform well on sentences containing a single main predicate (encoded by transitive verbs or prepositions) applied to multiple-feature objects (encoded as noun phrases with adjectival modifiers), and show robustness against ungrammatical inputs. A second set of experiments deals with sentences containing embedded structures. Here the network is able to process multiple levels of sentence-final embeddings but only one level of center-embedding. This turns out to be a consequence of the network's inability to retain, over intermediate phases of processing, information that is not reflected in the outputs. Two extensions to Elman's (1988) original recurrent network architecture are introduced.
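For readers unfamiliar with the architecture the report builds on, the following is a minimal illustrative sketch of an Elman-style simple recurrent network used as a sequence-to-features transducer: the hidden state is copied into context units and fed back as input at the next time step. All sizes, weight names, and encodings here are hypothetical and are not taken from the report or its extensions.

import numpy as np

# Hypothetical sketch of an Elman-style simple recurrent network (SRN).
# Layer sizes and encodings are illustrative, not the report's setup.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 10, 20, 8                          # word features in, semantic features out
W_ih = rng.normal(scale=0.1, size=(n_hidden, n_in))        # input   -> hidden
W_ch = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # context -> hidden
W_ho = rng.normal(scale=0.1, size=(n_out, n_hidden))       # hidden  -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_sequence(words):
    """Process a word sequence; return the output feature vector after each word."""
    context = np.zeros(n_hidden)             # context units start at rest
    outputs = []
    for x in words:                          # x: feature encoding of one input word
        hidden = sigmoid(W_ih @ x + W_ch @ context)
        outputs.append(sigmoid(W_ho @ hidden))
        context = hidden.copy()              # Elman step: copy hidden state to context
    return outputs

# Example: a three-word "sentence" of random input vectors; the final output
# is read off as the network's estimate of the sentence's feature-based semantics.
sentence = [rng.random(n_in) for _ in range(3)]
print(run_sequence(sentence)[-1])

The copied-back context is the network's only memory; this is the property behind the limitation noted above, since information not driven by the output targets tends to fade from the context across intermediate processing steps.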
Bibliographic Information:
ICSI Technical Report TR-90-015
Bibliographic Reference:
A. Stolcke. Learning Feature-Based Semantics with Simple Recurrent Networks. ICSI Technical Report TR-90-015, April 1990.
