There have been three main reasons for this increase in interest:

1. The scientific adequacy of the models
2. The availability of fine-grained parallel hardware to run the models
3. The demonstration of powerful connectionist learning models

The scientific adequacy of models based on a small number of coarse-grained primitives (e.g., conceptual dependency), popular in AI during the 1970s, has been called into question, and such models have largely been supplanted by a current emphasis in much of computational linguistics on lexicalist models (i.e., ones which use words to represent concepts or meanings). However, few people can doubt that words are too coarse, that.