Jim Mason
Feb 9, 2021 · 2 min read



Thanks for your fine article! As someone who has been working on models of natural language understanding for decades, I believe our brains have developed special networks for bottom-up sequential parsing to deal with those long-range dependencies. I'm not alone in that; other computational linguists have also built many kinds of sequential parsers, and the models I use can readily be imagined as simplified kinds of neural networks. See my web site

http://www.yorku.ca/jmason/asdindex.htm

for more details; a toy sketch of the bottom-up idea follows below.
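I can't reproduce the Augmented Syntax Diagram machinery in a comment, but here is a minimal, hypothetical sketch of the bottom-up, left-to-right idea: a greedy shift-reduce pass over a toy grammar. The grammar, lexicon, and category names are illustrative assumptions of mine, not the ASD formalism itself; the point is only that the subject noun and its distant verb get connected once the intervening material has been reduced.

```python
# A minimal, greedy shift-reduce parser: bottom-up, strictly left to right.
# NOT the Augmented Syntax Diagram machinery -- the toy grammar, lexicon,
# and category names below are illustrative assumptions only.

# Rewrite rules: right-hand side (tuple of categories) -> left-hand side.
GRAMMAR = {
    ("Det", "N"): "NP",
    ("RelPro", "NP", "V"): "RelCl",   # "that the cat chased"
    ("NP", "RelCl"): "NP",            # attach the relative clause to its noun
    ("NP", "V"): "S",                 # the subject finally meets its verb
}

LEXICON = {
    "the": "Det", "dog": "N", "cat": "N",
    "that": "RelPro", "chased": "V", "barked": "V",
}

def shift_reduce(words):
    """Shift one word at a time, then reduce the top of the stack greedily."""
    stack = []                                   # holds (category, content) pairs
    for word in words:
        stack.append((LEXICON[word], word))      # shift
        reduced = True
        while reduced:                           # reduce as long as any rule fits
            reduced = False
            for rhs, lhs in GRAMMAR.items():
                n = len(rhs)
                if len(stack) >= n and tuple(cat for cat, _ in stack[-n:]) == rhs:
                    stack[-n:] = [(lhs, stack[-n:])]
                    reduced = True
                    break
    return stack

# "dog" and its verb "barked" are linked only after the whole relative
# clause between them has been reduced -- a long-range dependency.
print(shift_reduce("the dog that the cat chased barked".split()))
```

This toy version reduces greedily and depends on the ordering of the rules; a serious sequential parser, whether symbolic or neural, would carry alternative analyses forward instead of committing immediately.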

The problem with my Augmented Syntax Diagram grammars, and the semantic/pragmatic processes with which they are augmented, is that they are slow and laborious to construct by hand. It would be a great improvement if they could be learned inductively, the way our brains are apparently able to do.

As for attention, it is an important and interesting component of a Card World model that I have been constructing to illustrate English-language understanding in a conversational context that also involves gesture and manipulation of non-linguistic objects (playing cards). A good model of attention is crucial to the use and understanding of pronouns and of determiners such as “this”, “that”, “these”, “those”, “a”, “the”, “some”, “all”, and so on, whether they are combined with pointing gestures or with memory for things recently attended to. I have learned a lot about mental attention by trying to build such a model that mimics human language behavior well, and, as you said in your article, it requires both top-down and bottom-up components.
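To make the pointing-versus-recency idea concrete, here is a small hypothetical sketch, not the actual Card World code: the names (AttentionStore, Card, resolve) and the simple recency list are assumptions of mine, chosen only to show how a gesture can override, and recency can stand in for, an explicit referent.

```python
# Illustrative sketch only -- not the Card World implementation.
# All class and method names here are assumptions for the example.
from dataclasses import dataclass, field

@dataclass
class Card:
    rank: str
    suit: str
    def __repr__(self):
        return f"{self.rank} of {self.suit}"

@dataclass
class AttentionStore:
    # Most recently attended objects sit at the end of the list.
    recently_attended: list = field(default_factory=list)

    def attend(self, obj):
        """Bring an object to the front of attention, e.g. after it is
        mentioned, pointed at, or manipulated."""
        if obj in self.recently_attended:
            self.recently_attended.remove(obj)
        self.recently_attended.append(obj)

    def resolve(self, determiner, noun, pointed_at=None):
        """Resolve 'this card', 'that card', 'the card', ... to an object.
        A pointing gesture wins; otherwise fall back on recency of attention."""
        if pointed_at is not None and determiner in ("this", "that"):
            self.attend(pointed_at)          # pointing redirects attention
            return pointed_at
        # No gesture: take the most recently attended object matching the noun.
        for obj in reversed(self.recently_attended):
            if noun == "card" and isinstance(obj, Card):
                return obj
        return None                          # nothing salient enough to refer to

# Usage: "Put the ace of spades on the table. ... Now turn that card over."
store = AttentionStore()
ace = Card("ace", "spades")
store.attend(ace)                                        # mention attends to it
print(store.resolve("that", "card"))                     # ace of spades, by recency
queen = Card("queen", "hearts")
print(store.resolve("this", "card", pointed_at=queen))   # queen of hearts, by gesture
print(store.resolve("the", "card"))                      # queen of hearts, now most recent
```

The design choice the sketch is meant to highlight is that gesture and recency feed the same store: pointing at something makes it the most recently attended object, so a later bare “the card” can pick it up without any further gesture.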

See my web site (above) and my Medium article, "A Computer Program That Exhibits Consciousness" for more:

https://medium.com/datadriveninvestor/a-computer-program-that-exhibits-consciousness-964ab03f61e5
