
Shallow Versus Deep AI

Jim Mason
4 min read · Apr 22, 2021


We no longer have to choose between breadth of coverage and depth of understanding

Photo by Possessed Photography on Unsplash

The dream of building intelligent computer systems goes back to the early days of computing. In 1959, for example, a General Problem Solver system was proposed and partly implemented, and its authors had great ambitions for it (see https://en.wikipedia.org/wiki/General_Problem_Solver). Already in the 1960s there were similar ambitions for computer systems that would perform general Natural Language Understanding or Machine Translation between natural languages. Most of those efforts achieved partial results, but nowhere near the success that their designers' ambitions led people to expect.

Of course, in those days computers were far less powerful than they are now: they were much slower and had far more limited memory, both high-speed working memory and slower permanent storage. People who attempted to build “intelligent” computer systems had to make trade-offs between computing time and memory usage. They could use less permanent storage by relying more on algorithms than on stored data, or they could save processing time and working memory by relying more on stored data.
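
To make that trade-off concrete, here is a minimal sketch in Python. It is my own illustration, not an example from this article or its era: a primality test that recomputes the answer on every query versus a precomputed sieve that spends memory up front so that each query becomes a single lookup.

import math

# Algorithm-heavy approach: almost no stored data,
# but computation is repeated on every query.
def is_prime_compute(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# Data-heavy approach: precompute a sieve of Eratosthenes once,
# then every query is a constant-time table lookup.
def build_sieve(limit: int) -> list[bool]:
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return sieve

sieve = build_sieve(1_000_000)    # memory cost paid once, up front
assert is_prime_compute(999_983)  # CPU cost paid on every call
assert sieve[999_983]             # near-instant lookup afterwards

The first version spends processor time on every call; the second pays a one-time memory cost of roughly one entry per integer and then answers each query with a single indexed read. Early AI designers faced exactly this kind of choice, but with both resources severely scarce.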

Even today, with vast improvements in both speed and memory of computer systems, designers of AI systems can rely more on huge…



Written by Jim Mason

I study language, cognition, and humans as social animals. You can support me by joining Medium at https://jmason37-80878.medium.com/membership
