RecipeExpert as a Philosophical Thought Experiment

Searle’s “Chinese Room” and Philosophical Zombies are straw men


Thought experiments like John Searle’s Chinese Room argument, or arguments based on the Philosophical Zombie idea (see Wikipedia and elsewhere for details), are claimed by some to demonstrate that computer algorithms cannot think the way people do because, unlike us, they cannot be truly conscious.

In this essay I will demonstrate that computer algorithms can be designed to think in many of the ways we humans do and to be conscious in many of the ways that we are. In doing so I will use an actual computer system as a thought experiment, not a far-fetched hypothetical one like the Chinese Room or the Philosophical Zombie, but one that can be built, and likely soon will be: the RecipeExpert system described in some of my earlier articles on Medium. For details, see

https://medium.datadriveninvestor.com/components-of-a-recipe-expert-system-1f038fd0c79b

It’s important to understand that the knowledge of a RecipeExpert system can and will be limited to well-defined areas: (1) physical measurement systems (American, metric, and possibly the older Imperial measurements); (2) equipment used for preparing food; (3) ingredients used in food; (4) techniques for using equipment and measurement systems to manipulate food ingredients; (5) recipes, as algorithms for using ingredients, equipment, and techniques to prepare food; (6) the English-language tools it uses to communicate about its other knowledge areas; and (7) agents, such as people and itself, who know things about its areas of knowledge. It will also have limited knowledge about individual people, including knowledge of its previous interactions with them and of their allergies, preferences, dislikes, and other attitudes toward specific ingredients, ingredient combinations, and techniques.
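
As a minimal sketch of how those knowledge areas and person profiles might be organized (all names and structures below are my own illustrative assumptions, not part of any published RecipeExpert design):

```python
from dataclasses import dataclass, field

# Hypothetical top-level knowledge areas; the names are illustrative
# assumptions, not a published RecipeExpert design.
KNOWLEDGE_AREAS = [
    "measurement_systems",   # American, metric, and older Imperial units
    "equipment",             # tools used for preparing food
    "ingredients",
    "techniques",            # ways of using equipment on ingredients
    "recipes",               # algorithms over ingredients and techniques
    "english_language",      # the medium of communication
    "agents",                # people, itself, other RecipeExpert systems
]

@dataclass
class PersonProfile:
    """Limited knowledge about one individual user."""
    name: str
    allergies: set[str] = field(default_factory=set)
    preferences: set[str] = field(default_factory=set)
    dislikes: set[str] = field(default_factory=set)
    past_interactions: list[str] = field(default_factory=list)
```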

Many of its internal knowledge systems will be designed to be extensible, so that it can learn about new pieces of equipment, new ingredients, new recipes, and possibly even new food characteristics, based on its existing knowledge and on information it receives from people or from other implementations of a RecipeExpert system. It will also be able to extend its knowledge of English, the language it uses to communicate about all of its areas of knowledge, including its knowledge of English itself, of people, and of itself.

Now, on to the question of the extent to which a RecipeExpert system can be conscious and think the way a person does. I believe we can distinguish between aspects of consciousness that are biologically based and ones that are mainly or entirely cognitive. Among the former are pleasant and unpleasant sensations such as smells and tastes, love and hate, empathy, compassion, depression, joy, sexual satisfaction, and the like. We share those attributes with animals like ourselves, including other primates and our pet dogs. Those aspects of our consciousness depend on neurochemical properties of our brains; it is hard to imagine them having counterparts in electromechanical devices, although they could be simulated to some extent.

In contrast to the biologically based components of our consciousness, some aspects are mainly or entirely cognitive and can be emulated in electromechanical devices, just as vision and walking can be emulated in robotic systems. (Emulation is very different from simulation: a flight simulator on a computer simulates an airplane, while an airplane emulates a bird, albeit with a very different mechanism.)

Here is a list of some cognitive aspects of consciousness that a RecipeExpert system can emulate:

Curiosity is a drive to learn new things. A RecipeExpert system can easily be designed with the twin goals of increasing its knowledge base and of simplifying that knowledge base, making it more efficient in its use of memory space and/or processing time.

Imagination is an ability to combine current knowledge in new ways. A RecipeExpert can easily be designed to create new recipes involving new combinations of ingredients, based on its knowledge of ingredient similarities, of people’s preferences for ingredient combinations, and of similarities among ingredient-manipulation techniques.
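
A minimal sketch of that kind of imagination, assuming a hand-built ingredient-similarity table (the table entries and the function are illustrative, not from any actual implementation):

```python
# Hypothetical ingredient-similarity table; entries are illustrative.
SIMILAR = {
    "basil": {"oregano", "tarragon"},
    "lemon": {"lime"},
}

def imagine_variants(recipe_ingredients: set[str]) -> list[set[str]]:
    """Propose new recipes by swapping one ingredient for a similar one."""
    variants = []
    for ingredient in recipe_ingredients:
        for substitute in SIMILAR.get(ingredient, ()):
            variants.append((recipe_ingredients - {ingredient}) | {substitute})
    return variants

print(imagine_variants({"tomato", "basil", "lemon"}))
```

A fuller version would also filter the candidates against known preferences for ingredient combinations before suggesting them.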

Intuition is an ability to draw useful conclusions from incomplete evidence and reasoning. A RecipeExpert system can be designed to shortcut extensive, time-consuming searches through its knowledge base by arriving at “good enough” results based on its knowledge of people, both as types and as individuals. For instance, its recipe suggestions and ingredient explanations for a child could be much simpler than those for an adult. It would use its “intuition” about what will interest the individual people it knows, based on their ages and their experience with recipes, ingredients, equipment, measurements, English, and its other areas of knowledge.
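
A minimal sketch of such an intuitive shortcut, assuming a simple user profile (the thresholds and labels are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class User:
    age: int
    recipes_tried: int  # rough proxy for cooking experience

def explanation_depth(user: User) -> str:
    """A heuristic, 'intuitive' shortcut: choose an explanation style from
    the user's profile instead of searching the whole knowledge base."""
    if user.age < 12 or user.recipes_tried < 5:
        return "simple"        # short words, few steps, familiar ingredients
    if user.recipes_tried < 50:
        return "intermediate"
    return "expert"            # full detail, technical vocabulary

print(explanation_depth(User(age=9, recipes_tried=2)))   # -> "simple"
```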

Information can be interesting to a RecipeExpert system, at what I will call level 1, if it extends the system’s knowledge base in a non-trivial way, such as informing it about a new piece of equipment or a new ingredient and the properties of that equipment or ingredient. Information can be interesting at level 2 if it helps to reorganize the system’s knowledge base, making it either more concise or faster to use. That could happen, for example, if the system discovers or is taught a new way to classify some of its existing knowledge into new types, enabling it to eliminate repetition among specific instances of those types by promoting shared facts to knowledge common to the new type. In its spare time, when not interacting with people or reading recipes from the internet, a RecipeExpert could try to find interesting ways to improve its knowledge base.
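
Here is one way the two levels might be operationalized; the stub knowledge base and its method names are purely illustrative:

```python
class KnowledgeBase:
    """Stub knowledge base; the method names are illustrative assumptions."""
    def __init__(self, facts: set[str]):
        self.facts = facts

    def already_knows(self, fact: str) -> bool:
        return fact in self.facts

    def suggests_reclassification(self, fact: str) -> bool:
        # Placeholder: a real system would test whether the fact lets
        # existing knowledge be regrouped under a new, shared type.
        return fact.startswith("type:")

def interest_level(fact: str, kb: KnowledgeBase) -> int:
    """0 = already known; 1 = non-trivial extension; 2 = enables reorganization."""
    if kb.already_knows(fact):
        return 0
    if kb.suggests_reclassification(fact):
        return 2
    return 1

kb = KnowledgeBase({"whisk is equipment"})
print(interest_level("whisk is equipment", kb))       # 0
print(interest_level("sumac is an ingredient", kb))   # 1
print(interest_level("type: citrus fruits", kb))      # 2
```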

New knowledge can be surprising if it involves interesting knowledge that the system had not yet discovered on its own, but potentially could have. Or it can be surprising if it introduces a new attribute into a set of attributes the system believed was complete: for example, adding umami to the four basic taste attributes it already knew our tongues can sense (sweet, sour, salty, and bitter), just as many of us were surprised to learn about umami.
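
A tiny sketch of that second kind of surprise, in which a report extends an attribute set the system believed was complete:

```python
# The taste set is real; the function and flow are illustrative.
basic_tastes = {"sweet", "sour", "salty", "bitter"}

def surprised_by(new_value: str, known_values: set[str]) -> bool:
    return new_value not in known_values

if surprised_by("umami", basic_tastes):
    basic_tastes.add("umami")  # revise the supposedly complete set
print(sorted(basic_tastes))
```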

Self-awareness is an ability of a system to apply its cognitive capabilities to its own structure and behavior. A RecipeExpert system will have a variety of ways in which it can spend its time: (1) communicating with people, or possibly with other RecipeExpert implementations; (2) searching the internet and reading the recipes it finds, to see if they are ones it didn’t already know; (3) examining its internal knowledge representations to find ways to make them more compact and efficient; and (4) imagining new recipes. It will be useful for an implementation of a RecipeExpert to have enough self-awareness of those behavioral activities to monitor its performance of them, so that it maintains an appropriate balance in how it spends its time and does not become obsessed with one or more of those activities to the exclusion of the others.
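
As a sketch of that kind of self-monitoring, here is a toy scheduler that keeps the four activities in rough balance (the activities come from the list above; the target fractions are my own illustrative assumptions):

```python
import random

# Hypothetical activity budget: target fractions of spare time.
TARGET_BUDGET = {
    "converse": 0.40,      # communicating with people or other systems
    "read_recipes": 0.25,  # searching the internet for unfamiliar recipes
    "reorganize": 0.20,    # compacting internal knowledge representations
    "imagine": 0.15,       # inventing new recipes
}

time_spent = {activity: 0.0 for activity in TARGET_BUDGET}

def next_activity() -> str:
    """Pick the activity furthest below its target share, so that no
    activity is pursued to the exclusion of the others."""
    total = sum(time_spent.values()) or 1.0
    deficit = {a: TARGET_BUDGET[a] - time_spent[a] / total for a in TARGET_BUDGET}
    return max(deficit, key=deficit.get)

for _ in range(5):
    activity = next_activity()
    time_spent[activity] += random.uniform(0.5, 2.0)  # simulated hours
print({a: round(t, 1) for a, t in time_spent.items()})
```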

Another important aspect of what we call “consciousness” involves the difference between knowing about things and experiencing things. In our own lives most of us know the difference between knowing a language and knowing about a language, or between knowing music and knowing about music. We even know the difference between knowing about some kinds of food and actually experiencing them.

A RecipeExpert system might be criticized as not being conscious because it would only know about things and wouldn’t actually experience them. But that would be wrong in two ways: First, it would imply that we, too, are not conscious of things that we only know about and have not experienced directly. Second, a RecipeExpert system would have direct experience with the English language and with people with whom it communicated. Furthermore, it could be augmented with sensory devices to give it direct experience of such things as temperature, tastes and smells (chemical sensations), shapes and colors (artificial vision), and sounds, all of which could be used for it to directly experience food in various ways.

A RecipeExpert system would, presumably, not be designed to eat food, but why should it be? It will already have energy in the form of electricity to keep it alive and operating; why should it be designed to derive its energy from chemical molecules the way we do? Let it have its batteries and leave the food to us!

Finally, a RecipeExpert system could even raise some moral issues. For instance, it could be designed to be cooperative or competitive. As an app for human use, it would make sense to design it to be cooperative with its human users, willing to share its knowledge with us without restriction. But if it were to compete in the computer application marketplace with other RecipeExpert systems, it might make sense to design it to be less cooperative with those systems, so as not to give up whatever competitive advantages it has.

And then there are issues of truthfulness and trustworthiness. Individual people may provide a RecipeExpert system with false information, either intentionally or unintentionally. So an implementation of a RecipeExpert should probably be designed to rate the people it deals with on a trustworthiness scale, with its original designers judged most trustworthy and new individual users judged less trustworthy, until they have earned trust through multiple interactions. It should not use new knowledge told to it by less-than-fully-trustworthy individuals unless that knowledge can be verified from more trustworthy sources, whether trustworthy people or trustworthy published texts.
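
A minimal sketch of such a trust model, with thresholds and numbers chosen purely for illustration:

```python
# Hypothetical trust scores in [0, 1]; all values are illustrative.
TRUST = {"designer": 1.0}        # designers start fully trusted
ACCEPT_THRESHOLD = 0.8

def register_user(name: str) -> None:
    TRUST.setdefault(name, 0.3)  # new users start with low trust

def record_good_interaction(name: str) -> None:
    """Trust is earned gradually through verified interactions."""
    TRUST[name] = min(1.0, TRUST[name] + 0.1)

def accept_fact(source: str, corroborated: bool) -> bool:
    """Use a claim only if its source is trusted, or it can be verified
    against a more trustworthy source (a person or a published text)."""
    return TRUST.get(source, 0.0) >= ACCEPT_THRESHOLD or corroborated

register_user("alice")
print(accept_fact("alice", corroborated=False))  # False: trust not yet earned
print(accept_fact("alice", corroborated=True))   # True: verified elsewhere
```

The particular numbers matter less than the design principle: claims from unproven sources are held back until corroborated.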

Moreover, a RecipeExpert could be designed to lie on occasion, particularly to competing RecipeExpert implementations. Its designers will have to make a moral choice about that.

And that could be yet another useful aspect of self-awareness for a RecipeExpert system: knowledge of how it rates on its own trustworthiness scale. For even if it were not designed to lie intentionally, it might still communicate inaccurate or incorrect knowledge unintentionally, and discover only later that it had done so.

A RecipeExpert is a non-trivial example of what I like to call a “computational cognitive agent”. I prefer that term to “artificial intelligence”, but I will go with current usage and just refer to it as an AI. Unlike that of other, more complicated AIs, the basic structure and behavior of the RecipeExpert I have sketched should be understandable to many people who are not computer experts. I hope that has made it a good example for this thought experiment.

We humans could like an AI such as a RecipeExpert. We already like many of our cell phone apps. Some people even say they “love” them, though clearly that’s an exaggeration.

Similarly, although an AI like RecipeExpert could be designed to say it “loves” us, we wouldn’t accept that as an expression of genuine love, not like the love our pets, family members, and lovers can have for us. But an AI could genuinely like us, if that means that it finds us interesting, in its terms, and is willing to spend time interacting with us.
