Would you want an experienced self-driving car or the latest general version?
The recent development of many artificial cognitive agents, such as robotic devices and large language models for text generation, raises an interesting general question about the value of individualization.
We humans, taking ourselves as a pinnacle of an “intelligent” species, are very much aware of ourselves as individuals with a limited extent in time and space. We are all similar in many ways, but we are all different in the specific ways that we and our brains have grown and developed since our conception, and those differences are exhibited in our behavior.
Digital computer software, in contrast, can be copied exactly, and those copies can exist across larger reaches of time and space than we humans can. Or a single copy located on a server machine somewhere can communicate electronically with many other cognitive agents, including people, across vast distances.
So the question arises: What are the advantages and disadvantages of individualization of cognitive agents?
To explore that question in a more concrete way, it is interesting to consider the example of self-driving cars.
On the one hand, it should be possible for every vehicle of a particular self-driving model to use copies of the same software, with regular updates like the ones that the operating systems of our computers undergo, dispatched from some controlling organization. That would make all those cars operate the same way in the same conditions, and could enable them to synchronize their behavior when they are operating near each other in time and space.
On the other hand, considering the range of possible driving conditions in different parts of the world, including differences in weather and differences in how pedestrians may behave in different cultures, it may be safer to have self-driving cars adjust their behavior to local conditions, possibly learning from their individual remembered experiences. But in that case, individual vehicles might learn different things from their experiences, which could even lead them into behavioral conflicts.
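The contrast between these two designs can be sketched in code. In this hypothetical example (all names, formulas, and parameters here are illustrative, not from any real system), every vehicle shares an identical base policy, but each one also carries a locally learned safety margin that shifts with its own experience, so two cars running the same software can come to behave differently:

```python
class BasePolicy:
    """Shared behavior: identical across every copy of the fleet software."""

    def braking_distance(self, speed_mps: float) -> float:
        # Toy model: reaction-time term plus a friction-limited stopping term.
        return 1.5 * speed_mps + speed_mps ** 2 / (2 * 0.7 * 9.81)


class AdaptiveVehicle:
    """A vehicle that nudges the shared policy using its own experience."""

    def __init__(self, policy: BasePolicy):
        self.policy = policy
        self.safety_margin = 1.0  # learned multiplier; starts identical fleet-wide

    def observe_near_miss(self) -> None:
        # Local learning: each near-miss makes this individual more cautious.
        self.safety_margin *= 1.1

    def braking_distance(self, speed_mps: float) -> float:
        return self.safety_margin * self.policy.braking_distance(speed_mps)


shared = BasePolicy()
car_a = AdaptiveVehicle(shared)  # operates in mild conditions
car_b = AdaptiveVehicle(shared)  # operates on icy roads, logs near-misses
for _ in range(5):
    car_b.observe_near_miss()

# Identical base software, diverging individual behavior:
print(car_a.braking_distance(20.0))
print(car_b.braking_distance(20.0))
```

The divergence is exactly the trade-off discussed above: car_b has become safer for its own icy roads, but the two vehicles would now disagree about following distances if they met on the same stretch of highway.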
Similar questions arise for artificial cognitive agents of all kinds. For example, text generators based on large language models may generate texts whose details are inconsistent with one another.
Such examples suggest that, as is the case for us humans, individualization can provide more effective behavior, especially locally in time and space, than generalization does. But they also suggest that individualization inevitably leads to some amount of inconsistent or even conflicting behavior among individuals.