Joe, you wrote, “If we are able to use AI effectively, it will enable us to examine all perspectives on an issue, get every one to participate in analyzing an issue. Incorporate as many diverse opinions as possible and examine each singly and in interaction with the others to determine with as much accuracy as possible, and as objectively, based on experiment and experience, as possible, which is the best alternative. If all information is used in the analysis, without rejecting any alternative out of hand, a new and open democratic capability that is at the same time a technocratic form will be possible.”
I think that argument rests on a much too optimistic assessment of democratic decision making. Recent electoral experience has taught us that a great many voters make their choices on the basis of very superficial information, not deep analysis of the issues. Even when they might be persuaded that choices yielding short-term benefits will have long-term ill effects, many will still choose short-term benefits for themselves at the cost of long-term harm to others.
Even if we can create AIs that respect “our human values”, we face a crisis in agreeing among our human selves what those values are. What is the relative importance of (1) individual human happiness, (2) individual human longevity, (3) survival of our species, and (4) survival of other species? In the current state of affairs on our planet, those values do not all appear compatible. If we can’t agree on which of them is foremost, or on how they can be reconciled, then our AIs will serve as agents of increasing conflict between human groups that rank different fundamental values highest.