I don't get to hear the Glenn Beck radio show every day like I did when I was working. But I did hear one recently in which Beck was ranting about the dangers of artificial intelligence (AI) taking over the world. According to Beck, AI would take all the jobs, and we would find ourselves with nothing to do. I listened with interest but just couldn't get worked up. Despite what the tech gurus want to believe, computers can never replicate, nor replace, the human mind.
At First Things, Thomas Fowler has an article entitled “AI Doesn’t Know What It’s Doing.”
Modern AI is based on ideas of human knowing that stem from the British empiricist tradition, in particular the philosophy of David Hume. Hume envisioned the body as a composite of discrete physical systems, with the senses sending their reports to the mind, which then worked on these reports. These “reports” he termed “impressions,” which gave rise to “ideas”:
I venture to affirm that the rule here holds without any exception, and that every simple idea has a simple impression, which resembles it; and every simple impression a correspondent idea.
Hume presents a theory of knowing in which senses deliver impressions, which we process as ideas. Once we have ideas, we can reason with them, either by means of logical inference, or directly as “matters of fact” (empirically grounded facts, including scientific laws). As for general ideas, they are nothing more than particular representations, connected to a certain general term. This theory quickly leads to nominalism, the belief that abstract entities do not exist and that any talk of entities such as “mankind” refers only to collections of individuals. Hume recognizes that we have such universal ideas in our minds, but they are mere labels, bearing no relation to reality. He rejects the longstanding opinion that there exist universals in themselves.
Hume was never able to explain how we arrive at forms of knowledge such as science, mathematics, and history. What “impression” gave rise to Einstein’s field equations for general relativity? Because every idea must be associated with a precedent impression resembling it, Hume could not explain how we can do something as simple as recognize a thing that is in a different position than when we first saw it—a problem that bedevils AI systems used in autonomous cars. Nor was he able to explain how it is possible to have knowledge of almost anything without recognizing abstract entities as real. For example, the statement “Beethoven’s Fifth is a great symphony” uses abstract entities as both subject and predicate. Had there never been any performance of the notes Beethoven wrote, the statement would still be meaningful and true. And the term “great symphony” refers not to a collection of performances of music, but to a real characteristic of a certain type of music composition.
You see the problem, don't you? We don't really know how the human mind works, but we know it doesn't operate on a Humean model of knowledge. There is something...deeper...that drives the human mind. We are unique in the animal kingdom in a number of ways. The most obvious is that we know good and evil.
Only in some cases, such as scientific and most historical writing, is the literal meaning of a text its principal meaning. For many works, especially works of literature and philosophy, the message or theme requires a holistic understanding of the text; it is not conveyed by any piece or excerpt that AI can scan. Often, indeed, the meaning of a work may depend on the reader’s imaginative reception of it, as is the case with poetry. And many texts have multiple levels of meaning, so that a literal reading may be true as far as it goes, while being less important than the symbolic reading. Or the real meaning of a text may be the exact opposite of its surface meaning, as in satirical writing. The purpose of much theological and poetic writing is to open a window onto a numinous world, and texts in disciplines such as philosophy may depend entirely on abstract ideas and entities. The reader of any of these kinds of texts must be able to perceive the reality behind the words—reading and understanding the entire text (including very abstract ideas and what they entail or imply), taking into consideration the writer’s goal, presuppositions, and biases, and then relating the work to others in order to ascertain its thoroughness, accuracy, and contribution value. Only thus can a thorough view of the subject emerge.
On this ground alone, it is plain that large language models will never replicate human knowing. AI can parrot what real minds have thought and said on these topics, and thus sound intelligent. What it cannot do is understand the material.
...snip...
The goal of human knowing is always to know something about reality, regardless of whether that knowledge has operational value. By contrast, neither an animal nor an AI seeks the reality of the real. AI must employ symbols, which have no meaning except that assigned to them by someone outside the computer system. The implication for the uniqueness of humans is straightforward. Those who would assimilate humans to computers use an argument with this compound premise: Humans are material only and Human functions can be reduced to algorithms. The conclusion is, Computers can duplicate human minds. But if computers cannot duplicate human minds, then it follows that either (or both) Humans are not material only or Human functions cannot be reduced to algorithms. These are fairly momentous points, and they suggest one reason why understanding what computers can and cannot do is important.
Human knowing operates on a principle that is radically different from AI’s Humean paradigm. Humans know by means of an integrated system of sensing, motor skills, and the brain. We have direct contact with reality, and we are able to know realities that exist beyond the realities we immediately perceive. This form of knowing is supremely creative. It encompasses the way in which we understand situations we have never encountered and generate new theories about reality. Humans can “think outside the box”; AI cannot. AI can, of course, generate “ideas,” understood in the rather limited sense of data structures or random chatbot statements. That is not how humans develop new theories or deal with unexpected situations. Our perception of reality is unlike anything that can be achieved by any paradigm based on separation of functions. AI algorithms cannot creatively and analytically think through a question, using information learned from reading and research, bringing to bear a critical eye for discerning what is valuable and a perception of reality for synthesizing new ideas. They can only ape human intelligence. The AI paradigm reacts to stimuli in the form of sense-type data or website texts; it cannot react, except very indirectly, to any underlying reality. It does not know what it is doing.
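Fowler's argument a few paragraphs above has a clean logical shape, and it may help to see it spelled out. Here is a sketch in propositional form; the labels M, A, and C are my own shorthand, not Fowler's:

```latex
% M: humans are material only
% A: human functions can be reduced to algorithms
% C: computers can duplicate human minds
\begin{align*}
  &\text{Premise:}       && (M \land A) \rightarrow C \\
  &\text{Suppose:}       && \neg C \\
  &\text{Modus tollens:} && \neg(M \land A) \\
  &\text{De Morgan:}     && \neg M \lor \neg A
\end{align*}
```

In other words, if computers cannot duplicate human minds, at least one of the two materialist premises has to go. That is exactly the "momentous point" Fowler draws.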
Children learn, say, arithmetic by first memorizing tables of examples: 1+1=2, 1+2=3, and so forth. It gets more complicated; for instance, 3+3=6, but 3×2 also equals 6. Slowly they begin to know what they are doing and why they are doing it, and to understand. Computers may be able to replicate these feats, but they will never understand. Which is why I do not fear AI, though I do believe it will present many problems. In the end, it is just another tool, to be used for good and (sadly) for ill.
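To make that contrast concrete, here is a minimal sketch in Python (my own illustration, not anything from Fowler's article) of the gap between reciting memorized facts and applying a general rule:

```python
# A rote "learner": a lookup table of memorized arithmetic facts,
# analogous to a child's first memorized sums (or an AI's training data).
memorized_facts = {
    ("1", "+", "1"): "2",
    ("1", "+", "2"): "3",
    ("3", "+", "3"): "6",
    ("3", "x", "2"): "6",
}

def recite(a, op, b):
    """Answer only if this exact fact was memorized; otherwise give up."""
    return memorized_facts.get((a, op, b), "???")

def apply_rule(a, b):
    """A general rule covers any instance, including ones never seen before."""
    return a + b

print(recite("1", "+", "1"))  # "2"   -- memorized
print(recite("7", "+", "5"))  # "???" -- never memorized, table is silent
print(apply_rule(7, 5))       # 12    -- the rule generalizes
```

Of course, apply_rule is itself just symbol manipulation, which is the point: a rule generalizes where a lookup table cannot, but neither one knows what addition is.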