I have had at least one article on the latest "thing" in technology known as "artificial intelligence." I put AI in scare quotes because I believe, as does Mr. Dunn, that machines designed by humans cannot achieve true intelligence. As Mr. Dunn notes, we do not understand, nor can we break down, human intuition, common sense, or our ability to take what we learn in one situation and apply it to others. Unlike the mainstream media, which is spewing panic over AI, I believe that AI is just another tool.
People are often bedazzled by the speed and seemingly awesome power of computers, but at their heart, computers provide three tools that we have been using for thousands of years: word processing, database building and manipulation (think of the telephone book), and spreadsheet functions (think mathematics and arithmetic). Gentle readers can read more at the American Thinker in an article by J. R. Dunn entitled "Artificial Intelligence: The Facts." Here is an extended excerpt:
So what is the problem here? First and above all, when we speak of AI in the 21st century, we’re discussing two distinct and separate types as if they were one and the same thing. These are what I call “App AI,” which includes ChatGPT and the numerous AI art apps making the rounds, and “General Intelligence AI,” the movie-style HALs and Skynets capable of taking over everything and doing what they damn well please.
Up until now, all that we’ve seen are App AIs. These are software, generally operating on neural nets, devoted to one particular task – text creation or artwork – that feature algorithms capable of modifying the responses of the program as it “learns” more about the task. AI learning is accomplished through “supervised learning,” in which mere humans set the parameters and goals, oversee the process, and examine and judge the results. Until now this human interaction has proven strictly necessary -- “unsupervised learning,” when it has been attempted, usually goes off the rails pretty quickly. The App AI’s single task comprises their entire universe and they can’t simply take what they’ve learned and apply it to other fields. As Erik J. Larson puts it in The Myth of Artificial Intelligence (which should be read by anybody with an interest in the topic), “…chess-playing systems don’t play the more complex game of Go. Go systems don’t even play chess.” So no such AI is ever going to quit sampling internet imagery and try to take over the Pentagon. (This also applies to the guy who claimed, a couple weeks back, that ChatGPT is already “running the financial system.”)
There’s been a lot of speculation recently as to whether these systems will supplant humans working in particular fields. The answer is no -- not yet, and probably not ever. A few weeks ago, Monica Showalter, esteemed by all AT readers, ran a Turing Test of sorts on ChatGPT. She entered the prompt “Write a piece on the future of the airline industry in the style of Thomas Lifson.” What she got was a bland, gassy, ill-written piece filled with clichés, non-sequiturs, and outright errors, none of which, I can state with authority, has ever been characteristic of Thomas’s writing. It’ll be a long time before ChatGPT takes the reins here at AT.

Dunn has it right: ChatGPT will not replace human writers anytime soon, and I think never. As for General Intelligence AI, I also agree that we are a long way off, if we ever get there. We just don't understand, or even have a clue, how the human mind actually works. Note that knowing where certain things happen in the brain is not the same as understanding things like intuition and common sense. How do we take experience in one area and apply it to something new? It will take that kind of knowledge and understanding before true machine intelligence can happen.
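To make the "single task" point from Dunn's excerpt concrete, here is a toy sketch of what supervised learning looks like under the hood. This is entirely my own illustration in Python, not anything from Mr. Dunn's article or from ChatGPT itself: a human picks the examples, supplies the correct answers, and decides when the program has learned enough.

# A toy "App AI": one task, learned under human supervision.
# The human supplies the examples, the correct labels, and the stopping rule.

# Labeled training data for exactly one task: the logical AND of two inputs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # what the program "learns"
bias = 0.0
learning_rate = 0.1

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Supervised learning: compare the program's guess to the human-provided
# answer and nudge the weights toward the correct result.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])   # prints [0, 0, 0, 1] -- it has learned AND, and only AND

The weights this little program ends up with encode the AND function and nothing else; ask it to do anything different and a human has to start the training over with new examples and new goals. That, in miniature, is why a Go program does not play chess.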
A tool for propaganda, depending on what the program has been trained on.
Mike-SMO, it is good to hear from you. Propaganda, which is the use of lies and deceptions to cover up the true actions, particularly of governments, has been around for a long time...maybe since man first walked the earth. Propaganda will become more difficult to detect as tools that make deepfakes become more easily available. Still, I don't think that intelligent readers such as yourself should be unduly afraid of AI.
But I can always be wrong. We will have to see.
Wade