By JOHN RICHARD SCHROCK
Both K–12 teachers and college professors are deep in discussion over how to deal with the recently released AI program that generates answers, essays and poetry upon request. However, Jonathan Swift (1667–1745) perhaps anticipated this situation in his classic “Gulliver’s Travels,” published in 1726. While some youngsters today may remember reading excerpts from this tale, in which Captain Gulliver ends up in countries where he is either a giant or a tiny figure, few will recall reading of his trip to the city of Lagado, where he visits the Academy of Projectors and watches them use a machine called “The Engine.”
To paraphrase Swift in modern English: In the middle of the room, a 20-foot by 20-foot table held many small wooden squares, each bearing a single word, in random order. This array included every word in their language. Forty handles at the edges of the table could be turned, completely rearranging the order of the words. Thirty-six students scanned the new combinations and, wherever three or four words in a row made sense, dictated them to four scribes who wrote down the new sequences.
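Swift’s contraption is, in essence, a brute-force text generator, and the process he describes can be sketched in a few lines of modern code. In the sketch below, the toy vocabulary and the crude “makes sense” test are invented purely for illustration; they stand in for the full language and for the judgment of the thirty-six students.

```python
import random

# Toy stand-in for "every word in their language" (illustrative only).
VOCABULARY = ["the", "sun", "rises", "over", "a", "cold", "sea", "birds", "sing"]

# Stand-in for the students' judgment: a fixed whitelist of phrases.
KNOWN_PHRASES = {("the", "sun", "rises"), ("over", "a", "cold", "sea")}

def turn_handles(words):
    """One crank of the forty handles: completely rearrange the words."""
    shuffled = list(words)
    random.shuffle(shuffled)
    return shuffled

def crank_engine(turns=10_000):
    """Collect every 3- or 4-word run the 'students' approve,
    as Swift's four scribes would write them down."""
    transcripts = set()
    for _ in range(turns):
        row = turn_handles(VOCABULARY)
        for length in (3, 4):
            for i in range(len(row) - length + 1):
                run = tuple(row[i:i + length])
                if run in KNOWN_PHRASES:
                    transcripts.add(" ".join(run))
    return transcripts

print(crank_engine())
```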
The purpose of this “Engine” was eventually to compile all of the ways that words could be put together. Thereby the total amount of knowledge possible in the world could supposedly be known and written down. While this was long before the development of the mechanical or electrical computation devices that we would call a “computer,” this 300-year-old story caused readers to stop and question the role of language in our thinking. Is our expansion in knowledge over time just a matter of rearranging words?
In 1980, philosopher John Searle (now retired from U.C.-Berkeley) detailed this problem in “Minds, Brains, and Programs,” published in the journal Behavioral and Brain Sciences. In his “Chinese Room” argument, a computer responds to questions posed in Chinese by constructing Chinese answers from programmed instructions. This would convince a receiver that they were indeed talking to a Chinese speaker, thus passing the test Alan Turing proposed in 1950 as an indicator of intelligent behavior equivalent to a human’s. But Searle then imagines himself in the room, following the same program rules to manipulate the Chinese symbols and produce the answers without understanding a word of the language himself, to show that this is not human “understanding” at all. It is just “weak AI.”
For several decades now, students have been able to type a word and their computer or smartphone will offer a series of next-most-likely words they can select without typing them out. That process has now been expanded to incorporate huge amounts of online verbiage, following rules of association that can, without any “understanding,” exhume the worst of past human expression.
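At its simplest, that suggestion feature is just word-pair bookkeeping: count which words most often follow each word, then offer the top candidates. The sketch below shows the idea with a bigram counter; the tiny training text is invented for illustration, while a phone’s model is built from vastly more data.

```python
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def suggest_next(model, word, k=3):
    """Return the k most frequent next words: association, not understanding."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Illustrative corpus only; real keyboards train on far larger text.
corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the rug")
model = build_bigram_model(corpus)
print(suggest_next(model, "the"))  # -> ['cat', 'dog', 'mat']
```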
What is missing is experience. If there is no experience with a concept, there is no fullness of meaning. The few atomic scientists and military personnel who personally witnessed an atomic bomb blast are nearly all gone. Our only substitute for such direct experience is distant experience with relatively minor explosions, if anything at all. The visual images of historical footage carry little to no emotional impact.
The phrase “no experience, no meaning” now underlies our growing dissociation from real-world phenomena, and it accelerates the ability of political actors to distort our shallow understanding through mediated messages. This basic tenet of “semantics” in the mode of Korzybski and Hayakawa underlies all meaningful communication and all teaching.
The “Engine” of Lagado was wrong. Today our scientific terminology, often derived from Latin and Greek, is massive, far beyond the limited common usage of those languages several millennia ago, and it will continue to grow. New scientific terms are added, defined by new experiences detailed in the “methods” sections of science articles. Dictionaries are history books, not law books, as I suspect Jonathan Swift understood. Artificial intelligence will remain truly artificial as long as machines cannot experience. And that includes the most recent proposals to integrate computational systems with actual neural tissue.
Therefore, answering the problem posed by future developments of ChatGPT and other word-manipulation systems requires restoring genuine real-world interactions for our younger generations of users through field trips, hands-on labs, travel, interactions with “others,” multilingual education and more. Reality and real-world experience may be dismissed in this digital age, but the real world remains the best place to get an education.
. . .
John Richard Schrock has trained biology teachers in Kansas for more than 30 years. He has also lectured at 27 universities during 20 trips to China. He holds the distinction of “Faculty Emeritus” at Emporia State University.