Human intelligence has been the defining property of the human race for thousands of years. However, with the recent rise of machine learning, amplified by breakthrough results in deep learning, this assumption is cast in a questionable light, as an opposing question becomes more relevant and polarizing than ever before:
Will computers be able to think in a way that is equivalent – or even superior – to ours?
The meaning of the word “to think” is rather ambiguous in this context. It’s therefore reasonable to rephrase the question at hand in order to avoid confusion over definitions.
The hypothesis in question, namely whether computers can think in a way that can be considered equivalent to the way humans do, is strongly related to the Strong AI position. Strong AI – broadly speaking – describes the view that computers or computer programs are, in principle, able to understand the data they’re working with (for a more detailed description, refer to Lee, 2020). Weak AI, in contrast, describes the view that machines will always merely simulate the signs of intelligence, without producing any inherent understanding of their own.
By way of example, a human doesn’t have to understand anything about chess in order to play a famous opening they studied beforehand. By contrast, coming up with sophisticated moves of one’s own is generally assumed to require an understanding of chess, at least for humans.
Because of the limited length of this essay, I will hereinafter confine myself to the debate of syntax and semantics, as well as a closely related discussion of meaning. ¹
John Searle, one of the best-known critics of Strong AI, draws heavily upon the difference between syntax and semantics. His most famous argument, the Chinese Room argument (Searle, 1984), exemplifies this well:
It’s a thought experiment that – in short – considers a non-Chinese-speaking person who is locked inside a room. This person is given multiple batches of Chinese writing, representing knowledge and questions, as well as instructions in English (which they do understand). Following these instructions, they output Chinese symbols, as determined only by the instructions and the given Chinese batches. However, the person doesn’t understand these batches at all; they can only match the symbols against other Chinese symbols purely by their shape and then manipulate them according to the English instructions. Nonetheless, if the instructions and batches of Chinese writing are sophisticated and comprehensive enough, from the outside it will seem as if there were an actual Chinese speaker inside the room.
In this thought experiment, the person represents the workings of a computer, with the English instructions representing a computer program that describes how to manipulate data, i.e. the Chinese batches of text.
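The purely syntactic character of this setup can be made concrete in a few lines of code. The following is a deliberately crude sketch: the “rule book” is a hypothetical lookup table (all entries invented here for illustration) that matches input symbols by shape alone and emits a canned reply, without any notion of what the symbols mean.

```python
# A toy "Chinese Room": symbols are matched purely by shape (string
# equality) against a rule book; nothing about their meaning is used.
RULE_BOOK = {
    "你好吗": "我很好",  # "How are you?" -> "I am fine"
    "你是谁": "我是人",  # "Who are you?" -> "I am a person"
}

def room(question: str) -> str:
    """Answer by table lookup alone; no symbol is 'understood'."""
    return RULE_BOOK.get(question, "请再说一遍")  # fallback: "please repeat"

print(room("你好吗"))  # -> 我很好
```

To an outside observer receiving only the replies, a sufficiently large rule book could look like understanding – which is exactly the intuition the thought experiment trades on.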
Several implications have been drawn from this or similar thought experiments. For example, Searle’s own interpretation is that a computer, i.e. the person locked inside the room, can never really understand Chinese, but only simulate this understanding. It could manipulate symbols by following given instructions and maybe do so in a way that would seem like real understanding to outsiders, but it wouldn’t derive any meaning from manipulating the symbols.
However, Searle seems to mix up the analogy here, as the person in the room only corresponds to the CPU of a computer, which in turn has never been claimed to understand the data it works on. Instead, the whole room would have to be considered, and for the room as a whole, denying understanding is not as easy.
Alan Turing also took the opposing side when he originally proposed his famous Turing test, attributing the ability to think to a machine that would eventually pass it (see Turing, 2009, and Oppy and Dowe, 2020).
According to Searle, computers will always and inherently lack a connection to the real world, which he claims to be mandatory for real understanding. Merely manipulating syntax, he argues, does not suffice to capture semantics.
However, this statement raises the question of what such a connection would have to look like. Searle claims that human thoughts “concern states of affairs in the world” (Searle, 1984), without saying a word about how this concern comes about. He doesn’t make clear in what way human thought is any different from the thought-like process depicted in his Chinese Room argument.
For example, when a human thinks about a real, existing apple, it is assumed that there’s a mapping from reality to a representation in the human mind, taking place via sensory input. Searle doesn’t explain what makes this mapping qualitatively different from any other mapping that might exist even in the Chinese Room scenario. In a way, our minds are all locked up in their own rooms, separated from reality and acting only on the information that comes through, in the form of little snippets of sensory input.
In essence, Searle holds that without some unspecified connection to its object, a thought can only remain pure syntax, with no semantics – i.e. meaning – attached to it.
Marvin Minsky approaches this question in (Minsky, 1982). He interprets human understanding as the formation of statistical networks in the brain, and interpretation as the linking of one kind of information to another. This also makes sense from a biological perspective, which is in turn replicated in connectionist models; see (McCulloch and Pitts, 1943) and (Rosenblatt, 1958).
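To make the connectionist idea tangible, here is a minimal toy example in the spirit of Rosenblatt’s perceptron – a single artificial neuron that learns the logical OR function by adjusting its weights from errors. It is a sketch for illustration only, not a model of the brain.

```python
# A single Rosenblatt-style perceptron learning logical OR.
weights = [0.0, 0.0]
bias = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def predict(x):
    # Weighted sum followed by a hard threshold.
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Perceptron learning rule: nudge weights in the direction of the error.
for _ in range(10):  # a few passes over the data suffice for this toy task
    for x, target in data:
        error = target - predict(x)
        weights[0] += error * x[0]
        weights[1] += error * x[1]
        bias += error

print([predict(x) for x, _ in data])  # -> [0, 1, 1, 1]
```

Nothing in this process involves a “true meaning” of OR; the network’s behavior emerges solely from linking inputs to outputs – which is precisely Minsky’s point about understanding as the formation of such links.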
Although the opposite is often implicitly assumed by society in general, there’s no such thing as one true meaning that’s inherent to reality. Minsky also recognizes this, when he mentions that “our culture tries to teach us that a meaning really ought to have only a single, central sense. But if you programmed a machine that way, then, of course it couldn’t really understand. Nor would a person either, since when something has just one meaning then it doesn’t really ‘mean’ at all.” His remarks also offer direct opposition to Searle’s idea that thoughts require some kind of special connection to reality in order to really mean something. Minsky states that even “in the human condition, our mental contact with the real world is really quite remote. The reason we don’t notice this, and why it isn’t even much of a practical problem, is that the sensory and motor mechanisms of the brain […] ensure enough developmental correspondence between the objects we perceive and those that lie out there in raw reality.”
Searle makes another argument: “brains are biological engines; their biology matters.” According to him, “it is not, as several people in artificial intelligence have claimed, just an irrelevant fact about the mind that it happens to be realized in human brains.” This assertion, however, raises the question of which substantial differences distinguish the human brain from any other machine, especially from the universal machine that is the digital computer. What can the brain do that computers can’t – on a lower level than thinking?
There have certainly been groundbreaking advances in machine learning recently; GPT-3, for example, is quickly approaching the Turing test boundary (see Elkins and Chun, 2020, and Brown et al., 2020). Still, a lot of questions in the fields of cognitive neuroscience and machine learning, concerning both the exact workings of the human brain and the full capabilities of AI, remain unanswered to date and leave room for speculation.
Two of the debate’s most prominent voices, Searle and Minsky, offer various arguments and examples for their respective sides. I personally believe that Searle fails to substantiate his very restrictive claims. After all – freely adapted from Laplace – extraordinary claims require extraordinary evidence. I thus side with Minsky’s perspective and come to the conclusion that the relevant differences between a human brain and an adequately programmed computer are less significant than is generally assumed. This in turn has exciting implications for possible future developments in the field of machine learning, especially when comparing the trial-and-error principle of evolution, which brought forth human intelligence over hundreds of thousands of years, with the principle of intelligent design that fabricates new computer architectures and algorithms.
¹ Other worthwhile points to discuss might involve free will and its relation to determinism, solipsism, or whether consciousness is an illusion.
Raymond S. T. Lee. AI fundamentals. In Artificial Intelligence in Daily Life, pages 19–37. Springer, 2020.
John Searle. Can computers think? In Minds, Brains, and Science, pages 28–41. Harvard University Press, 1984.
 Alan M Turing. Computing machinery and intelligence. In Parsing the turing test, pages 23–65. Springer, 2009.
 Graham Oppy and David Dowe. The Turing Test. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2020 edition, 2020.
Marvin L. Minsky. Why people think computers can’t. AI Magazine, 3(4):3–15, 1982.
Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4):115–133, 1943.
F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.
Katherine Elkins and Jon Chun. Can GPT-3 pass a writer’s Turing test? Journal of Cultural Analytics, 2020.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Alan M. Turing. Can digital computers think? In The Turing Test: Verbal Behavior as the Hallmark of Intelligence, pages 111–116, 2004.
If you want to give me some feedback or share your opinion, please contact me via email.
© Niklas Bühler, 2021