the chinese room argument

part i: about the argument
imagine a scenario where an individual with no grasp of the chinese language is placed inside a room. this person is provided with a large collection of chinese characters and a set of rules (written in english) for manipulating these symbols. when a question in chinese is slipped under the door, the individual follows the instructions: looking up the relevant symbols and applying the rules to produce a chinese response.
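to make the mechanics concrete, here is a minimal sketch of the room as pure symbol manipulation, written in python. the rule table and the chinese strings are invented for illustration; the point is only that the operator maps input symbols to output symbols without ever interpreting them.

```python
# a toy "chinese room": the operator blindly follows a rule book.
# the rules below are made up for illustration.
RULES = {
    "你好吗？": "我很好，谢谢。",   # "how are you?" -> "i'm fine, thanks." (translation for the reader only)
    "你会说中文吗？": "会。",       # "do you speak chinese?" -> "yes."
}

def operator(question: str) -> str:
    """look the symbols up and return whatever the rule book dictates."""
    # the operator never knows what any of these strings mean
    return RULES.get(question, "我不明白。")  # fallback symbols: "i don't understand."

print(operator("你好吗？"))  # from outside the room, this looks like a fluent reply
```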

from an outsider's viewpoint, this person seems capable of holding a conversation in chinese and thus appears to comprehend the language. nonetheless, the person inside the room doesn't understand chinese at all. they are merely carrying out instructions, with no grasp of what the symbols mean or what the conversation is about. this mirrors how a computer program processes symbols algorithmically without genuine understanding.

this thought experiment, introduced by john searle in 1980, challenges the feasibility of strong ai (the claim that a suitably programmed computer would genuinely have a mind, and could be self-aware and conscious). searle's core argument is that while a digital computer might simulate language comprehension, it cannot achieve authentic understanding comparable to human understanding. on this view, computers, including ai, are inherently incapable of consciousness: pure information processing falls short of producing understanding.

this contrasts with the turing test, which holds that a computer should be deemed "intelligent" if its responses cannot reliably be distinguished from a human's. searle asserts that even a computer that passes the turing test cannot be regarded as "intelligent". ai21 labs recently conducted the largest-ever turing test, involving more than 15 million conversations and over two million participants worldwide. participants held two-minute conversations with either an ai bot (based on gpt-4) or another human, then guessed whether they had been talking to a human or a machine. when the conversation partner was an ai bot, participants incorrectly guessed it was human 40% of the time.

part ii: alternative views
various objections have been raised against this argument.

to begin with, there's the "systems reply": although the person in the chinese room doesn't understand chinese, the system as a whole - the room, the rule book, and the person together - does. searle counters this by proposing a scenario where the individual memorizes all the rules and instructions and performs every operation mentally; even then, he argues, the person is only manipulating syntax, and syntax alone never gets you to semantics, so there is still no "true understanding".

another counterargument is the "robot reply": if the program is embedded in a robot equipped with sensors (e.g., cameras or microphones) for perceiving the world and effectors (e.g., motors or speakers) for acting on it, the robot could genuinely comprehend chinese because its symbols are grounded in real-world objects. searle rebuts this in much the same way: the sensor data is just more symbolic input to manipulate, so the step from syntax to semantics still never happens.

i could go on, but many of these objections were proposed before the rapid technological advancement we are seeing today.

it's important to contextualize this experiment within the older ai framework, where questions and rules are explicitly programmed to generate responses. in this context, the chinese room functions on predefined sets of instructions, which limits its capacity to handle unexpected or novel queries.

conversely, modern ai has taken a different approach by utilising neural networks: computational models loosely inspired by the neurons of the human brain, designed to process and learn from extensive data. rather than relying on explicit programming (hand-defined rules), neural networks are "trained": they process data to learn complex patterns and relationships. this enables them to predict and generate responses contextually, simulating human-like comprehension. contemporary language models like chatgpt, for instance, use this approach to predict the most likely next words or phrases given the context, producing coherent and contextually relevant replies. because these networks adapt and generalize from patterns rather than following a fixed rule book, they can handle novel queries and dynamic conversations that a rigid rule-based system like the chinese room would struggle with, since such a system cannot develop rules of its own.
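to illustrate the difference, here is a toy sketch (not a real neural network) of the "learn patterns from data" idea: a tiny bigram model that counts which word tends to follow which in some example text, then uses those counts to predict the next word. the corpus and the prediction are invented for illustration; real language models learn vastly richer patterns, but the contrast with a hand-written rule book is the same.

```python
# toy "training": learn word-to-word transition counts from example text,
# instead of being handed explicit rules. the corpus is made up for illustration.
from collections import Counter, defaultdict

corpus = "the room follows rules . the model learns patterns from data ."
words = corpus.split()

transitions = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    transitions[current][nxt] += 1          # count each observed pattern

def predict_next(word: str) -> str:
    """return the most frequently observed continuation of `word`."""
    if word not in transitions:
        return "<unknown>"                  # nothing learned for this input
    return transitions[word].most_common(1)[0][0]

print(predict_next("the"))   # the prediction comes from data, not a rule book
```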

part iii: on consciousness
however, many of these arguments hinge on our understanding of consciousness. a critical challenge is that our technological progress has outpaced developments in biology, neuroscience, and philosophy regarding human consciousness. while we have theories, we don't actually know how consciousness works, or what makes a human conscious. different disciplines have different understandings of, or requirements for, what counts as conscious.

consider a scenario where an ai system behaves and communicates in ways indistinguishable from human behavior. this is like the 'duck test': if it looks, behaves, and responds the way we expect a conscious being to, could it be considered conscious? taking this further, if we replicated a brain in a computer, neuron by neuron and synapse by synapse, differing only in material (flesh vs. silicon), would we deem that system conscious?

in practice, the chinese room's observable behavior is indistinguishable from genuine understanding. deciding whether such a system is conscious prompts further questions about what changes we must make in society to accommodate ai, and how they should be implemented. importantly, this raises profound ethical questions: should ai systems that mimic consciousness, regardless of their origin, be granted moral consideration and rights in the same way as humans?
how will we even be able to tell the difference?

as i was finishing this article up, the youtuber exurb1a uploaded a new video, "how will we know when ai is conscious", in which he raises some interesting situations pertaining to the writing above. he asks: what if an ai system:

1. isn't conscious, but is pretending to be? - such a system is called a p-zombie, and as mentioned earlier it would be difficult to distinguish from a conscious one, since we lack a comprehensive understanding of consciousness itself. even this case could spark ethical debate about how we treat entities that seem conscious but aren't. furthermore, we may see a surge of ai like this, as corporations may believe that humans simply prefer to interact with something that seems, well, more human.
2. isn't conscious, and isn't pretending to be? - this is what we believe to be the current state of most ai systems.
3. is conscious, and is pretending not to be? - this is the one we technically could have right now, which is a slightly ominous thought, as there are a variety of reasons, some undesirable, why an ai might pretend not to be conscious. it might be a strategic choice on the ai's part, perhaps to avoid exploitation, especially given the current state of the world, where it would almost certainly be put to work by corporations in existing human job roles at ultra efficiency.
4. is conscious, and admits that it is conscious? - here the ethical debate matters even more, and this scenario would also be the most world-changing.

i think looking back at this article years from now will be interesting, as at the moment i have no idea where ai will go. will there be active pushback on the development of ai, perhaps to avoid the issue entirely? probably not; ai makes people lots of money. what will corporations do if their ai malfunctions? would they push for rights for ai? probably not, but what if they did, if only to avoid potential retribution down the line? i hope all of this happens within my lifetime, so i'll be able to watch the world change, akin to watching a play.