The Hard Problem of Consciousness and Repercussions for AGI
Lecture Notes from the 109th Diogenesis Lecture on 2022/5/15 • Read time: 5 min

This lecture was four and a half hours long; the outline here by no means captures everything that was discussed, and these are just some notes. A very large body of literature has built up fairly rapidly around the topics in today's lecture, and if you focus on the thought experiments we talk about at the end, they will provide you an incredibly concise way to anger AGI researchers. More narrowly, they will let you launch invective against physicalist, representational, functionalist, behaviorist, reductive, and many other otherwise-standard theories of consciousness and mind. It's important to understand that every framework for consciousness that people have formalized falls apart under the scrutiny of the problems we will discuss today, and this raises a real concern, one I hope to limn, as to whether or not consciousness exists in the first place. Everything I'll be going over will be linked in the lecture notes provided after we're done, and I'll be reading directly from lots of materials; hopefully, though, the order in which I make connections between topics, and the direction in which those connections point, will evince a somewhat original and maybe more nuanced view, one that doesn't just pose problems for AGI research but opens a question as to whether or not we ourselves have consciousness.

THE HARD PROBLEM OF CONSCIOUSNESS (aka the Mind-Body Problem):
- Wikipedia's very basic introduction to the topic offers a good two-sentence summary. IEP's overview is better.
- Chalmers quote: "... even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?"
- Thomas Nagel is also a very heavy hitter in this field of study and treated this issue under its older name, the Mind-Body Problem.
- More Chalmers: Chalmers argues that experience is more than the sum of its parts; in other words, experience is irreducible. Unlike a clock, a hurricane, or the easy problems, descriptions of structures and functions leave something out of the picture. These functions and structures could conceivably exist in the absence of experience, or alternatively exist alongside a different set of experiences.

QUALIA:
- SEP Consciousness: the first two paragraphs of §4.2 illustrate the link, or bridge, qualia provide to consciousness and experience.
- SEP Qualia: the first paragraph of the page for the definition; §5 for the Explanatory Gap; §6 as a possible counter, though it can be skipped for the talk.
- What it means to not experience: Philosophical Zombies.

THOUGHT EXPERIMENTS:
- The Molyneux Problem; §3 and the end of §5 go over empirical tests.
- The Chinese Room Argument; §2.1 on Leibniz's Mill, and say something about the relational capacities of the parts being the likely residence of the thought; §2.3 is good too, with §3 and §4 if people want to talk more. The narrow argument is quoted below, with a formal sketch after it.

> We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.
> 1. If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
> 2. I could run a program for Chinese without thereby coming to understand Chinese.
> 3. Therefore Strong AI is false.

- Searle's original paper.
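For those who want the logical skeleton laid bare, here is a minimal sketch of the reductio above, written in Lean. It is only an illustration of the modus tollens structure: the names `System`, `searle`, `StrongAI`, `RunsChineseProgram`, and `UnderstandsChinese` are hypothetical stand-ins invented for this sketch, and the existential over programs in premise 1 is collapsed to a single fixed program for readability.

```lean
-- A minimal sketch of the reductio's shape, not anything from Searle or the SEP.
-- `System`, `searle`, `StrongAI`, `RunsChineseProgram`, and `UnderstandsChinese`
-- are hypothetical stand-ins; the existential over "programs for Chinese" in
-- premise 1 is collapsed to one fixed program to keep the statement short.
variable {System : Type} (searle : System)
variable (StrongAI : Prop)
variable (RunsChineseProgram UnderstandsChinese : System → Prop)

/-- Premise 1: given Strong AI, running the program suffices for understanding.
    Premise 2: Searle runs the program yet does not understand Chinese.
    Conclusion: Strong AI is false (a plain modus tollens). -/
theorem strong_ai_refuted
    (p1 : StrongAI → ∀ s : System, RunsChineseProgram s → UnderstandsChinese s)
    (p2 : RunsChineseProgram searle ∧ ¬ UnderstandsChinese searle) :
    ¬ StrongAI :=
  fun hStrongAI => p2.2 (p1 hStrongAI searle p2.1)
```

The philosophical weight sits entirely in the premises, of course; the sketch only shows that if you grant them, the conclusion follows.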

OUR OWN SELVES, AND THE EXPERIENCE OF SELF:
I want to point out here that we can apply all of the prior problems back onto our own consciousness and onto what we take as a given, or self-evident, experience of self; this is to say that when we explain how humans arrive at consciousness, we seem to reach the conclusion that, just as computers would never be conscious, neither would we.
- Note the critical lack of self-reflection that the vast majority of humans maintain; the notion of a 'self-thought' given earlier is not attained by most people unless they are directly prompted by someone else.
- Personhood seems perverted here, and this would defeat the capacity for any kind of robust consciousness, making most humans 'non-people', or at least without any special property akin to 'consciousness'.
- Propose some kind of reverse-Turing test for humans to see if they are truly conscious. Experiencing sadness is not enough to have consciousness. Being conscious means you have not only the syntax of the world but also the semantics of the world; semantics here would mean you understand what sadness is on top of merely experiencing it. So it is not enough to experience sadness; you must also know what it is like to have, or what it means to be something that experiences, sadness. Similarly, if you understand what it is like to be something that experiences sadness, you can also understand what it is like to be something that experiences happiness, and if you understand what it means to be happy and you don't stop doing the thing that makes you sad, you aren't conscious. This is why people with clinical depression aren't conscious.