We will develop new worlds and beings with powers greater than our own. How do we steer them toward good?
Over the past two decades, the philosopher David Chalmers has established himself as a leading thinker on consciousness. He began his academic career in mathematics but slowly migrated toward cognitive science and philosophy of mind. He eventually landed at Indiana University working under the guidance of Douglas Hofstadter, whose influential book “Gödel, Escher, Bach: An Eternal Golden Braid” had earned him a Pulitzer Prize. Chalmers’s dissertation, “Toward a Theory of Consciousness,” grew into his first book, “The Conscious Mind” (1996), which helped revive the philosophical conversation on consciousness. Perhaps his best-known contribution to philosophy is “the hard problem of consciousness” — the problem of explaining subjective experience, the inner movie playing in every human mind, which in Chalmers’s words will “persist even when the performance of all the relevant functions is explained.”
Chalmers is now writing a book on the problems of a technological future we are fast approaching: virtual reality, digitally uploaded consciousness, artificial intelligence and more. I met with David Chalmers in his office at New York University to discuss this future and how we might relate to it.
Prashanth Ramakrishna: Artificial general intelligence, A.G.I., is a system capable, like us humans, of performing open-ended tasks independent of specific problems or contexts — conversation, common-sense reasoning, experiential learning and so on. The popular science fiction example is HAL 9000 from the film “2001: A Space Odyssey.” Is A.G.I. achievable? And if it is, does our civilizational conversation seem sufficiently robust on this topic?
David Chalmers: I think artificial general intelligence is possible. Some people are really hyping up A.I., saying that artificial general intelligence is just around the corner in maybe 10 or 20 years. I would be surprised if they turn out to be right. There has been a lot of exciting progress recently with deep learning, which focuses on methods of pattern-finding in raw data.
Deep learning is great for things we do perceptually as human beings — image recognition, speech recognition and so on. But when it comes to anything requiring autonomy, reasoning, decisions, creativity and so on, A.I. is only good in limited domains. It’s pretty good at playing games like Go. The moment you get to the real world, though, things get complicated. There are a lot of mountains we need to climb before we get to human-level A.G.I. That said, I think it’s going to be possible eventually, say in the 40-to-100-year time frame.
Once we have a human-level artificial intelligence, there’s just no doubt that it will change the world. A.G.I.s are going to be beings with powers initially equivalent to our own and before long much greater than our own. To that extent, I’m on board with people who say that we need to think hard about how we design superintelligence in order to maximize good consequences. How robust is the current conversation? I find there’s more and more attention among A.I. researchers to making A.I. beneficial in the short term and consistent with a good future for humanity in the long term.
P.R.: I caught you using the word “beings.” Are you equating general intelligence with consciousness?
D.C.: I like to distinguish between intelligence and consciousness. Intelligence is a matter of the behavioral capacities of these systems: what they can do, what outputs they can produce given their inputs. When it comes to intelligence, the central question is, given some problems and goals, can you come up with the right means to your ends? If you can, that is the hallmark of intelligence. Consciousness is more a matter of subjective experience. You and I have intelligence, but we also have subjectivity; it feels like something on the inside when we have experiences. That subjectivity — consciousness — is what makes our lives meaningful. It’s also what gives us moral standing as human beings.
P.R.: Even if consciousness is a reproducible epiphenomenon of the right information-processing system endowed with the right representational structures, there will always be an opaque veil separating behavior seemingly inspired by subjective experience from behavior actually inspired by subjective experience. If our ethical obligations to objects matter only insofar as those objects are conscious, then how should we deal with our inherently ambiguous ethical obligations toward A.I.?
D.C.: In philosophy this is the ancient problem of other minds. How do you know whether another person or system in general has a mind? I know that I have a mind. Descartes says, “I think. That’s the one thing I’m certain of. Therefore, I am.” But, when it comes to other people and to computers, you’re not going to have that degree of certainty.
What should the criteria be? Is just doing sophisticated things enough to convince you that a system is conscious? Winning a game of Go certainly is not. Being able to carry on an intelligent conversation would be a start. Maybe an A.I. system that could describe its own conscious states to me, saying, “I’m feeling pain right now. I’m having this experience of hurt or happiness or sadness” would count for more. Maybe what would count for the most is feeling some puzzlement at its mental state: “I know objectively that I’m just a collection of silicon circuits, but from the inside I feel like so much more.”
P.R.: Some people might argue that if I can be sure that I’m conscious and if there are varying degrees of similarity between me and other beings of potential consciousness, then I can make probabilistic judgments about the consciousness of those beings.
D.C.: For me, the way to get some purchase here is to think about gradually transforming yourself into an A.I. You be the A.I.: gradual uploading. Gradually replace your neurons, one at a time, with computer parts or upload them to a computer. You start as a fully biological system, and then you’re three-quarters biological and one-quarter silicon, and then half biological and half silicon, then one-quarter biological and three-quarters silicon, and finally you’re a fully silicon system. If you make it a functionally perfect simulation throughout, then you’re going to be there till the end still saying, “Yup, I’m still home!” If it’s a proof, it’ll only be a proof for you. Someone else can still say, “I think you turned into a zombie.”
P.R.: One can imagine not simply becoming the A.I. but nondestructively merging with it. In a future where everyone is augmented by A.I., where we all have perfect computing power, perfect memory, perfect ability to synthesize and deploy knowledge, would collaboration become obligatory? There would, after all, be no obstacle except indeed collaboration to solving most every problem regarding human welfare.
D.C.: Hopefully we’ll find a good solution to climate change just like that. And 30 seconds later, dissolve the Israeli-Palestinian conflict? Maybe that’s a harder one. A lot of the irrationalities we have are collective. Some of our irrationalities are tied to our goals, to me rationally wanting my goal and you rationally wanting your goal. Often a solution is that we both get our second most desired outcome, or our third, and so on. People aren’t good at settling for this type of solution, though. Maybe you’d need a whole new module for compromise, for finding goals that we can universalize. But that goes beyond simple means-end instrumental intelligence and more into something reflective like figuring out what our goals should be.
Immanuel Kant thought that morality is part of rationality. There’s the thought that a superintelligent A.I. will turn into a super-moral one, that it will turn into a sort of Kantian being that will only take on goals it can universalize for everyone. That’s a very speculative view of how A.I. will be.
P.R.: Where between the Kantian being and us inventing our way into our own demise do you locate your own conception of the future?
D.C.: I value human history and selfishly would like it to be continuous with the future. How much does it matter that our future is biological? At some point I think we must face the fact that there are going to be many faster substrates for running intelligence than our own. If we want to stick to our biological brains, then we are in danger of being left behind in a world with superfast, superintelligent computers. Ultimately, we’d have to upgrade.
The other way it could go is that new artificial intelligences take over the world and there’s no place for humanity. Maybe we’re relegated to some virtual world or some designated part of the physical world. But you’re right, it would be a second-class existence. At the very least maybe they keep us around as pets or for entertainment or for history’s sake. That would be a depressing outcome. Maybe they’d put us in virtual worlds, we’d never know, and we’d forget all this stuff. Maybe it’s already happened and we’re living in one of those virtual worlds now. Hey, it’s not so bad.
P.R.: This is probably a good time to define what virtual reality is, just because there are multiple ways that we use the word “virtual.”
D.C.: The word “virtual” originally meant a “fake” or an “as if.” A virtual tie is “as if” it were a tie. Over the years, though, the word “virtual” has evolved. Now it means something like “computer-generated.”
P.R.: What, more precisely, is the functional characterization of reality that finds its analog in virtual reality?
D.C.: I take your question to be, if I can rephrase it: In what sense is normal reality real, and can virtual reality be real in that way? It’s a great philosophical question. George Berkeley, the great Irish philosopher, said, “To be is to be perceived.” If something looks like a duck, sounds like a duck and so on, it’s a duck. That’s idealism: The world is all in your mind.
The dominant view, though, is that reality is outside your mind. To be real, you need something more than just appearances; you need some underlying powers or potentiality. The great Australian philosopher Samuel Alexander said, “To be real is to have causal powers” — to be something that actually makes a difference. Philip K. Dick once said, “A real thing is something that doesn’t go away when you stop believing in it.” If you’ve got something that is independent of your mind, which has causal powers, which you can perceive in all these ways, to me you’re a long way toward being real.
Things in virtual realities, at least in principle, have all those properties. Say you’re in a virtual world. There are objects there that you can perceive around you. In a virtual world a virtual tree can fall even if I’m not around. A virtual tree has causal powers. A virtual tree falling can cause people to have experiences. It can break something that it falls on in the virtual world, and it can be experienced. Virtual reality is just a different form of reality. But it’s still perfectly real.
P.R.: Why do you think the original intuition on this topic was precisely the opposite, that virtual reality is nothing but instantiated fantasy?
D.C.: This goes back a long way in the history of philosophy. René Descartes said, “How do you know you’re not being fooled by an evil demon right now into thinking this is real when none of it’s real?” Descartes’ evil-demon question is kind of like the question of a virtual reality. The modern version of it is, “How do you know you’re not in the matrix? How do you know you’re not in a computer simulation where all this seems real but none of it is real?” It’s easy for even a movie like “The Matrix” to pump the intuition in you that “this is evil. This isn’t real. No, this is all fake.”
The view that virtual reality isn’t real stems from an outmoded view of reality. In the Garden of Eden, we thought that there was a primitively red apple embedded in a primitive space and everything is just as it seems to be. We’ve learned from modern science that the world isn’t really like that. A color is just a bunch of wavelengths arising from the physical reflectance properties of objects that produce a certain kind of experience in us. Solidity? Nothing is truly solid out there in the world. Things are mostly empty space, but they have the causal powers to produce in us the experience of solidity. Even space and time are gradually being dissolved by physics, or at least being boiled down to something simpler.
Physical reality is coming to look a lot like virtual reality right now. You could take the attitude, “So much the worse for physical reality. It’s not real.” But I think, no. It turns out we just take all that on board and say, “Fine, things are not the way we thought, but they’re still real.” That should be the right attitude toward virtual reality as well. Code and silicon circuitry form just another underlying substrate for reality. Is it so much worse to be in a computer-generated reality than what contemporary physics tells us? Quantum wave functions with indeterminate values? That seems as ethereal and unsubstantial as virtual reality. But hey! We’re used to it.
P.R.: I’m wondering whether it’s useful to say that virtual reality isn’t simply an alternate reality but is rather a sub-reality of the one we normally occupy.
D.C.: That I think is fair. It’s kind of a multiverse. None of this is saying there’s no objective reality. Maybe there’s an objective cosmos encompassing everything that exists. But maybe there’s a level-one cosmos and people create simulations and virtual realities within it. Maybe sometimes there are simulations within simulations. Who knows how many levels there are?
I once speculated that we’re at level 42. Remember that in “The Hitchhiker’s Guide to the Galaxy” they programmed a computer to find the answer to the ultimate question of life, the universe, everything. Then, after years, the computer said, “The answer is 42.” What question could possibly be important enough that this could be the ultimate question and the answer could be a simple number? Well, maybe the question was “What level of reality are we at?”
P.R.: Do you imagine these virtual worlds tracking unique courses of history of their own? Evolution of political systems, evolutions of culture and so on? My intuition is that they will.
D.C.: There have already been some virtual worlds that have been recapitulations of the history of political philosophy. In the early ’90s, MUDs — multi-user domains — started off as dictatorships or anarchies. There was a guy who created them, and he was the dictator. Then he appointed some wizards and that was the aristocracy. But the wizards didn’t want all this power, so they threw it all onto democracy. Most virtual worlds that exist now are corporatocracies, owned and ruled by corporations. Linden Lab owns Second Life, for example.
P.R.: That poses a problem in and of itself, though. Ownership of physical reality is democratized. There’s no overlord who can turn it on and off or change the rules arbitrarily. Isn’t the lack of democratized ownership a serious risk of virtual worlds?
D.C.: There are, of course, going to be many virtual environments. I don’t know whether we should think about this as, “You choose your virtual world and then you’re always stuck under the governance of the one corporation that owns it.” It may be more likely that we’ll chop and change between many virtual worlds quite regularly. There will be a virtual world you go to for your work, a different one for recreation and a different one for education. No one will run cyberspace, which was William Gibson’s term for virtual reality as a whole.
P.R.: Lastly, I’m wondering whether you think that ultimately there will be a transition from physical reality to virtual reality on a civilizational scale. We’ve talked about artificial intelligence, digital uploading, digitally uploaded selves living on in virtual reality, the transient boundary between physical reality and virtual reality. Will the lights ever just be turned out in this house?
D.C.: It would be surprising. Say physical reality is level-zero reality. There are always going to be resources to be found there. I suspect we’re always going to need more and more resources. “Let’s blow up the sun and use it to power our computers. Oh wait, that’s not enough! We’re going to need to go elsewhere through the galaxy.” It’s easy to imagine a resource-dependent technological arms race. Such a race would require somehow getting to the outside, level-zero world for resources.
What I can easily imagine is that 99 percent of the population lives in the virtual world, especially if the physical world becomes dystopic, after nuclear destruction or terrible climate change, say. The virtual world will be much more interesting and pleasant. There will be some hankering. There will be people who just like being in the level-zero world, just like people who like living in cities or people who like living in the country. We’ve got an increasingly urbanized society now, but that doesn’t mean everybody lives in the city. Maybe we’ll have an increasingly virtualized society. But that doesn’t mean everybody is going to be virtualized.
I don’t think virtual worlds are going to be a panacea for problems of humanity. They’ll be like the internet. It’s led to wonderful things. It’s led to awful things. My prediction is that they will have room for the full range of the human condition. In that sense, it will at least be on par with physical reality. Maybe we’ll be able to find some distinctive ways in which virtual reality is better and allows more freedom, more justice. Or maybe we won’t. I think it’s at least an open and exciting future.