In this series, I’ll explore speculative fiction, in particular science fiction or fantasy, as a philosophical tool by interviewing philosophers who write in these genres (see here for part 1). This second part of the series is with Eric Schwitzgebel, a professor of philosophy at UC Riverside, with a PhD from UC Berkeley in 1997. His root specialty is philosophy of psychology, but he has been branching out from this into moral psychology, epistemology, and metaphysics.
You can read a short story by him here.
Can you tell me something about how you got into writing science fiction?
I was an avid reader of science fiction in high school, and I always loved to write poetry, plays, and stories. I set SF mostly aside in graduate school and as an Assistant Professor, but then about ten years ago discovered Greg Egan and realized that there was a lot of philosophical potential in recent SF that I hadn't fully appreciated, and I got into reading it again. On my blog, The Splintered Mind, I started writing occasional pieces of flash fiction as a way of presenting philosophical issues. In 2013, the SF writer R. Scott Bakker, a sometimes-reader of my blog, noticed one and wrote an alternative ending. We decided to work it up for publication, and Nature accepted it. The experience was positive, so I decided to try writing up other stories for publication in SF magazines, and have had some success placing stories in good venues.
Can you clarify what you mean by "there [is] a lot of philosophical potential in recent SF"?
Okay, here’s a really bad cartoon history of English-language science fiction in the past eighty years: In the beginning, it was about space exploration, alien invasions, and humanoid robots. In the 1980s and especially the 1990s and 2000s, it started more seriously exploring the range of possibilities that computation and neuroscience open up. Why limit oneself to the humanoid form and the biologically given brain? Why not duplicate oneself, live in a virtual environment, radically alter one’s own psychological or physical structure? These issues weren’t entirely ignored in the space-exploration-dominated era of science fiction; but in the wake of the cyberpunk revolution, in the works of Greg Egan, Neal Stephenson, Ted Chiang, and Charles Stross, and movies like The Matrix, these kinds of issues really started to take center stage in science fiction. Or rather, more cautiously, I should say that the power of science fiction to explore these issues in a philosophically interesting way came vividly to my own attention, as a then-casual reader of science fiction. Such work very clearly raises philosophical issues about personal identity and what are, and should be, our core values.
For example, in “Reasons to be Cheerful”, Diaspora, and Permutation City, Greg Egan’s characters are empowered to directly determine what it is they value. They can tweak their own parameters or desires in any way they please; and in the latter two books, the characters have vast amounts of time to live. Do you want to love woodworking for 10,000 subjective years? What do you want your sexual preferences, if any, to be? We are accustomed to being stuck with the values and personality we have, with only small and indirect sorts of control over our valuational systems. What happens or should happen when that constraint is removed?
The idea of creating duplicates is also nicely explored in recent science fiction – for example in Linda Nagata’s The Bohr Maker and David Brin’s Kiln People. What if you could make a duplicate or partial duplicate of yourself who could then operate independently of you and choose whether or not to merge back into your personality? What could or should the values of such an entity be? How does that fit with philosophical conceptions of personal identity?
Although the issue of robot rights is a classic even in pre-cyberpunk science fiction, issues about the rights of artificial intelligences become more interesting and complex once one sets aside the paradigm of AI as either conventionally embodied humanoid robots or hyperintelligent computer boxes like HAL from Clarke’s 2001. One issue that really interests me – and which I think still has much more potential to be explored in science fiction – is the moral relations between beings in simulated worlds (“sims”) and the beings who run those sims, who have god-like power over the beings inside the simulated worlds. This is a central theme in some of my own work – for example in “Reinstalling Eden” (co-authored with R. Scott Bakker) and “Out of the Jar”. These issues connect with issues in theology (if we consider the sim-managers to be gods – as I think we should, taking the perspective of the sims), in animal rights and human enhancement, in the nature and value of personhood, and also in connection with the fundamental ethical question of what kind of world we aspire to live in. Considering these possibilities science-fictionally can, I think, help us break out of our usual philosophical ruts. Even if none of the envisioned possibilities ever becomes actual, a trip through these hypotheticals will give us new insight into these classic philosophical topics. It will give us at least, I think, a vivid appreciation of the contingency of our way of life, perhaps akin to the contingency we can feel when we read good cultural anthropology.
You mention that exploring questions such as the moral responsibilities of simulators toward their simulated beings can help us break out of our philosophical ruts – how do you think fiction can accomplish this in a way that non-fiction cannot? Differently put: how can speculative fiction do things that speculative philosophical nonfiction prose cannot, especially if the latter helps itself to brief thought experiments?
First, although writers of philosophical nonfiction could perhaps in principle think through the full range of far-out science-fictional possibilities that are philosophically interesting, as a matter of fact they do not do so in a way that comes close to exhausting the interesting possibilities. Derek Parfit on personal identity, with his fission and fusion, his duplication and teleportation, and Nick Bostrom on the catastrophic possibilities of superintelligence – that’s really interesting stuff! But there’s not so much of that type of thinking in nonfiction philosophy that we academic philosophers can say, “No point in reading speculative fiction! Philosophers have already thought of all the philosophically interesting cases!” Just as a matter of contingent fact, there’s a lot of interesting science fiction out there that covers cases that philosophers haven’t really explored in depth. Reading science fiction, then, one can find cases that open up issues that philosophers haven’t really turned much attention to yet. It’s a fresh source of input.
One example from my own work is group consciousness. Group consciousness (as opposed to group intentionality) has received really very limited attention in philosophy in the last several decades. I know this because I did a pretty thorough search of the literature for my recently published paper on the topic (“If Materialism Is True, the United States Is Probably Conscious” in Philosophical Studies 2015). But of course it has often been discussed in science fiction! If I hadn’t read Vernor Vinge’s fascinating portrayal of group minds in A Fire upon the Deep, I don’t know that group consciousness would have occurred to me as a natural direction to extend my own work in philosophy of mind. But now that I’ve done it, the issue seems to be opening up. I’ve already read two pieces of nonfiction philosophy written in reaction to my essay on group consciousness, as well as lots of lively oral discussion. I’m optimistic, then, that this is a specific instance in which my reading science fiction has helped nurture ideas that philosophers could have been thinking about nonfictionally, but in fact were not thinking about (much). A case of cross-pollination from science fiction, maybe.
Second, fiction engages the imagination and the emotions in a way that standard philosophical expository prose tends not to do. (Some of the best philosophical writers – Dennett and Nietzsche, for example – can engage the imagination and emotions even in argumentative prose, but this is a rare ability!) I think there are both epistemic gains and epistemic risks in engaging the imagination and emotions in considering philosophical questions. Among the risks are bias and having one’s judgment excessively swayed by inessential specifics of the case as one happens to fill it out in imagination. On the other hand, the human mind is remarkably poor at thinking in pure abstractions. Really, we only start to get a grip on general principles (“knowledge is justified true belief”, “act on that maxim that you can at the same time will to be a universal law”) when we begin to think through concrete examples in imagination. We wouldn’t need to pepper our essays with paragraph-long examples if imagination were not a useful tool in philosophical reflection.
In this way of thinking, one might consider the paragraph-long philosophical example as a kind of midpoint between purely abstract propositions on the one hand and full-length stories, movies, or novels on the other hand. We would probably do well to bring the full diversity of modes of thinking to philosophical problems: abstract thought, sketchy paragraph-long examples, and full-blooded fictions. Why leave any one of these tools aside?
Furthermore, emotion seems especially valuable in thinking through moral issues. It’s one thing to think about the propriety of killing one person to save five others, if one does so in an emotionally disengaged way in the context of a three-sentence scenario involving a runaway boxcar. But one might have very different thoughts about the case if one were confronted with it in a full-length movie, with the protagonist on the tracks! I don’t think we should base our philosophy wholly on our reactions in such emotionally engaged cases. Sometimes you need to set such emotions aside and be colder in your evaluations. But only sometimes, I think! At other times, emotional engagement with philosophical issues is probably exactly what you want. At a minimum, you should have some sense of what your emotional reaction to a real-world case might be, if you were confronted with it; and fiction can give you something closer to a sense of that than the paragraph-long example can.
Ordinary “literary” fiction is great at helping you imagine and emotionally consider the range of cases that occur in the ordinary run of human life. Speculative fiction I would define as fiction that considers possibilities outside the ordinary run of human life – or human life in its current and historically known manifestations. Philosophers should be interested in engaging emotionally and imaginatively with both the actual and the hypothetical, and thus with both types of literature.
One type of hypothetical case that especially interests me, as I mentioned in my response to the previous question, is the moral status of artificially intelligent entities, like robots and sims. Like group consciousness – but even more so – this is an issue that has received extensive attention in science fiction but very limited attention in nonfiction philosophy. But it seems worth thinking about! And it seems worth thinking about not only in an abstract way, but in a way that engages our emotions and imaginations. How would we really react to a robot who seems to be conscious? What might the threats and opportunities be, both moral and prudential, if we created such beings? Science fiction is much farther advanced on this issue, right now, than is philosophy.
Thank you to Eric Schwitzgebel for participating!