The Splintered Mind's Journal
 

Below are the 5 most recent journal entries recorded in The Splintered Mind's LiveJournal:

    Friday, November 15th, 2013
    6:11 pm
    Skepticism, Godzilla, and the Artificial Computerized Many-Branching You
    Nick Bostrom has argued that we might be sims. A technologically advanced society might use hugely powerful computers, he says, to run "ancestor simulations" containing actually conscious people who think they are living, say, on Earth in the early 21st century but who in fact live entirely inside an advanced computational system. David Chalmers has considered a similar possibility in his well-known commentary on the movie The Matrix.

    Neither Bostrom nor Chalmers is inclined to draw skeptical conclusions from this possibility. If we are living in a giant sim, they suggest, that sim is simply our reality: All the people we know still exist (they're sims just like us) and the objects we interact with still exist (fundamentally constructed from computational resources, but still predictable, manipulable, interactive with other such objects, and experienced by us in all their sensory glory). However, it seems quite possible to me that if we are living in a sim, it might well be a small sim -- one run by a child, say, for entertainment. We might live for three hours' time on a game clock, existing mainly as citizens who will give entertaining reactions when, to their surprise, Godzilla tromps through. Or it might be just me and my computer and my room, in an hour-long sim run by a scientist interested in human cognition about philosophical problems.

    Bostrom has responded that to really evaluate the case we need a better sense of which simulation scenarios are more or less likely. One large-sim-friendly thought is this: Maybe the most efficient way to create simulated people is to evolve up a large-scale society over a long period of (sim-clock) time. Another is this: Maybe we should expect a technologically advanced society capable of running sims to have enforceable ethical standards against running small sims that contain actually conscious people.

    However, I don't see compelling reason to accept such (relatively) comfortable thoughts. Consider the possibility I will call the Many-Branching Sim.

    Suppose it turns out the best way to create actually conscious simulated people is to run a whole simulated universe forward billions of years (sim-years on the simulation clock) from a Big Bang, or millions of years on an Earth plus stars, or thousands of years from the formation of human agriculture -- a large-sim scenario. And suppose that some group of researchers actually does this. Consider, now, a second group of researchers who also want to host a society of simulated people. It seems they have a choice: Either they could run a new sim from the ground up, starting at the beginning and clocking forward, or they could take a snapshot of one stage of the first group's sim and make a copy. Which would be more efficient? It's not clear: It depends on how easy it is to take and store a snapshot and implement it on another device. But on the face of it, I don't see why we ought to suppose that copying would take more time or more computational resources than evolving a sim up from the ground.

    Consider the 21st-century game SimCity. If you want a bustling metropolis, you can either grow one from scratch or you can use one of the many copies created by the programmers or users. Or you could grow one from scratch and then save stages of it on your computer, shutting the thing down when things don't go the way you like and starting again from a save point; or you could make copied variants of the same city that grow in different directions.

    The Many-Branching Sim scenario is the possibility that there is a root sim that is large and stable, starting from some point in the deep past, and then this root sim was copied into one or more branch sims that start from a save point. If there are many branch sims, it might be that I am in one of them, rather than in a root sim or a non-branching sim. Maybe one company made the root sim for Earth, took a snapshot in November 2013 on the sim clock, then sold thousands or millions of copies to researchers and computer gamers who now run short-term branch sims for whatever purposes they might have. In such a scenario, the future of the branch sim in which I am living might be rather short -- a few minutes or hours or years. The past might be conceptualized either as short or as long, depending on whether the past in the root sim counts as "this world's" past.
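The root-and-branch structure just described can be sketched as a small tree of sims. This is only an illustrative toy (all names here are my own, not from the post): a root sim evolves forward on its own clock, snapshots of it seed branch sims that share its past up to the save point, and each branch then runs, or is deleted, independently.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Sim:
    """A toy model of a simulation that can be snapshotted and branched."""
    clock: int                      # sim-clock time
    parent: Optional["Sim"] = None  # None for a root (or non-branching) sim
    children: List["Sim"] = field(default_factory=list)

    def run(self, steps: int) -> None:
        """Advance this sim's clock; each branch evolves independently."""
        self.clock += steps

    def branch(self) -> "Sim":
        """Copy the current state into a new branch sim (a 'save point')."""
        child = Sim(clock=self.clock, parent=self)
        self.children.append(child)
        return child

root = Sim(clock=0)
root.run(1000)                          # evolve the root sim up from the start
branches = [root.branch() for _ in range(3)]  # sell copies of the snapshot
branches[0].run(5)                      # one short-lived branch: a few ticks, then deletion
root.run(50)                            # the root continues regardless
```

Each branch shares the root's entire history up to the snapshot but has its own future, which is exactly the ambiguity noted above: whether a branch-dweller's past is short or long depends on whether the root sim's past counts as "this world's" past.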

    Issues of personal identity arise. If the snapshot of the root sim was taken at root sim clock time November 1, 2013, then the root sim contains an "Eric Schwitzgebel" who was 45 years old at the time. The branch sims would also contain many other "Eric Schwitzgebels" developing forward from that point, of which I would be one. How should I think of my relationship to those other Erics? Should I take comfort in the fact that some of them will continue on to full and interesting lives (perhaps of very different sorts) even if most of them, including probably this particular instantiation of me, now in a hotel in New York City, will soon be stopped and deleted? Or to the extent I am interested in my own future rather than merely the future of people similar to me, should I be concerned primarily about what is happening in this particular branch sim? As Godzilla steps down on me, shall I try to take comfort in the possibility that the kid running the show will delete this copy of the sim after he has enjoyed viewing the rampage, then restart from a save point with New York intact? Or would deleting this branch be the destruction of my whole world?

    10:39 am
    Being Two People at Once, with the Help of Linda Nagata
    In the world of Linda Nagata's Nanotech Succession, you can be two people at once. And whether you are in fact two people at once, I'd suggest, depends on the attitude each part takes toward the splitting-fusing process.

    "Two people at once" isn't how Nagata puts it. In her terminology, one being, the original person, continues in standard embodied form, while another being, a "ghost," inhabits some other location, typically someone else's "atrium". Suppose you want to have an intimate conversation long-distance. In Nagata's world, you can do it like this: Create a duplicate of your entire psychology (memories, personality traits, etc. -- for the sake of argument, let's allow that this can be done) and transfer that information to someone else. The recipient then implements your psychology in a dedicated processing space, her atrium. At the same time, your physical appearance is overlaid upon the recipient's sensory inputs. To her (though to no one else around) it will look like you are in the room. The person hosting you in her atrium will then interact with you, for example by saying "Hi, long time no see!" Her speech will be received as inputs to the virtual ghost-you in her atrium, and this ghost-you will react in just the same way you would react, for example by saying "You haven't aged a bit!" and stepping forward for a hug. Your host will then experience that speech overlaid on her auditory inputs, your bodily movement overlaid on her visual inputs, and the warmth of your hug overlaid on her tactile inputs. She will react accordingly, and so forth.

    The ghost in the atrium will, of course, consciously experience all this (no Searlean skepticism about conscious AI here). When the conversation is over, the atrium will be emptied and the full memory of these experiences will be messaged back to the original you. The original you -- which meanwhile has been having its own stream of experiences -- will accept the received memories as autobiographical. The newly re-merged you, on Earth, will remember that conversation you had on Mars, which occurred on the same day you were also busy doing lots of other things on Earth.

    If you know the personal identity literature in philosophy, you might think of instantiating the ghost as a "fission" case -- a case in which one person splits into two different people, similar to the case of having each hemisphere of your brain transplanted separately into a different body, or the case of stepping into a transporter on Earth and having copies of you emerge simultaneously on Mars and Venus to go their separate ways ever after. Philosophers usually suppose that such fissions produce two distinct identities.

    The Nagata case is different. You fission, and both of the resulting fission products know they are going to merge back together again; and then once they do merge, both strands of the history are regarded equally as part of your autobiography. The merged entity regards itself as being responsible for the actions of the split-off ghost -- can be embarrassed by its gaffes, held to its promises, and prosecuted for its crimes, and it will act out the ghost's decisions without needing to rethink them.

    Contrast assimilation into the Borg of the Star Trek universe. The Borg, a large group entity, absorbs the memories of various assimilated beings (like individual human beings). But the Borg treats the personal history of the assimilated being non-autobiographically -- for example without accepting responsibility for the assimilated entity's past actions and plans.

    What makes the difference between an identity-preserving fission-and-merge and an identity-breaking fission-and-merge is, I propose, the entities' implicit and explicit attitudes about the merge. If pre-fission I think "I am going to be Eric Schwitzgebel, in two places", and then in the fissioned state I think "I am here but another copy of me is also running elsewhere", and then after fusion I think "Both of those Eric Schwitzgebels are equally part of my own past" -- and if I also implicitly accept all this, e.g., by not feeling compelled to rethink one Eric Schwitzgebel's decisions more than the other's -- and perhaps especially if the rest of society shares my view of these matters, then I have been one entity in two places.

    To see that this is really about the content of the relevant attitudes and not about, say, the kind of continuity of memory, values, and personality usually emphasized in psychological approaches to personal identity, consider what would happen if I had a very different attitude toward ghosts. If I saw the ghost as a mere slave distinct from me, then during the split my ghost might be thinking "damn, I'm only a ghost and my life will expire at the end of this conversation"; and after the merge, I'll tend to think of my ghost's behaviors as not really having been my own, despite my memories of those behaviors from a first-person point of view. The ghost will not bother making decisions or promises intended to bind me, knowing I would not accept them as my own if he did. And I'll be embarrassed by the ghost's behavior not in the same way I would be embarrassed by my own behavior but instead in something like the way I would be embarrassed by a child's or employee's behavior -- especially, perhaps, if the ghost does something that I wouldn't have done in light of its knowledge that, being merely a ghost, it would imminently die. The metaphysics of identity will thus turn upon the participant beings' attitudes about what preserves identity.

    10:39 am
    On the Intrinsic Value of Moral Reflection
    Here's a hypothetical, not too far removed from reality: What if I discovered, to my satisfaction, that moral reflection -- the kind of intellectual thinking about ethical issues that is near the center of moral philosophy -- tended to lead people toward less true (or, if you prefer, more noxious) moral views than they started with? And what if, because of that, it tended also to lead people toward somewhat worse moral behavior overall? And suppose I saw no reason to think myself likely to be an exception to that tendency. Should I abandon moral reflection?

    What is the point of moral reflection?

    If the point is to discover what is really morally the case -- well, there's reason to doubt that philosophical styles of moral reflection are highly effective at achieving that goal. Philosophers' moral theories are often simplistic, problematic, totalizing -- too rigid in some places, too flexible in others, recruitable for clever justifications of noxious behavior, from sexual harassment to Nazism to sadistic parenting choices. Uncle Irv, who never read Kant or Mill and has little patience for the sorts of intellectual exercises we philosophers love, might have much better moral knowledge than most philosophers; and you and I might have had better moral knowledge than we do, had we shared his skepticism about philosophy.

    If the point of philosophical moral reflection is to transform oneself into a morally better person -- well, there are reasons to doubt it has that effect, too.

    But I would not give it up. I would not give it up, even at some moderate cost to my moral knowledge and moral behavior. Uncle Irv is missing something. And a world of Uncle Irvs would be a world vastly worse than this world, in a way I care about -- much as, perhaps, a world without metaphysical speculation would be worse than this world, even if metaphysical speculation is mostly bunk, or a world without bad art would be worse than this world or a world of a hundred billion contented cows would be worse than this world.

    If I think about what I want in a world, I want people struggling to think through morality, even if they mostly fail -- even if that struggle rather more often brings them down than up.

    10:39 am
    An Argument That the Ideal Jerk Must Remain Ignorant of His Jerkitude
    As you might know, I'm working on a theory of jerks. Here's the central idea in a nutshell:
    The jerk is someone who culpably fails to respect the perspectives of other people around him, treating them as tools to be manipulated or idiots to be dealt with, rather than as moral and epistemic peers.
    The characteristic phenomenology of the jerk is "I'm important and I'm surrounded by idiots!" To the jerk, it's a felt injustice that he must wait in the post-office line like anyone else. To the jerk, the flight attendant asking him to hang up his phone is a fool or a nobody unjustifiably interfering with his business. Students and employees are lazy complainers. Low-level staff failed to achieve meaningful careers through their own incompetence. (If the jerk himself is in a low-level position, it's either a rung on the way up or the result of injustices against him.)

    My thought today is: It is partly constitutive of being a jerk that the jerk lacks moral self-knowledge of his jerkitude. Part of what it is to fail to respect the perspectives of others around you is to fail to see your dismissive attitude toward them as morally inappropriate. The person who disregards the moral and intellectual perspectives of others, if he also acutely feels the wrongness of doing so -- well, by that very token, he exhibits some non-trivial degree of respect for the perspectives of others. He is not the picture-perfect jerk.

    It is possible for the picture-perfect jerk to acknowledge, in a superficial way, that he is a jerk. "So what, yeah, I'm a jerk," he might say. As long as this label carries no real sting of self-disapprobation, the jerk's moral self-ignorance remains. Maybe he thinks the world is a world of jerks and suckers and he is only claiming his own. Or maybe he superficially accepts the label "jerk", without accepting the full moral loading upon it, as a useful strategy for silencing criticism. It is exactly contrary to the nature of the jerk to sympathetically imagine moral criticism for his jerkitude, feeling shame as a result.

    Not all moral vices are like this. The coward might be loath to confront her cowardice and might be motivated to self-flattering rationalization, but it is not intrinsic to cowardice that one fails fully to appreciate one’s cowardice. Similarly for intemperance, cruelty, greed, dishonesty. One can be painfully ashamed of one’s dishonesty and resolve to be more honest in the future; and this resolution might or might not affect how honest one in fact is. Resolving does not make it so. But the moment one painfully realizes one’s jerkitude, one already, in that very moment and for that very reason, deviates from the profile of the ideal jerk.

    There's an interesting instability here: Genuinely worrying that you are a jerk helps to make it not so; but if you then take comfort in that fact and stop worrying, you have undermined the basis of that comfort.

    Saturday, November 9th, 2013
    12:12 am
    Expert Disagreement as a Reason for Doubt about the Metaphysics of Mind (Or: David Chalmers Exists, ...)
    Probably you have some opinions about the relative merit of different metaphysical positions about the mind, such as materialism vs. dualism vs. idealism vs. alternatives that reject all three options or seek to compromise among them. Of course, no matter what your position is, there are philosophers who will disagree with you -- philosophers whom you might normally regard as your intellectual peers or even your intellectual superiors in such matters – people, that is, who would seem to be at least as well-informed and intellectually capable as you are. What should you make of that fact?

    Normally, when experts disagree about some proposition, doubt about that proposition is the most reasonable response. Not always, though! Plausibly, one might disregard a group of experts if those experts are: (1.) a tiny minority; (2.) plainly much more biased than the remaining experts; (3.) much less well-informed or intelligent than the remaining experts; or (4.) committed to a view that is so obviously undeserving of credence that we can justifiably disregard anyone who espouses it. None of these four conditions seems to apply to dissent within the metaphysics of mind. (Maybe we could exclude a few minority positions for such reasons, but that will hardly resolve the issue.)

    Thomas Kelly (2005) has argued that you may disregard peer dissent when you have “thoroughly scrutinized the available evidence and arguments” on which your disagreeing peer’s judgment is based. But we cannot disregard peer disagreement in philosophy of mind on the grounds that this condition is met. The condition is not met! No philosopher has thoroughly scrutinized the evidence and arguments on which all of her disagreeing peers’ views are based. The field is too large. Some philosophers are more expert on the literature on a priori metaphysics, others on arguments in the history of philosophy, others on empirical issues; and these broad literatures further divide into subliteratures and sub-subliteratures with which philosophers are differently acquainted. You might be quite well informed overall. You’ve read Jackson’s (1986) Mary argument, for example, and some of the responses to it. You have an opinion. Maybe you have a favorite objection. But unless you are a serious Mary-ologist, you won’t have read all of the objections to that argument, nor all the arguments offered against taking your favorite objection seriously. You will have epistemic peers and probably epistemic superiors whose views are based on arguments which you have not even briefly examined, much less thoroughly scrutinized.

    Furthermore, epistemic peers, though overall similar in intellectual capacity, tend to differ in the exact profile of virtues they possess. Consequently, even assessing exactly the same evidence and arguments, convergence or divergence with one’s peers should still be epistemically relevant if the evidence and arguments are complicated enough that their thorough scrutiny challenges the upper range of human capacity across several intellectual virtues – a condition that the metaphysics of mind appears to meet. Some philosophers are more careful readers of opponents’ views, some more facile with complicated formal arguments, some more imaginative in constructing hypothetical scenarios, etc., and world-class intellectual virtue in any one of these respects can substantially improve the quality of one’s assessments of arguments in the metaphysics of mind. Every philosopher’s preferred metaphysical position is rejected by a substantial proportion of philosophers who are overall approximately as well informed and intellectually virtuous as she is, and who are also in some respects better informed and more intellectually virtuous than she is. Under these conditions, Kelly’s reasons for disregarding peer dissent do not apply, and a high degree of confidence in one’s position is epistemically unwarranted.

    Adam Elga (2007) has argued that you can discount peer disagreement if you reasonably regard the fact that the seeming-peer disagrees with you as evidence that, at least on that one narrow topic, that person is not in fact a full epistemic equal. Thus, a materialist might see anti-materialist philosophers of mind, simply by virtue of their anti-materialism, as evincing less than perfect level-headedness about the facts. This is not, I think, entirely unreasonable. But it's also fully consistent with still giving the fact of disagreement some weight as a source of doubt. And since your best philosophical opponents will exceed you in some of their intellectual virtues and know some facts and arguments -- which they consider relevant or even decisive -- that you have not fully considered, you ought to give the fact of dissent quite substantial weight as a source of doubt.

    Imagine an array of experts betting on a horse race: Some have seen some pieces of the horses’ behavior in the hours before the race, some have seen other pieces; some know some things about the horses’ performance in previous races, some know other things; some have a better eye for a horse’s mood, some have a better sense of the jockeys. You see Horse A as the most likely winner. If you learn that other experts with different, partly overlapping evidence and skill sets also favor Horse A, that should strengthen your confidence; if you learn that a substantial portion of those other experts favor B or C instead, that should lessen your confidence. This is so even if you don’t see all the experts quite as peers, and even if you treat an expert’s preference for B or C as grounds to wonder about her good judgment.

    Try this thought experiment. You are shut in a seminar room, required to defend your favorite metaphysics of mind for six hours (or six days, if you prefer) against the objections of Ned Block, David Chalmers, Daniel Dennett, and Saul Kripke. Just in case we aren’t now living in the golden age of metaphysics of mind, let’s add Kant, Leibniz, Hume, Zhu Xi, and Aristotle too. (First we’ll catch them up on recent developments.) If you don’t imagine yourself emerging triumphant, then you might want to acknowledge that the grounds for your favorite position might not really be very compelling.

    It is entirely possible to combine appropriate intellectual modesty with enthusiasm for a preferred view. Consider everyone’s favorite philosophy student: She vigorously champions her opinions, while at the same time being intellectually open and acknowledging the doubt that appropriately flows from her awareness that others think otherwise, despite those others being in some ways better informed and more capable than she is. Even the best professional philosophers still are such students, or should aspire to be, only in a larger classroom. So pick a favorite view! Distribute your credences differentially among the options. Suspect the most awesome philosophers of poor metaphysical judgment. But also: Acknowledge that you don't really know.

    [This post is adapted from my paper in draft, The Crazyist Metaphysics of Mind.]
