What Is Chalmers's Argument in Favor of Uploading Consciousness?

The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function, and our minds will cease along with them. This will result in our deaths. Little wonder, then, that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced substrate has proved so attractive to futurists and transhumanists.

But is it really feasible? This is a question I've looked at many times before, but the recent book Intelligence Unbound: The Future of Uploaded and Machine Minds offers perhaps the most detailed, sophisticated and thoughtful treatment of the topic. It is a collection of essays, from a diverse array of authors, probing the key issues from several different perspectives. I highly recommend it.

Within its pages you will find a pair of essays debating the philosophical aspects of mind-uploading (you'll find others too, but I want to zone in on this pair because one is a direct response to the other). The first of those essays comes from David Chalmers and is broadly optimistic about the prospect of mind-uploading. The second comes from Massimo Pigliucci and is much less enthusiastic. In this two-part series of posts, I want to examine the debate between Chalmers and Pigliucci. I start by looking at Chalmers's contribution.

1. Methods of Mind-Uploading and the Issues for Debate
Chalmers starts his essay by considering the different possible methods of mind-uploading. This is useful because it helps to clarify — to some extent — exactly what we are debating. He identifies three different methods (note: in a previous post I looked at work from Sim Bamford suggesting that there are more methods of uploading, but we can ignore those other possibilities for now):

Destructive Uploading: As the name suggests, this is a method of mind-uploading that involves the destruction of the original (biological) mind. An example would be uploading via serial sectioning. The brain is frozen and its structure is analyzed layer by layer. From this analysis, one builds up a detailed map of the connections between neurons (and other glial cells if necessary). This information is then used to build a functional computational model of the brain.

Gradual Uploading: This is a method of mind-uploading in which the original copy is gradually replaced by functionally equivalent components. One example of this would be nanotransfer. Nanotechnology devices could be inserted into the brain and attached to individual neurons (and other relevant cells if necessary). They could then learn how those cells work and use this information to simulate the behaviour of the neuron. This would lead to the construction of a functional analogue of the original neuron. Once the construction is complete, the original neuron can be destroyed and the functional analogue can take its place. This process can be repeated for every neuron, until a complete copy of the original brain is constructed.

Nondestructive Uploading: This is a method of mind-uploading in which the original copy is retained. Some form of nanotechnology brain-scanning would be needed for this. This would build up a dynamical map of current brain function — without disrupting or destroying it — and use that dynamical map to construct a functional analogue.

Whether these forms of uploading are actually technologically feasible is anyone's guess. They are certainly not completely implausible. I can certainly imagine a model of the brain being built from a highly detailed scan and analysis. It might take a huge amount of computational power and technical resources, but it seems within the realm of technological possibility. The deeper question is whether our minds would really survive the process. This is where the philosophical debate kicks in.

There are, in fact, two philosophical issues to debate:

The Consciousness Issue: Would the uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?

The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process? Would our identities be preserved?

The two issues are connected. Consciousness is valuable to us. Indeed, it is arguably the most valuable thing of all: it is what allows us to enjoy our interactions with the world, and it is what confers moral status upon us. If consciousness were not preserved by the mind-uploading process, it is difficult to see why we would care. So consciousness is a necessary condition for a valuable form of mind-uploading. That does not, however, make it a sufficient condition. After all, two beings can be conscious without sharing any important connection (you are conscious, and I am conscious, but your consciousness is not valuable to me in the same way that it is valuable to you). What we really want to preserve through uploading is our individual consciousnesses. That is to say: the stream of conscious experiences that constitutes our identity. But would this be preserved?

These two issues form the centre of the Chalmers-Pigliucci debate.

2. Would consciousness survive the uploading process?
Let's start by looking at Chalmers's take on the consciousness issue. Chalmers is famously one of the new mysterians, a group of philosophers who doubt our ability to have a fully scientific theory of consciousness. Indeed, he coined the term "the Hard Problem" of consciousness to describe the difficulty we have in accounting for the first-personal quality of conscious experience. Given his scepticism, one might have thought he'd have his doubts about the possibility of creating a conscious upload. But he actually thinks we have reason to be optimistic.

He notes that there are two leading contemporary views about the nature of consciousness (setting non-naturalist theories to the side). The first — which he calls the biological view — holds that consciousness is only instantiated in a particular kind of biological system: no nonbiological system is likely to be conscious. The second — which he (and everyone else) calls the functionalist view — holds that consciousness is instantiated in any system with the right causal structure and causal roles. The important thing is that the functionalist view allows for consciousness to be substrate independent, whereas the biological view does not. Substrate independence is necessary if an upload is going to be conscious.

So which of these views is right? Chalmers favours the functionalist view, and he has a somewhat elaborate argument for it. The argument starts with a thought experiment, which comes in two stages. The first stage asks us to imagine a "perfect upload of a brain inside a computer" (p. 105), by which is meant a model of the brain in which every relevant component of a biological brain has a functional analogue within the computer. This computer-brain is also hooked up to the external world through the same kinds of sensory input-output channels. The result is a computer model that is a functional isomorph of a real brain. Would we doubt that such a system was conscious if the real brain was conscious?

Maybe. That brings us to the second stage of the thought experiment. Now, we are asked to imagine the construction of a functional isomorph through gradual uploading:

Here we upload different components of the brain one by one, over time. This might involve gradual replacement of entire brain areas with computational circuits, or it might involve uploading neurons one at a time. The components might be replaced with silicon circuits in their original location…It might take place over months or years, or over hours.

If a gradual uploading process is executed correctly, each new component will perfectly emulate the component it replaces, and will interact with both biological and nonbiological components around it in just the same way that the previous component did. So the system will behave in exactly the same way that it would have without the uploading.
(Intelligence Unbound pp. 105-106)

Critical to this exercise in imagination is the fact that the process results in a functional isomorph, and that you can make the process exceptionally gradual, both in terms of the time taken and the size of the units being replaced.

With the building blocks in place, we now ask ourselves the critical question: if we were undergoing this process of gradual replacement, what would happen to our conscious experience? There are three possibilities. Either it would suddenly stop, or it would gradually fade out, or it would be retained. The first two possibilities are consistent with the biological view of consciousness; the last is not. It is only consistent with the functionalist view. Chalmers's argument is that the last possibility is the most plausible.

In other words, he defends the following argument (a formal sketch of its logical skeleton follows the list):

  • (1) If the parts of our brain are gradually replaced by functionally isomorphic component parts, our conscious experience will either: (a) be suddenly lost; (b) gradually fade out; or (c) be retained throughout.
  • (2) Sudden loss and gradual fadeout are not plausible; retention is.
  • (3) Therefore, our conscious experience is likely to be retained throughout the process of gradual replacement.
  • (4) Retention of conscious experience is only compatible with the functionalist view.
  • (5) Therefore, the functionalist view is likely to be correct; and preservation of consciousness via mind-uploading is plausible.
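
The logical skeleton here is just disjunction elimination plus modus ponens, so it can be checked mechanically. Below is a minimal sketch in Lean 4, which flattens the premises into bare propositions; the names (SuddenLoss, Fadeout, Retained, Functionalism) are hypothetical labels of mine, not Chalmers's.

```lean
-- Hypothetical propositions standing in for the claims in premises (1)-(5).
variable (SuddenLoss Fadeout Retained Functionalism : Prop)

theorem retention_implies_functionalism
    (p1 : SuddenLoss ∨ Fadeout ∨ Retained)    -- premise (1): three outcomes
    (p2a : ¬SuddenLoss)                       -- premise (2): no sudden loss
    (p2b : ¬Fadeout)                          -- premise (2): no fadeout
    (p4 : Retained → Functionalism) :         -- premise (4)
    Retained ∧ Functionalism :=               -- conclusions (3) and (5)
  match p1 with
  | Or.inl hs          => absurd hs p2a       -- sudden loss is ruled out
  | Or.inr (Or.inl hf) => absurd hf p2b       -- fadeout is ruled out
  | Or.inr (Or.inr hr) => ⟨hr, p4 hr⟩         -- retention, hence functionalism
```

The inference itself is plainly valid; all the philosophical work lies in establishing premises (1), (2) and (4).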

Chalmers adds some detail to the conclusion, which we'll talk about in a minute. The crucial thing for now is to focus on the key premise, number (2). What reason do we have for thinking that retention is the only plausible option?

With regard to sudden loss, Chalmers makes a simple argument. If we were to suppose, say, that the replacement of the 50,000th neuron led to the sudden loss of consciousness, we could break down the transition point into ever more gradual steps. So instead of replacing the 50,000th neuron in one go, we could split the neuron itself into ten sub-components and replace them gradually and individually. Are we to suppose that consciousness would suddenly be lost in this process? If so, then break down those sub-components into other sub-components and start replacing them gradually. The point is that eventually we will reach some limit (e.g. when we are replacing the neuron molecule by molecule) where it is implausible to suppose that there will be a sudden loss of consciousness (unless you believe that one molecule makes a difference to consciousness: a belief that is refuted by reality, since we lose brain cells all the time without thereby losing consciousness). This casts the whole notion of sudden loss into doubt.

With regard to gradual fadeout, the argument is more subtle. Remember, it is critical to Chalmers's thought experiment that the upload is functionally isomorphic to the original brain: for every brain state that used to be associated with conscious experience there will be a functionally equivalent state in the uploaded version. If we accept gradual fadeout, we would have to suppose that, despite this equivalence, there is a gradual loss of certain conscious experiences (e.g. the ability to experience black and white, or certain high-pitched sounds, etc.) despite the presence of functionally equivalent states. Chalmers argues that this is implausible because it asks us to imagine a system that is deeply out of touch with its own conscious experiences. I find this slightly unsatisfactory insofar as it may presuppose the functionalist view that Chalmers is trying to defend.

But, in any event, Chalmers suggests that the process of partial uploading will convince people that retention of consciousness is likely. Once we have friends and family who have had parts of their brains replaced, and who seem to retain conscious experience (or, at least, all outward signs of having conscious experience), we are likely to accept that consciousness is preserved. After all, I don't doubt that people with cochlear or retinal implants have some sort of aural or visual experiences. Why should I doubt it if other parts of the brain are replaced by functional equivalents?

Chalmers concludes with the suggestion that all of this points to the likelihood of consciousness being an organizational invariant. What he means by this is that systems with the exact same patterns of causal organization are likely to have the same states of consciousness, no matter what those systems are made of.

I'll hold off on the major criticisms until part two, since this is the part of the argument about which Pigliucci has the most to say. Nevertheless, I will make one comment. I'm inclined towards functionalism myself, but it seems to me that in crafting the thought experiment that supports his argument, Chalmers helps himself to a pretty colossal assumption. He assumes that we know (or can imagine) what it takes to create a "perfect" functional analogue of a conscious system like the brain. But, of course, we don't really know what it takes. Any functional model is likely to simplify and abstract from the messy biological details. The trouble is knowing which of those details are critical for ensuring functional equivalence. We can create functional models of the heart because all the critical elements of the heart are determinable from a third-person perspective (i.e. we know what is necessary to make the blood pump from a third-person perspective). That doesn't seem to be the case with consciousness. In fact, that's what Chalmers's Hard Problem is supposed to highlight.

3. Will our identities be preserved? Will we survive the process?
Let's assume Chalmers is right to be optimistic about consciousness. Does that mean he is right to be optimistic about identity/survival? Will the uploaded mind be the same as we are? Will it share our identity? Chalmers has more doubts about this, but again he sees some reason to be optimistic.

He starts by noting that there are three different philosophical approaches to personal identity. The first is biologism (or animalism), which holds that preservation of one's identity depends on the preservation of the biological organism that one is. The second is psychological continuity, which holds that preservation of one's identity depends on maintaining threads of overlapping psychological states (memories, beliefs, desires, etc.). The third, slightly more unusual, is Robert Nozick's "closest continuer" theory, which holds that preservation of identity depends on the existence of a closely-related subsequent entity (where "closeness" is defined in various ways).

Chalmers then defends two different arguments. The first gives some reason to be pessimistic about survival, at least in the case of destructive and nondestructive forms of uploading. The second gives some reason to be optimistic, at least in the case of gradual uploading. The end result is a qualified optimism about gradual uploading.

Let's start with the pessimistic argument. Once again, it involves a thought experiment. Imagine a man named Dave. Suppose that one day Dave undergoes a nondestructive uploading process. A copy of his brain is made and uploaded to a computer, but the biological brain continues to exist. There are, thus, two Daves: BioDave and DigiDave. It seems natural to suppose that BioDave is the original, and his identity is preserved in this original biological form; and it is equally natural to suppose that DigiDave is simply a branchline copy. In other words, it seems natural to suppose that BioDave and DigiDave have separate identities.

But now suppose we imagine the same scenario, only this time the original biological copy is destroyed. Do we have any reason to change our view about identity and survival? Surely not. The only difference this time round is that BioDave is destroyed. DigiDave is the same as he was in the original thought experiment. That suggests the following argument (numbering follows on from the previous argument diagram; a formal sketch follows the list):

  • (9) In nondestructive uploading, DigiDave is not identical to Dave.
  • (10) If in nondestructive uploading DigiDave is not identical to Dave, then in destructive uploading DigiDave is not identical to Dave.
  • (11) Therefore, in destructive uploading, DigiDave is not identical to Dave.
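
Structurally this is a bare modus ponens, which is worth making explicit because it shows that anyone who resists the conclusion must reject one of the two premises outright. Here is a minimal Lean 4 sketch, with hypothetical proposition names of my own:

```lean
-- Hypothetical propositions: "DigiDave is not identical to Dave" in the
-- nondestructive and the destructive scenario respectively.
variable (NonDestrDistinct DestrDistinct : Prop)

theorem pessimistic_argument
    (p9  : NonDestrDistinct)                    -- premise (9)
    (p10 : NonDestrDistinct → DestrDistinct) :  -- premise (10)
    DestrDistinct :=                            -- conclusion (11)
  p10 p9
```

As the next paragraphs note, the closest continuer theorist denies premise (10), while the fission theorist tries to defuse the significance of the conclusion rather than deny a premise.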

This looks pretty sound to me. And as we shall see in part two, Pigliucci takes a similar view. Nevertheless, there are two possible ways to escape the conclusion. The first would be to deny premise (10) by adopting the closest continuer theory of personal identity. The thought then would be that in destructive (but not nondestructive) uploading DigiDave is the closest continuer and hence the vessel in which identity is preserved. I think this just reveals how odd the closest continuer theory really is.

The other option would be to argue that this is a fission case. It is a scenario in which one original identity fissions into two subsequent identities. The concept of fissioning identities was originally discussed by Derek Parfit in the case of the severing and transplanting of brain hemispheres. In the brain hemisphere case, some part of the original person lives on in two separate forms. Neither is strictly identical to the original, but they do stand in "relation R" to the original, and that relation might be what is critical to survival. It is more difficult to say that nondestructive uploading involves fissioning. But it might be the best bet for the optimist. The argument then would be that the original Dave survives in two separate forms (BioDave and DigiDave), each of which stands in relation R to him. But I'd have to say this is quite a stretch, given that BioDave isn't really some new entity. He's just the original Dave with a new name. The new name is unlikely to make an ontological difference.

Let's now turn our attention to the optimistic argument. This one requires us to imagine a gradual uploading process. Fortunately, we've done this already, so you know the drill: imagine that the subcomponents of the brain are replaced gradually (say 1% at a time), over a period of several years. It seems highly probable that each step in the replacement process preserves identity with the previous step, which in turn suggests that identity is preserved once the process is complete.

To state this in more formal terms (a mechanical sketch of the chaining follows the list):

  • (14) For all n < 100, Dave_n+1 is identical to Dave_n.
  • (15) If for all n < 100, Dave_n+1 is identical to Dave_n, then Dave_100 is identical to Dave.
  • (16) Therefore, Dave_100 is identical to Dave.
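
The argument works by chaining the 100 step-wise identity claims together using the transitivity of identity. That chaining is formally unimpeachable, as the following minimal Lean 4 sketch shows; the sequence dave : Nat → Person is a hypothetical modelling device of mine, with dave 0 as the original Dave and dave 100 as the fully uploaded result.

```lean
-- Premise (14) appears as `step`; the theorem is conclusion (16).
theorem gradual_identity {Person : Type} (dave : Nat → Person)
    (step : ∀ n, n < 100 → dave (n + 1) = dave n) :
    dave 100 = dave 0 := by
  -- Chain the step-wise identities back to dave 0 by induction.
  have h : ∀ k, k ≤ 100 → dave k = dave 0 := by
    intro k
    induction k with
    | zero => intro _; rfl
    | succ n ih =>
      intro hk
      have hn : n < 100 := Nat.lt_of_succ_le hk
      exact (step n hn).trans (ih (Nat.le_of_lt hn))
  exact h 100 (Nat.le_refl 100)
```

Since the chaining itself cannot be faulted, the only place to resist the argument is premise (14): the sorites-style worry that some single 1% replacement fails to preserve identity.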

If you're not convinced by this 1%-at-a-time version of the argument, you can adjust it until it becomes more persuasive. In other words, setting aside certain extreme physical and temporal limits, you can make the process of gradual replacement as slow as you like. Surely there is some point at which the degree of change between the steps becomes so minimal that identity is clearly being preserved? If not, then how do you explain the fact that our identities are preserved as our body cells replace themselves over time? Maybe you explain it by appealing to the biological nature of the replacement. But if we have functionally equivalent technological analogues, it's hard to see where the trouble is.

Chalmers adds other versions of this argument. These involve speeding up the process of replacement. His intuition is that if identity is preserved over the course of a really gradual replacement, then it may well be preserved over a much shorter period of replacement as well, for example one that takes a few hours or a few minutes. That said, there may be important differences when the process is sped up. It may be that too much change takes place too quickly and the new components fail to smoothly integrate with the old ones. The result is a break in the strands of continuity that are necessary for identity-preservation. I have to say I would certainly be less enthusiastic about a fast replacement. I would like the time to see whether my identity is being preserved following each replacement.

4. Conclusion
That brings us to the end of Chalmers's contribution to the debate. He says more in his essay, particularly about cryopreservation and the possible legal and social implications of uploading. But there is no sense in addressing those topics here. Chalmers doesn't develop his thoughts at any great length, and Pigliucci wisely ignores them in his reply. We'll be discussing Pigliucci's reply in part two.

Source: https://philosophicaldisquisitions.blogspot.com/2014/09/chalmers-vs-pigliucci-on-philosophy-of.html
