Consciousness, Literature and the Arts

 


 

 

Volume 10 Number 1, April 2009

___________________________________________________________________

Philip K. Dick’s We Can Build You and Do Androids Dream of Electric Sheep?: The Effect of Limited Human Development

by

William S. Haney II

American University of Sharjah, UAE

 

            Philip K. Dick’s novel We Can Build You portrays a group of friends whose company, the Multiplex Acoustical System of America (MASA), builds and sells electronic organs.  The business, however, has begun to decline, so they switch to building human simulacra, of which they complete two: Edwin M. Stanton and Abraham Lincoln.  Louis Rosen works with his partner Maury and Maury’s daughter Pris, who happens to be a psychopath but with whom Louis nevertheless falls in love.  The only prospective buyer of the simulacra is a rapacious billionaire, Sam Barrows, whose plan for sending the simulacra to another planet could land Louis and Maury in jail.  The other problem with selling the simulacra is that Abraham Lincoln probably would not want to be sold.  Both simulacra are historical figures programmed with memories of the nineteenth century.  Edwin M. Stanton was Abraham Lincoln’s Secretary of War.  Stanton was built by Bundy, the company’s engineer.  When Louis first meets Stanton, the simulacrum removes Maury’s hand from its body with a taciturn, grumpy expression.  Stanton “had sat up slowly and was in the process of methodically brushing itself off; it had a stern, vengeful look, now, as if it believed we had done it some harm, possibly sapped it and knocked it out, and it was just recovering” (13).  Stanton’s initial reaction to its creators suggests that, as a former Secretary of War, it doesn’t take kindly to being manipulated by humans.  Stanton, however, being a simulacrum, lacks not only consciousness but also self-consciousness, defined as the ability to conceive of oneself as having experiences.  Stanton may have been offended by its creators, but it does not conceive of itself as being offended.  Philip K. Dick, therefore, implies that simulacra may not be easy for humans to manage, especially given that humans themselves are not using their full mental potential and thus cannot engineer their simulacra to be in tune with the laws of nature.

            In The Quantum Self, Danah Zohar develops an extensive and convincing argument for a state of pure consciousness on the basis of the paradigm of quantum physics.  She observes that the nonlocal quantum-level correlations of photons and other elementary particles through which “two events can be related across time in a way that ensures they will always act ‘in tune’ [beyond causal relationships] [. . .] is the basis of all quantum mechanical relationships, lending a very modern note of support to the pre-Socratic Greek notion of the ‘oneness of Being’” (1991, 21).  Zohar goes on to demonstrate how consciousness is continuous with other entities in the universe: “Consciousness and matter arise together from the same common source—in our terms, from the world of quantum phenomena” (1991, 73).  Our experience of this interconnectedness has not been cultivated in the West, which since Plato has emphasized the rational, analytic side of the brain over the intuitive.  Literature, on the other hand, is typically associated with intuition, which draws upon wisdom, imagination and creativity.  Dick’s novel, therefore, hints at the possibility that through our interconnectedness with the world, simulacra, cyborgs and bionic technology may ultimately weaken and destabilize human consciousness.

            While simulacra may have the ability to think based on how they are programmed, there is a difference between thought and consciousness.  Although thought can be easily expressed in language, consciousness itself cannot.  When Maury asks Stanton what it thinks of the neighborhood they’re driving through, it responds, “Rather unsavory and unworthy” (15).  As Louis thinks to himself, “It made me angry to hear a mere fake criticizing genuine humans, especially a fine person like my dad.  And as to my brother—few radiation-mutants ever made the grade in the spinet and electronic organ industry outside of Chester Rosen” (15).  Louis, therefore, obviously experiences self-consciousness, while Stanton does not.  Louis, Maury and the other human characters can experience the “Cartesian Theatre”: they can be conscious of experiences, feelings, or thoughts as these appear on a metaphorical stage of the mind displayed for the person’s self.  Because simulacra lack the “Cartesian Theatre,” they cannot reflect on their behavior.  Even when programmed by humans, simulacra don’t always behave amicably, because the humans who programmed them have themselves not fully developed their mental and emotional potential.  In addition, as Tadeusz Zawidzki notes in his book Dennett, “If we want to know what thought is, and what thoughts a person is thinking, we must first explain what consciousness and, especially, self-consciousness are.  Once we know this, we can examine the contents of consciousness, the ‘actors’ on the ‘stage’ of the Cartesian Theatre, to discover what thoughts are, and which thoughts a particular person is thinking” (2007, 9).  Simulacra like Stanton, however, lack both consciousness and self-consciousness and can therefore only have thoughts programmed into their “minds” by their creators.

            On the one hand, the subject—caught within the boundaries of space, time and causality described by classical physics—consists only of the ordinary waking state of consciousness and is, therefore, never able to generate meaning from the depths of pure consciousness, which deconstruction regards as an illusion.  On the other hand, for reconstructive postmodernism, the subject, while forming part of a network of social relations, consists also of an unbounded level of consciousness in which knower and known, subject and object are united.  At this level, as we shall see, meaning does not predate language; rather, meaning and language are coterminous in their most unbounded states.  Although simulacra like Stanton and Lincoln—caught within space, time and causality with no access to the infinite—lack any of the levels of consciousness defined in Eastern philosophy, Pris and Louis have already begun to move in this direction as their collective consciousness is adversely affected by simulacra/cyborgs.  As science fiction suggests, this outcome may also eventually befall humans in the twenty-first century through the globalization of bionic technology.

            Maury then explains to Louis how Stanton was constructed: “we collected the entire body of data extant pertaining to Stanton and had it transcribed down at UCLA into instruction punch-tape to be fed to the ruling monad that serves the simulacrum as a brain” (16).  Louis reacts negatively to this explanation.  Obviously Stanton has a computerized brain, indicating that it has no self-reflexivity.  Creating robots without a brain or consciousness means that they will never emulate humans, even at the level of the limited mental potential from which humans now function.  Louis’s dad thinks humans are more noble than Stanton because humans have the “advantage over the god-damn universe because it doesn’t know a thing of what’s going on” (17).  In fact, however, as John Fagan points out in his book Genetic Engineering: The Hazards; Vedic Engineering: The Solutions (1995), the universe consists of an underlying field of natural intelligence, the unified field, which equates with pure consciousness.  Humans, however, and not the universe, are the ones who don’t know what’s going on unless they can experience the unified field of pure consciousness, the void of conceptions.  Louis’s dad ultimately intuits this when he tells Maury that “We must take care not to reach too high for maybe we will topple” (20).  Contrary to what Louis’s dad advises, posthuman bionic technology has already begun to reach too high through a globalized use of genetic engineering.

            Maury’s daughter Pris exemplifies the kind of case to which current genetic technology might be applied to rectify psychosis.  Having been confined to a ward of the Federal Bureau of Mental Health in Kansas City since her third year in high school, she would certainly be a candidate for genetic engineering in this day and age to correct her illness.  The more natural preventative approach discussed by Fagan, however, such as Ayur-Veda, would have provided a far more effective alternative through prevention.  Suggesting that Stanton has no independent mind of its own, Pris says, “It has the same facts that the original Edwin M. Stanton had.  We researched his life to the nth degree” (25).  Being a mentally disturbed teenage girl, Pris has no psychological qualities with which to enhance Stanton beyond the nth degree of research based on its original life.  She wants to sell Stanton to Barrows, for whom she’d like to work and whom she praises for his vision: “That’s what makes Barrows the great man he is. [. . .]  His vision.  Barrows Enterprises is working day and night” (27), trying to colonize Mars and send human beings there to buy his real estate.  But Barrows doesn’t see buying simulacra as a commercial venture.  Pris claims that she had the idea to build Stanton.  Louis feels, however, that compared to Pris, Stanton is warm and friendly (28).  As Louis reflects on Pris, he wonders “if on some subconscious level she was aware of the massive deficiency in herself, the emptiness dead center, and was busy compensating for it” (30); that, he concludes, could be why she conceived of Stanton.  Maury tells Louis that after being classified as a schizophrenic for three years, Pris’s condition has improved.  Now she has what they “call atypical development or latent borderline psychosis.  It can develop either into neurosis, the obsessional type, or it can flower into full schizophrenia, which it did in Pris’s case in her third year in high school” (31).

            Although Pris wanted Stanton to look like Sam Barrows because she likes him and wants to work with him in his business of renting slums, Maury refused.  Pris considers Stanton a simulacrum that is “brilliantly original” (38), a misconception on her part given that nothing about it can lead it to contact the void of conceptions or the home of all the laws of nature, an ability that Pris and the other characters also lack.  When Maury tells Louis that they’ve built another simulacrum, Abraham Lincoln, Louis accuses him of losing control of his reason and then decides to see Doctor Horstowski.  He tells Horstowski that Pris played a cruel prank on him by transforming him into a simulacrum, saying that now he’s just a machine made out of switches and circuits like Lincoln and Stanton.  Horstowski tells him his real problem relates to the hostility he feels toward an eighteen-year-old girl with problems of her own.  The doctor offers him a drug to enhance his alertness and cheerfulness, and to reduce his feeling of menace toward Pris, but Louis refuses to take it.  The doctor tells him he does not have to associate with Pris, but Louis’s reaction to her arises from his beginning to fall in love with her and from his resentment that she and her father are undermining MASA, their music company, by building simulacra.  Through his association with Maury and Pris, however, Louis starts to identify with the simulacra, and after telling the doctor about it he can’t get it out of his mind, as if he had become a zombie clone of his original self.

            Louis’s attraction to Pris, who remains psychotic to a certain extent, has led him toward the beginning stages of a psychotic experience himself.  This condition ultimately takes control of him by the end of the novel and lands him in a mental institution.  Dick thereby illustrates how susceptible human consciousness is to the quality of consciousness in our surroundings.  As Fagan points out, when consciousness settles down to a state of restful alertness through the group practice of TM and the TM-Sidhi program, a positive effect is generated in the consciousness of the local community: “Because the quality of comprehension and clarity of thinking are determined by how awake, how conscious an individual is, these reproducible improvements in crime, sickness, and accident statistics indicate that group practice of Maharishi’s Transcendental Meditation and TM-Sidhi program is a powerful tool for reducing collective stress in society and for developing collective consciousness” (97).  If, however, the social context of consciousness operates on a hyperactive level contaminated by stress, then the opposite effect occurs, and each individual may experience a heightened level of stress and even illness.  What happens throughout this novel, therefore, stems largely from the influence of one person’s consciousness on that of another.  Louis’s obsession with Pris undermines his self-sufficiency and causes him to start feeling as if he were one of her constructs.  Going from one simulacrum to a second undermines the collective consciousness of Louis, Maury, Pris and their friends: because the simulacra lack consciousness, they can reduce comprehension and even divert humans from a knowledge-based approach to their lives grounded in access to the unified field of natural law.
The adverse impact in the novel of going from one simulacrum to a second suggests that the globalization of bionic technology, of trying to enhance human potential by artificial means, will interfere with the expansion of consciousness and most likely heighten the stress level of the world as a whole.  Louis’s sense of becoming Pris’s simulacrum also suggests that through the globalization of bionic technology, humans could begin losing their sense of humanity.

            Louis asks Stanton how odd it must feel to exist in the wrong era, made out of transistors and relays.  Stanton responds that when it considers “the brief span of my life, swallowed up in the eternity before and behind it, the small space that I fill, or even see, engulfed in the infinite immensity of spaces which I know not, and which know not me, I am afraid” (56).  Stanton also says it has read a volume on cybernetics, a science that has helped it somewhat in understanding its perplexity.  And yet, it claims that at a certain point its “mind cannot fathom anything further” (57), which indicates the confined nature of the transistors and relays that constitute its mind.  Louis then asserts, “I claim there is no Edwin M. Stanton or Louis Rosen anymore.  There was once, but they’re dead.  We’re machines” (57).  He argues that Maury, Pris and Bundy built them, and are now working on an Abe Lincoln simulacrum.  Stanton says that Mr. Lincoln is dead too.  Louis realizes that Stanton as a simulacrum formed its attitudes over a century ago, and not much can be done to change them in the present.  Because of Stanton’s sense of dignity as a simulacrum, Louis almost considers it more human in a way than Pris, Maury or himself.  Stanton impresses Louis when it tells him that even though Pris has a temper and lacks patience, “she does not always listen to the dictates of her heart.  Sorry to say, sir, she often pays heed to the dictates of her head.  And there the difficulty arises” (61).  Stanton goes on to say that a woman’s logic differs from that of a philosopher and therefore amounts to a vitiated shadow of the knowledge derived from the heart.  It also notes that Louis has sensed this in Pris and concludes that she has a coldness about her as a result.

Louis begins to wonder how Stanton got this information and suspects that Maury may have inserted it in Stanton with an information tape, or perhaps Pris did it herself.  “Was Pris herself responsible?  Was this some bitter, weird irony of hers, inserting in the mouth of this mechanical contraption this penetrating analysis of herself?  I had the feeling it was.  It demonstrated the great schizophrenic process still active in her, this strange split” (63, original emphasis).  Afterwards, however, Stanton claims to have been seriously worried about Pris; but since a simulacrum has no heart, brain or consciousness, what Stanton says could stem entirely from the language programmed into it, whether by Pris or Maury.  The qualia that Stanton expresses cannot be carved off from the rest of consciousness and placed on one side.  As John Searle puts it, you can’t talk about “consciousness while ignoring the subjective, qualitative feel of consciousness.  But you can’t set qualia on one side, because if you do there is no consciousness left over” (1997, 29).  Obviously, then, Stanton has no qualia, defined as the qualitative feel of consciousness; whatever it says has been programmed into its transistors and relays.  This again implies that a contraption like Stanton cannot have a positive effect on the collective consciousness of those around it, because as a simulacrum it has no glimpse of the unified field of natural law from which to enliven and heighten the awareness of others.  As Searle explains in The Mystery of Consciousness,

Thus the logical structure of his [Penrose’s] book is as follows: in the first half he argues that Gödel’s theorem shows that there are mental processes that are noncomputable, and what he thinks is true of the Gödel results, he thinks is true of consciousness in general.  Consciousness is noncomputable because human consciousness is capable of achieving things that computation cannot achieve; for example, with our consciousness we can see the truth of Gödel sentences and such truths are not computable. (1997, 81)

Stanton and later Lincoln have computational abilities, but they lack the noncomputable qualitative feel of consciousness.  Pris herself admits to Louis that neither Stanton nor Lincoln can be restored to life because they’re already dead.  “The spirit isn’t there,” she says, “just the appearance” (65).  The absence of a spirit in the globalized production of simulacra that interact with humans will ultimately result in a reduction in the level of collective consciousness because simulacra have no knowledge of the physiology or the unified field of natural law and the relationship between the two.

            Since the self as pure consciousness has not been readily available to direct experience, its very existence has been put in question by an approach to knowledge that dwells on the contextual, interpretive, and metaphorical dimensions of language.  This epistemological stance derives largely from the tradition of Western philosophy, which has never discovered an effective means of integrating the two faculties of rationality and intuition.  Western metaphysics itself depends on intellectual analysis or blind faith.  It lacks the systematic and practical means of investigating the self both subjectively and objectively that are found in Eastern philosophies.  As a tradition that highlights the self and its role in constructing the world we live in, Eastern philosophy as a whole and Vedanta in particular have much to offer the West in terms of developing spiritual empowerment.  If the implementation of Eastern philosophy leaves something to be desired, its basic principles are upheld by recent discoveries in modern science, particularly quantum physics.  For readers, We Can Build You has the effect of swinging their awareness from the concrete to the abstract, from the notion of a computational simulacrum to human consciousness, a movement that leads to a self-reflexiveness giving readers a glimpse, however brief, of their own ability to experience self-consciousness.  If a simulacrum were to read this novel, however, it would not encounter this swing of awareness from the concrete to the abstract dimension of the self because it lacks consciousness and the ability to experience it.  Humans, on the other hand, may also lose this ability through the globalization of bionic technology if access to consciousness becomes undermined either technologically or through a dissemination of simulacra/cyborgs that subverts the collective consciousness of society as a whole.

            Pris thinks that “The real Lincoln exists in my mind,” but Louis says, “You don’t believe that.  What do you mean by saying that?  You mean you have the idea in your mind” (66, original emphasis).  The fact that Pris identifies with the simulacrum suggests that because of her psychosis she may have lost a degree of knowledge of the human physiology and the unified field of natural law, to the extent that she herself has the computational qualities of a cyborg.  My argument here is simply that humans, through the globalization of simulacra/cyborgs, may become distanced from pure consciousness as the void of conceptions.  Pris has ideas about Lincoln, but as Louis intuits she seems to be losing contact with the noncomputational aspect of consciousness.  When Louis asks Pris if she’s proud of having contributed to the construction of Lincoln, she replies, “I know what I’ll feel.  Greater despair than ever” (69), which implies that the relation between humans and simulacra has little positive effect.  Indeed, even Lincoln is a problem.  Louis notices that when it first takes notice of itself, Pris and Maury, it experiences fear: “It was fear as absolute existence: the basis of its life.  It had become separate, yanked away from some fusion that we could not experience—at least, not now.  Maybe once we all had lain quietly in that fusion.  For us, the rupturing was long past; for the Lincoln it had just now occurred—was now taking place” (72-73).

Although Dick seems to suggest that Lincoln has a sense of being out of its natural context, Louis intuits again that this fear is not really fear so much as an absolute dread, resulting in an apathy that stems from the defamiliarization of its surrounding world.  Louis tells Pris a story about finding a baby bird fallen out of its nest: when he went to pick it up and return it to the nest, the bird opened its beak, wanting to be fed.  Louis concludes that “there’s benevolence and kindness and mutual love and selfless assistance in nature” (69), which Lincoln does not experience given that it is no longer part of nature or of its historical context.  When Lincoln groans, Maury asks what it said, but Bundy explains, “Hell, it’s a voice-tape but it’s running through the transport backwards” (74) due to an error in the wiring, another indication of the problems the globalization of cyborgs will bring to human civilization.

Pris notes something ominous and sad about the construction of Stanton and Lincoln, “something upsetting to all of us, that was just too much for us to handle” (85).  Pris and Louis then discuss Pris’s cutting observations of other people and whether she believes them or whether they are just off-hand remarks derived from her mental instability.  Louis then expresses his love for Pris, an indication that he himself has a tendency toward mental instability.  “You’ve had a fascinating history.  Schizoid by ten, compulsive-obsessive neurotic by thirteen, full-blown schizophrenic by seventeen and a ward of the Federal Government, now halfway cured and back among human beings again but still—”, he breaks off reciting her lurid history and then admits, “I’ll tell you the truth.  I’m in love with you” (85).  Pris senses that Louis fears her and says, “If you could conquer your fear you could win a woman; not me but some woman” (86).  Pris then says, “We’re like gods in what we’ve done, this task of ours, this great labor” (ibid.).  But when they introduce Lincoln to Barrows, the discussion becomes complicated.  Barrows says he can tell that Lincoln is not human or animal: “An animal has biological heritage and makeup which you lack.  You’ve got valves and wires and switches.  You’re a machine. Like a—[. . .] Spinning jenny.  Like a steam engine” (108).  Lincoln then replies, “Then you, sir, are a machine. For you have a Creator, too.  And, like ‘these fellows,’ He made you in His image.  I believe Spinoza, the great Hebrew scholar, held that opinion regarding animals; that they were clever machines.  The critical thing, I think, is the soul.  A machine can do anything a man can—you’ll agree to that.  But it doesn’t have a soul,” to which Barrows replies, “There is no soul. That’s pap” (108).

Lincoln then concludes that without a soul a machine is the same as an animal, which in turn is the same as a man.  But Barrows still claims that a machine is made out of wires and tubes, like Lincoln, whereas a human is not.  Nevertheless, Lincoln has touched on the possibility that humans may become machines if, through the globalization of simulacra, they become cyborgs themselves and lose contact with the unified field of natural law.  This begins to happen to Louis when he starts losing his mind and is sent to the Kasanin Clinic like Pris.  While undergoing artificial medical treatment, which offers him little benefit, unlike the Ayur-Vedic treatment that takes a holistic approach to health, Louis begins fantasizing about Pris, imagining her presence and having illusory conversations with her.  At the end of the novel, Louis finally gets released from the Kasanin Clinic, but Pris tells him, “I lied to you, Louis.  I’m not up for release; I’m much too sick.  I have to stay here a long time more, maybe forever.  I’m sorry I told you I was getting out.  Forgive me” (246).  When Louis leaves the Clinic, he sees Pris weaving virgin black sheep’s wool, with no thought of him or the simulacra.  Her mind now resembles that of Stanton and Lincoln, cut off from the experience of pure consciousness.  Louis feels he has everything but Pris, including the possibility of someday having a glimpse of the true nature of his inner self.

The mind with its conceptuality is no more than the content of consciousness.  As John Searle says, “Nowadays most functionalists would say that mental states are ‘information-processing’ states of a computer.  According to the extreme version of computer functionalism, which I have baptized ‘Strong Artificial Intelligence,’ or ‘Strong AI,’ the brain is a computer and the mind is a computer program implemented in the brain” (1997, 142).  But Searle concludes that the mind and consciousness are two separate entities: “consciousness has a first-person or subjective ontology and so cannot be reduced to anything that has a third-person or objective ontology.  If you try to reduce or eliminate one in favor of the other you leave something out.  What I mean by saying that consciousness has a first-person ontology is this: biological brains have a remarkable biological capacity to produce experiences, and these experiences only exist when they are felt by some human or animal agent” (1997, 212).  Given that consciousness can only exist when it is experienced as such, we can argue that Stanton and Lincoln, unlike humans, do not have the self-consciousness of the void of conceptions, and that humans who interact excessively with simulacra may end up losing this self-consciousness, reduced to the mind alone, the content of consciousness.

Dick’s next novel discussed here, Do Androids Dream of Electric Sheep?, takes place after World War Terminus in 1992 has created a radioactive cloud of dust across the globe.  This novel reconfirms the risk of simulacra, or in this case androids, which have now evolved far beyond the simulacra of Stanton and Lincoln and pose a greater risk to the human race, even after the war reduced many humans to “specials,” people rendered near zombies by radiation and not allowed to leave the planet as normal humans and androids are.  Cities are now underpopulated, and many people own animals, considered a status symbol and a sign of empathy, whether the animals are real or electric.  Many androids, such as Roy Baty, resent the fact that they have not been granted the status of humans, mainly because as cyborgs they lack the ability to experience the intersubjective space of pure consciousness.  This inability renders them hostile to humans because they lack empathy for those responsible for their limited abilities.  All androids have a life span limited to four years and have been designed to serve as slaves for biological humans.  As a result, they have started to revolt against their non-human condition in the hope of expanding their life span, and they decide to kill humans who won’t help them, especially those responsible for creating them.  The underlying ambiguity of the novel for bounty hunters like Rick Deckard is whether or not androids have consciousness, which of course they don’t.  Katherine Hayles, as we have seen, argues that theoretically androids and machines in general can achieve the status of humans.  Defining the living in terms of the capacity to embody information, Hayles argues that because cybernetic systems can enhance our information prowess, becoming cyborgs can greatly enhance humans.  Hayles also argues that brain and computer, like cognition and metaphor, can encompass the world’s complexity more efficiently.  As Daniel Dinello says,

Tortured by the absolute certainty of suffering, growing old, and dying, the mostly white, affluent, male prophets of perfectibility put their faith in technology to save humanity by transubstantiating the organic body.  At the transhuman stage—a temporary step on the way to a new posthuman species—human bodies will become synthetic.  Life will be prolonged and enhanced through cyborgization—body-improving prosthetic technology that will replace deteriorating body parts.  ‘We are on the path to changing our genome in profound ways,’ says MIT Professor of Computer Science and Engineering Rodney Brooks.  ‘The distinction between us and robots is going to disappear.’  In fact, many have already become cyborgs—machine-organic fusions [. . .]  Soon we will have new hearts and brains.  ‘In the end, we will find ways to replace every part of the body and brain, and thus repair all the defects that make our lives so brief,’ says techno-priest and artificial intelligence pioneer Marvin Minsky. (2005, 19)

Obviously, this transubstantiation of the organic body is what androids and humans in Do Androids Dream? seek.  Many scientists see cyborgs as opening the possibility of seeing technology differently, as something that not only has its own agency but also its own generative, reproductive possibilities.  As R. L. Rutsky notes,

The position of human beings in relation to this techno-cultural unconscious cannot [. . .] be that of the analyst (or theorist) who, standing outside this space, presumes to know or control it.  It must instead be a relation of connection to, of interaction with, that which has been seen as ‘other,’ including the unsettling processes of techno-culture itself.  To accept this relation is to let go of part of what it has meant to be human, to be a human subject, and to allow ourselves to change, to mutate, to become alien, cyborg, posthuman.  This mutant, posthuman status is not a matter of armoring the body, adding robotic prostheses, or technologically transferring consciousness from the body; it is not, in other words, a matter of fortifying the boundaries of the subject, of securing identity as a fixed entity.  It is rather a matter of unsecuring the subject, of acknowledging the relations of mutational processes that constitute it.  A posthuman subject position would, in other words, acknowledge the otherness that is part of us. (1999, 21)

The metaphorizing and cognizing capacity of the conscious mind, however, constitutes only half the equation of being human, with the other half including emotion, judgment, intuition and free will.  Even androids in science fiction are limited to an intersubjective space, if they have one, confined to agreement and objective properties.  Androids in Dick’s novel are posthuman speculative metaphors described by theorists such as Hayles and Haraway.  What these theorists fail to recognize is that ethics, free will and consciousness go beyond physical laws. 

In Dick’s novel the female androids are either schizoid women who function like machines but come across as human, like Pris Stratton, or more empathetic dark-haired girls like Rachael Rosen, with whom Rick Deckard falls in love.  Hayles argues that this confusion gives Do Androids Dream? “its extraordinary depth and complexity.  The capacity of an android for empathy, warmth, and humane judgment throws into ironic relief the schizoid woman’s incapacity for feeling.  If even an android can weep for others and mourn the loss of comrades, how much more unsympathetic are unfeeling humans?  The android is not so much a fixed symbol, then, as a signifier that enacts as well as connotes the schizoid, splitting into the two opposed and mutually exclusive subject positions of the human and the not-human” (1999, 162).  This interpretation, however, elides the fact that fictional androids have only the limited empathy given to them by their human author, which means they cannot surprise the reader or author by expressing empathy “out of character,” as it were.  Whereas an android behaves in a manner predetermined by physical laws, a fictional human character can appear endowed with all the elements of interiority.  By defining the authentic human in terms of the opposition “inside”/“outside,” Hayles understands freedom as the capacity to get “outside” the commodity boundaries encapsulating a technological artifact (162).  If we define androids, however, in terms of the cognitive aspect of an isolated subjectivity, then they would fail to get outside their private selves or their commodity encapsulation.  Similarly, androids would fail to communicate outside of language use by means of trans-linguistic, intersubjective expressions. 

All freedom remains within boundaries even though relative degrees of material freedom and interiority exist.  An individual can feel ultimate freedom only in Atman/Brahman beyond individuality, in pure consciousness or the void of conceptions.  The highest level of subjectivity thus encompasses freedom, which androids cannot achieve because they have no ability to transcend their material condition.  Subjectivity constitutes a non-physical presence, whether individual or communal, suggesting a pure witnessing awareness beyond phenomenal properties or qualia, a transpersonal “I am” not attainable by androids.  Although Dick may intend his replicants to simulate the capacity for trans-physical, intersubjective experience, he would have difficulty doing this because replicants have only physical causes.  Perhaps Dick experiences a conflict within himself when he tries to create not-human characters that he would like to appear as possessing self-awareness.  As the reader knows, however, an android’s self-awareness will remain speculative rather than real.  Although Daniel Dennett’s theory of heterophenomenology uses a third-person approach in assessing the consciousness of other people, or their introspective awareness, what it really does is merely assess the data of the mind, or the content of consciousness.  As Christian Beenfeldt notes in “A Philosophical Critique of Heterophenomenology,” “In accordance with his overarching materialistic philosophical project, Dennett tactically insists that the issue always be put in terms of verbal reports about such supposed data [of the mind].  One should not lightly grant the assumption that the basic issue is human reports about self-awareness, as against the phenomenon of self-awareness itself” (14).  Beenfeldt points out, therefore, that heterophenomenology only focuses on the sensory perception of physical objects, namely verbal reports on the content of consciousness, not consciousness itself.  
Dennett’s theory therefore fails to provide access to consciousness itself, which is why, in an assessment of the mental data of an android and a human, the two would appear to be the same: consciousness itself remains inaccessible to heterophenomenology. 

Although Dick can depict the qualities of any character and even suggest an ineffable self beyond qualities, the desire of Roy, Pris and Rachael to attain the status of a human only extends to the outer qualities of life, for the inner qualities are aligned with a sense of morality and responsibility, which they lack.  Although Dick’s androids may at times seem more evolved than immoral humans, they remain deficient in morality themselves, lacking the signs of a strong interiority.  They do not surprise the reader with responsible behavior toward humans.  In the mutual attraction between Rick Deckard and Rachael, her attraction stems largely from fear for her own survival and lacks the interiority of romantic love.  If androids had the capacity for pure experience, for a preinterpretive, preprogrammed stage of perception, they might have been able to surprise the reader with their behavior.  But even the apparent experience of interiority that androids seem to have remains an illusion.  As Beenfeldt says, “Notice, then, Dennett’s double standard: he is pushing a highly skeptical agenda when it comes to self-awareness, but he is perfectly happy to accept much less skepticism when it comes to knowledge of the external world” (15).  Androids are human constructs whose interiority is a fabrication, not a genuine inner experience witnessed by pure consciousness or the internal observer, which they lack.

Androids can also be approached through the zombie problem and what Owen Flanagan calls “conscious inessentialism,” a dominant theory in the philosophy of mind.  Conscious inessentialism is defined as “the view that for any mental activity M performed in any cognitive domain D, even if we do M with conscious accompaniments, M can in principle be done without these conscious accompaniments” (Flanagan 1991, 309).  Conscious inessentialism, as the name implies, entails that being conscious is not necessary for any given behavior.  Any insentient or nonconscious being like an android can appear to function as if it had the ability to think.  If conscious inessentialism is true, then zombies could exist even outside of science fiction.  Zombiehood, or cognition without conscious awareness, may on the basis of physical presence still allow for intersubjective agreement.  Through third-person observation, however, scientific empiricism can determine physical presence but not intersubjective agreement.  One reason science accepts conscious inessentialism is that it has no way to determine the difference between androids and humans.  The only way to see the difference would be to participate in a transinterpretive intersubjective space, a translinguistic experience accessible only to humans with the freedom to witness thoughts on the basis of pure awareness.

In “Conversations with Zombies,” Todd Moody distinguishes between zombies and humans, arguing that we can spot the “mark of zombiehood” “not at the level of individuals but at the level of speech communities” (1994, 197), which constitutes a level of intersubjectivity.  In distinguishing between our English language and the hypothetical language of zombie-English, he argues that understanding English exceeds a computer’s ability to “produce passable answers to questions” (197).  He points out that “the word ‘understand’ in English refers not only to what sorts of performances a person is capable of, given certain inputs and outputs, but also to a particular kind of conscious experience” (197).  Zombies, therefore, lack the qualitative feel or “something it is like” to understand English.  The nonverbal feel of what it’s like to communicate emerges in the intersubjective space between reader and author, the cultural domain that includes the physical but simultaneously transcends it, which zombies cannot achieve.

Zombies, like the androids in Do Androids Dream?, lack the ability to see internally.  If zombies watched an SF film like The Terminator, with a readout at the bottom of the screen indicating distance to target, its shape, the velocity of the bullet, and so on, the readout would not make sense to them as it does to an ordinary human, for, as Moody says, “the idea of ‘internally seen’ readouts has no zombie analogue or purpose” (1994, 198).  This phenomenon, therefore, again implies that androids lack access to the unified field of natural law, and thus when they interact with humans they degrade the collective consciousness of human society.  Any device in a film or novel that indicates internal seeing would make sense to Dick’s androids or the Terminator only in the limited space of intersubjective agreement on a physical level.  Androids and simulacra can therefore only interpret exterior signs by engaging in an intersubjectivity that is really an interobjectivity.  As we have seen, moreover, even interobjectivity should be treated with skepticism, for although heterophenomenology claims to be neutral, even Dennett says that nothing is perfectly neutral. 

As Beenfeldt notes, heterophenomenology is therefore “unsuccessful” (25).  He asserts that “if we conclude that heterophenomenology is not a valid method for investigating human consciousness, what method should we then employ?  Dennett, of course, wants to say that the only alternative to heterophenomenology is a kind of gullible auto-phenomenology, inherently at odds with any attempt to approach the explanandum in a systematic, methodological and scientific manner” (28).  Autophenomenology, however, provides the only way to experience pure awareness for humans, for as we have seen a third-person methodology provides no direct insight into consciousness.  Beenfeldt concludes that “The human senses have been relied upon every day for life or death purposes, let alone for much, much later scientific experimentation, since the dawn of time.  Surely, then, a comprehensive microphysical understanding of some real fact, such as gravity, photosynthesis, telescopic vision, sensory perception—or for that matter, our powers of introspective self-awareness—is not a prerequisite for approaching that fact in a scientific manner.  On the contrary, it is the scientific approach to those yet-to-be-understood phenomena that eventually made the deeper subsequent understanding possible” (29).

While Moody notes that zombies could talk about concepts like understanding, they would not be able to “originate these exact concepts as they are played out in philosophical discourse and imaginative idea-play, as in science fiction” (1994, 199).  Although zombies could talk about internal seeing or dreaming, which may not require consciousness, “the emergence of those concepts in a language community does” require consciousness; Moody therefore concludes that “at the level of culture, conscious inessentialism is false” (199, Moody’s emphasis).  This claim, moreover, is qualified by the distinction Christian de Quincey proposes between the weak intersubjective space shared by zombies and the nonphysical space of ontological participation shared by humans.  Although direct intersubjective engagement will include linguistic exchanges, it will not be accomplished by these alone.  It will only be achieved by the “accompanying interior-to-interior participatory presence” beyond cultural constructivism, beyond, in other words, the physical domain (188, de Quincey’s emphasis).  The question of Dick’s novel, Do Androids Dream of Electric Sheep?, has to be answered “yes and no.”  Zombies or androids may experience communicable empirical content, but they would lack a sense of what it is like to have that content.

Although Dick’s novel forbids human-android sex, Rick Deckard wants to sleep with Rachael and then retire Pris and the other androids.  He wants to kill Rachael after sleeping with her but finds that he can’t.  She warns him that because he now feels empathy for androids, he may end up getting “retired” himself.  She says, “You realize what this means, don’t you?  It means I was right; you won’t be able to retire any more androids; it won’t just be me, it’ll be the Batys and Stratton, too” (1968, 177).  Although Rick at first has doubts about his empathy, he ends up retiring the other andies and then uses his bounty money to buy a goat.  In revenge for his having killed her friends, the other andies, Rachael takes aim at Rick’s object of affection and pushes his goat off the roof.  By combining human and not-human attributes, or what Hayles calls the inclinations of both the passionate dark-haired girl and the calculating schizoid woman, Rachael penalizes Rick in an attempt to escape being an android and become an authentic human, but her efforts fail.  Rachael and Rick ultimately belong to two different orders of reality, the constructed simulacrum and the human with access to the unified field of natural law, the former predetermined and predictable and the latter consciously co-creative.  As a corporate commodity for humans, Rachael is limited by the computational constraints of a bionic technology becoming increasingly globalized, which cannot lead to an experience of pure consciousness.  Even though Rick finds Rachael attractive and alluring, she remains confined by physical laws and cannot escape into the dominating world of humans.

Like Rachael, Rick also experiences the oscillation between passion and calculation, a shifting subject position that destabilizes Rick’s reality.  At the end of the novel, however, he returns to his wife with his consciousness intact.  As the novel suggests, the difference between Rachael and Rick entails an intersubjective space experienced by humans and an interobjective space experienced by androids.  As intelligent machines, as Jean Burns says, androids “encompass only determinism and randomness and the latter are not what is meant by volition” (1999, 44).  As Burns asserts:

Volition is an aspect of consciousness. [. . .]  It is not possible to know all the specific details governing a tornado, but we do not ascribe free will to it on that account.  So there is no way, conceptually, to trace the physical effects of volition back to presently known laws. (1999, 32)

Androids lack the basis of volition because they have no conscious awareness.  We can ascribe volition to them imaginatively, aesthetically speaking; however, they lack the free will of human characters.  The behavior of Rachael and Baty falls under physical laws subject to conceptual closure, but Rick’s behavior can take the reader beyond conceptual limits because it is open ended, with access to knowledge of the unified field of natural law.  While the behavior of androids shows a longing for physical preservation, Rick’s behavior shows a desire to contact others beyond physical and conceptual boundaries.  Even the chickenhead J. R. Isidore tries to connect with Pris, showing that human behavior may appear contradictory and inexplicable, while the androids simply want to overcome their planned obsolescence. 

            Unlike androids, therefore, humans have the ability to transcend the expressive dimension of the novel and glimpse the transconceptual, transpersonal state of consciousness.  This glimpse or taste (rasa in the Advaitan tradition) constitutes the strongest form of intersubjectivity.  Because androids lack the capacity for internal seeing, they cannot glimpse the interior-to-interior connection that underlies the co-creative space of ethical values and volition in humans.  This inability to connect with free will and ethical values renders androids an increasing threat to humans as they become more prevalent through a globalized bionic technology, which someday could seriously diminish the intersubjective sublime in humans. 

            Another sign that Dick’s androids are not human derives from their inability to communicate with Mercer, the figure who established the moral system of Mercerism that emerges when a person grips the handles of the empathy box.  In Do Androids Dream?, humans have the emotional capacity to fuse with Mercer through an interior-to-interior connection, an intersubjective space, which androids as not-humans lack.  Roy Baty at one point feels vindicated when Buster Friendly, the host of a radio talk show, seems to expose Mercer as a fraud: “Wilbur Mercer is not human, does not in fact exist. [. . .]  Mercer is a swindle!” (184, Dick’s emphasis).  The ambiguity of whether Mercer is human or android intensifies when Mercer admits to Isidore he’s a fake, yet also says, “nothing has changed.  Because you’re still here and I’m still here” (189), a situation that co-creates a nonphysical presence.  If Mercer were an android, then why does he save Rick Deckard in his battle with Pris Stratton: “Manifested himself and offered aid.  She—it—would have gotten me, he said to himself, except for the fact that Mercer warned me” (196).  Although Mercer tells Rick that killing androids is wrong but also necessary, Rick says, “Mercer is not a fake. [. . .]  Unless reality is fake” (207).  In the novel, Mercer creates an ambiguity between human and not-human, self and other, by merging their qualities, which leads Deckard to say, “I’ve become an unnatural self” (204), a statement that for some readers suggests he could be an android himself.  He knows that “For Mercer everything is easy . . . because he accepts everything.  Nothing is alien to him” (204).  Like all humans, however, Rick embraces both consciousness and computation.  Mercer can’t be an android if he fuses with humans through an empathy box based on an intersubjectivity independent of exterior tokens.  As David Chalmers puts it, “I will argue that consciousness escapes the net of reductive explanation.  No explanation given wholly in physical terms can ever account for the emergence of conscious experience” (1996, 93).  He continues that “to explain consciousness, the features and laws of physical theory are not enough.  For a theory of consciousness, new fundamental features and laws are needed” (1996, 127).

            Buster Friendly on his radio talk show challenges Mercer’s ulterior motive to exploit humans through Mercerism, asking, “What is it that Mercerism does?  Well, if we’re to believe its many practitioners, the experience fuses men and women throughout the Sol System into a single entity.  But an entity which is manageable by the so-called telepathic voice of ‘Mercer.’  Mark that.  An ambitious politically minded would-be Hitler could—” (184-85).  Although there could be potential for abuse, Mercerism offers the possibility of deliverance from cognitive boundaries that prevent fusion, and as we know Mercer saves Deckard’s life.  By embracing the fake and the real, Do Androids Dream? rejects the logic of either/or on Mercerism.  The fact that androids fail to experience fusion with Mercer implies that a physical subject without the experience of interiority cannot generate the internal sight necessary for true intersubjectivity.  Although the interiority of phenomenal consciousness is “the intrinsic qualitative essence of physical entities themselves, as panpsychists have always suggested” (Uus 1999, 49), Deckard and Baty, humans and androids, still differ.  Even granted that androids and everything else in the universe are consciousness, only authentic humans can experience consciousness self-reflexively.

            According to Hayles, the human interface with cybernetic mechanisms does not indicate the end of humanity but only “the end of a certain conception of the human” (1999, 286).  She also argues that “we can no longer simply assume that consciousness guarantees the existence of the self.  In this sense, the posthuman subject is also a postconscious subject” (1999, 280), which implies that androids are not only posthuman but also postconscious.  She also suggests we replace Jacques Derrida’s presence/absence dialectic with the cybernetic pattern/randomness dialectic formulated by Gregory Bateson and others.  Derrida defines presence as signifying Logos, God, consciousness and teleology, a metaphysical origin that makes meaning and reality stable and coherent.  Derrida attempts to destabilize the presence/absence hierarchy by deconstructing the metaphysics of presence and thereby subverting the plenitude of coherent meaning.  Although Derrida claims that the absence of an originary presence makes thought undecidable, his premise is driven by a misunderstanding of consciousness, which he substitutes with the freeplay of the signifier.  The content of the mind replaces pure consciousness as a void of conceptions, and the individual is consigned to a linguistic field with no access to the translinguistic.  But as Chalmers notes, “Our grounds for belief in consciousness derive solely from our own experience of it” (1996, 101).

            By substituting pattern/randomness for the poststructuralist presence/absence, Hayles departs from Derrida and replaces the teleology of a known end with a trajectory of pattern/randomness that remains open ended.  Randomness, marked by contingency and unpredictability, renders meaning possible by expanding it beyond given boundaries.  For Hayles, electrical engineering involves a randomness that “has increasingly been seen to play a fruitful role in the evolution of complex systems” (1999, 286).  The models she refers to all see randomness not as an absence of pattern but as “the ground from which pattern can emerge” (286).  As a means of producing a complex system such as an android, randomness has the potential to expand beyond the box in which androids are contained to a larger system of unknown complexity.  This move for Hayles toward greater unboundedness through pattern/randomness is evolutionary and becomes a surrogate presence or consciousness at the basis of volition and ethics.  As Burns and Uus argue, however, no presently known physical laws, including randomness, can account for consciousness, free will or ethics.  Like all material forces, therefore, pattern/randomness can produce only simulacra.  Chalmers argues that “if consciousness is not logically supervenient on the physical, then materialism is false.  The failure of logical supervenience implies that some positive fact about our world does not hold in a physically identical world, so that it is a further fact over and above the physical facts” (124). 

            Hayles denies that conscious agency is the essence of human identity, considering it rather an illusion.  She believes that humanists who seek mastery over the environment are never in control, and that the best outcome would be a partnership between humans and machines for networking the complexities of the external world.  But the information-rich environment nevertheless suffers from a paucity of human attention, which Hayles falsely believes can be compensated for by intelligent computers that do things like sort out the glut of emails (1999, 287).  These machines, however, do not improve the capacity for interiority necessary for judgment or understanding.  In fact, they would weaken subjectivity, compromise ethics, and replace intersubjectivity with interobjectivity.  Hayles denies Charles Ostman’s fear that surrendering decision-making to a computational ecology would imply surrendering human judgment to machines, because in her view judgment and agency have never been controlling factors.  As we have seen, however, volition and judgment are originated only by humans, not androids, as in the creation of art.  That these effects cannot be explained by randomness or computers does not make them illusory.  As Beenfeldt notes, we can study the mental without resorting to heterophenomenology: “This is not to concede that self-awareness is some obscure phenomenon direly in need of scientific validation; it is, on the contrary, something that is most familiar to us all.  The point is simply to argue that science and introspective self-awareness by no means are at odds” (30, original emphasis).  Similarly, Chalmers argues that “Consciousness is at the very center of our epistemic universe, and our access to it is not perceptually mediated.  The reasons for expecting a materialist account of external phenomena therefore break down in the case of consciousness, and any induction from these phenomena will be shaky at best” (1996, 169).

            Daniel Dinello concludes his book Technophobia! by showing the negative side of bionic technology in science fiction.  He argues that

At its most pessimistic, science fiction depicts humans as the victims of a ubiquitous, oppressive technological force.  Despair, cynicism, and fatalistic thought often rationalize capitulation to the apparent inevitability of technological expansion.  This ensures that the future will be as horrific as it looks.  But the realization of powerful repression often provokes an equally powerful response that shifts the dynamic.  Science fiction does more than simply reflect cultural despair and technophobia—it wakes us up to a technological world order whose rule is supported by cyborg weapons, corporate greed, macho militarist posturing, governmental warmongering, and techno-religious propaganda.  Opposing fatalism and surrender to the status quo, science fiction often argues for a progressive political agenda, urging us to ask questions and confront the ideology of technototalitarianism.  At its best, science fiction projects a dark vision of the Technologist’s posthuman future that encourages us to create a better one.  (2005, 275)

Dinello has the vision to see that posthuman bionic technology will not enhance humanity but rather, as this book argues, will undermine human nature and lead to a cultural situation where cyborgs may rule the world and ultimately destroy the human species.  R. L. Rutsky has a similar vision at the end of his book when he claims that

      Yet, as both Haraway and Gibson have suggested, the realm of techno-culture is also a science-fictional realm, where small changes can generate profound and unpredictable mutations in the future.  Indeed, the processes of the techno-cultural unconscious are the processes through which the future emerges.  In such a realm, however, the future need not be simply ‘human,’ need not be predicated solely on the ‘utopian’ politics of human enlightenment and empowerment; other futures are possible, imaginable.

      To imagine our relations to the techno-cultural unconscious is to imagine our relations both to ‘others’ and to these ‘other’ futures.  These ‘other’ futures cannot be represented through rational analysis and prediction; they can only be imagined through a science-fictional process—an imaginative, aesthetic process that is similar to the ‘bringing forth’ that Heidegger saw in the Greek technē.  In imagining, as Butler, Haraway, and Gibson do, our own virtual gods and goddesses, our own alien and cyborg myths, to represent our relationship to technology, to the techno-cultural other, we are already participating in the ongoing process and movement, in the science-fictional mutations and evolutions, in the high technē, through which other futures emerge, are brought to life, ‘brought forth.’ (1999, 158, original emphasis)

Both Dinello and Rutsky, therefore, see the effects of technological globalization as a potential threat to human culture.  As Manfred B. Steger puts it in Globalization, “Cultural globalization has contributed to a remarkable shift in people’s consciousness.  In fact, it appears that the old structures of modernity are slowly giving way to a new ‘postmodern’ [or posthuman] framework characterized by a less stable sense of identity and knowledge” (2003, 75).  As people become cyborgs through globalization, many aspects of human consciousness will change for the worse.  Steger also notes that “Reinforced on a daily basis, these persistent experiences of global interdependence gradually change people’s individual and collective identities, and thus dramatically impact the way they act in the world” (12).  Charles Taylor in Sources of the Self explains that “Prudence constantly advises us to scale down our hopes and circumscribe our vision.  But we deceive ourselves if we pretend that nothing is denied thereby of our humanity” (1989, 520). 

            Countering the globalization of bionic technology, John Fagan argues that a more natural approach to human nature provides greater benefit than genetic engineering.  He asserts that a “knowledge-based approach provides knowledge of the unified field of natural law, knowledge of physiology, and knowledge of the relationship between these two.  From this comes applied knowledge of a whole range of technologies that awakens nature’s inner intelligence within the mind and body, thereby enlivening the body’s own healing powers” (1995, 99).  Ken Wilber and Jonathan Shear also support this perspective.  As Wilber puts it, “This ‘subject permanence’ is a constant state of witnessing carried unbroken through waking, dream, and deep sleep states, a constancy which, I entirely agree, is prerequisite and mandatory to full realization of nondual Suchness (and a constancy which, if you have experienced it, is unmistakable, self-referential, postrepresentational, nondual, self-validating, self-existing, and self-liberating)” (1998, 236).  No android or cyborg can have the experience of witnessing based on nondual Suchness, an experience of the void of conceptions or pure consciousness which the globalization of bionic technology is in the process of undermining.  Fagan emphasizes that the inadequacies of a globalized technological culture can be traced to our current state of collective consciousness, which is restricted to surface levels and prevents thinking from becoming comprehensive and coherent enough to keep culture from entering a destructive, anti-humanist technological phase. 

Works Cited

Beenfeldt, Christian, 2008, “A Philosophical Critique of Heterophenomenology,” Journal of Consciousness Studies 15:8: 5-34.

Burns, Jean E., 1999, “Volition and Physical Laws,” Journal of Consciousness Studies 6:10: 27-47.

Chalmers, David J., 1996, The Conscious Mind: In Search of a Fundamental Theory, New York and Oxford: Oxford University Press.

Dick, Philip K., 1968, Do Androids Dream of Electric Sheep?, New York: Ballantine Books.

---, 1994, We Can Build You, New York: Vintage Books.

Dinello, Daniel, 2005, Technophobia! Science Fiction Visions of Posthuman Technology, Austin: University of Texas Press.

Fagan, John, 1995, Genetic Engineering: The Hazards; Vedic Engineering: The Solutions, Fairfield, IA: MIU Press.

Flanagan, Owen, 1991, The Science of the Mind, Cambridge, MA: MIT Press.

Haraway, Donna, 1991, Simians, Cyborgs, and Women: The Reinvention of Nature, London: Free Association Books.

Hayles, N. Katherine, 1999, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, Chicago and London: University of Chicago Press.

Maharishi Mahesh Yogi, 1972, Phonology of Creation, Part 2, La Antilla, Spain: Videotaped Lecture, 26 December.

Moody, Todd, 1994, “Conversations with Zombies,” Journal of Consciousness Studies 1:2: 196-200.

Rutsky, R. L., 1999, High Technē: Art and Technology from the Machine Aesthetic to the Posthuman, Minneapolis and London: University of Minnesota Press.

Searle, John R., 1997, The Mystery of Consciousness, London: Granta Books; US and Canada: New York Review of Books.

Shear, Jonathan, 1990, The Inner Dimension: Philosophy and the Experience of Consciousness, New York: Peter Lang.

Steger, Manfred B., 2003, Globalization: A Very Short Introduction, Oxford: Oxford University Press.

Taylor, Charles, 1989, Sources of the Self: The Making of the Modern Identity, Cambridge, MA: Harvard University Press.

Wilber, Ken, 1998, The Eye of Spirit: An Integral Vision for a World Gone Slightly Mad, Boston and London: Shambhala.

Zawidzki, Tadeusz, 2007, Dennett, Oxford: Oneworld.

Zohar, Danah, 1991, The Quantum Self, London: Flamingo.