Consciousness, Literature and the Arts

 

Archive

 

Volume 15 Number 1, April 2014

___________________________________________________________________

Peter Swirski. From Literature to Biterature: Lem, Turing, Darwin, and Explorations in Computer Literature, Philosophy of Mind, and Cultural Evolution. Montreal: McGill-Queen’s UP, 2013. Hardcover, 252 pgs. £19.99. ISBN: 9780773542952.

 

Reviewed by

 

Gregory F. Tague

St. Francis College (N.Y.)

 

In a nutshell, Swirski’s book is fascinating. Erudite and sophisticated, elegantly written and witty, the book offers insight into the history and future of artificial intelligence. The book’s packed subtitle does not promise more than Swirski can deliver, and so eventually the reader is treated to an array of compelling information covering all of its subjects. As for biterary studies, the book will elucidate for the uninformed, for hard-core traditionalists, and for any remaining post-modernists not only that human culture is a product of evolution but also that literary arts might soon flow not from an author’s pen but from an adaptable computer chip. Essentially, Swirski’s book is about creativity and patterns in nature, in human nature, and in computing and artificial intelligence. Although Swirski pushes the envelope with a challenging discussion of whether or not computers can think, such daring discourse activates a high level of neuroplasticity in readers who heretofore might have been brain-dead to the concept.

 

Peter Swirski is professor of American literature and culture at the University of Missouri-St. Louis and the author of twelve previous books (and, according to his UMSL page, there are two more books forthcoming). Literature to Biterature is a handsome, well-constructed volume, and has numerous black-and-white photographs, Notes, Bibliography, and Index. The book consists of three parts and eleven chapters divided evenly for ease of reading, all organized in a satisfyingly cumulative way.

 

The thrust of the book is that computers will eventually be able to turn out high-quality stories. Biologists Marion Lamb and Eva Jablonka have written about evolution in different dimensions, Brian Boyd has written about the origin of story, and now Swirski suggests that computer intelligence is evolving so that there will be biterature, “a species of literature” (7). Dare this reviewer offer the nomenclature bitic selection? Some readers no doubt will be put off by Swirski’s argument when he asks, is “thinking in computers different from that in humans?” (8). But he is not joking, and although he is playful at times, he seems quite serious. While Swirski has no crystal ball to peer into the future, and in fact he cites examples of people who made artificial intelligence prognostications (Hans Moravec and Ray Kurzweil) only to scale them back later, he seems certain that some dimension of evolution will shape the design and function of computers. Swirski sees, in fact, an evolution of thought regarding natural selection itself, where theory about “self-organization and autocatalysis” (10) will enable a computer to integrate and develop its own software and hardware naturally. Critics will object to any analogy between that which is organic and that which is plastic; but already we see evidence of computer algorithms functioning, reacting, and improving independently.

 

From this platform Swirski delves into Stanislaw Lem and computorship, attempting to define a basis for authorship. If there’s a person behind a program, how can a computer be said to create a piece of writing? Yet in the early 1950s the Manchester Mark 1 computer associated with Alan Turing was able to write short love notes, which of course caused a controversy about creativity. (Lem and Turing are clearly the heroes of Swirski’s book.) Computers are quite able to manipulate human code, viz. Google Translate, but that is not a sufficiently creative act. Swirski points out that in the early 1980s Bill Chamberlain and Thomas Etter supposedly programmed a computer with a word synthesizer so that the machine could author original works, but there was more of the human hand involved in the process than of the computer (29). The problem, Swirski suggests, is that from a human evolutionary perspective we will certainly attempt to interpret any ambiguous scrap of information put in front of us, even if it is written by a computer. So the machine might not exactly be creative, yet. And there are writerly programs that operate on enormous chunks of data fed into the computer (e.g., Hemingway’s oeuvre), but these too do not amount to real creativity, which is, after all, “spontaneous” (34). Perhaps Swirski is being ironic: a real person might create spontaneously, but in the process of creating she will call forth, consciously or not, all the literary data she has read. Even with spontaneity there is still a question of worth. Will most people over time find what she has written worth reading again and again? Will a computer read and interpret differently? On 18 March 2014 the BBC reported that a computer generated a story for the L.A. Times about a California earthquake minutes after the occurrence. This reviewer read the story, and it merely states facts in a dry manner, making it worthy of the recycling bin once perused.

 

That’s where bitic selection now stands in terms of computorship. Contrary to what some have said concerning the inability of computers to surprise us with anything original, Swirski notes that we run programs precisely because they can tell us what we don’t already know. He cites instances of computer-generated artworks and musical scores in galleries and concert halls, well attended and appreciated by human beings. The question is: “How do criteria of originality affect the criteria of originator?” (41). Indeed, there are, as there have been for quite some time, hack writers who compose poorly, use unimaginative words and phrases, and simply turn out drivelling prose from a formula that repeats itself. Yet we accept such hacks as agents of creation but not a computer that works similarly, since we don’t believe computers can learn, let alone think. Surely the Holy Grail in writing a novel is, even more than literary style, the creation of sympathetic and enduring characters. How can a computer do that? Swirski hovers around the answer to the question without quite landing.

 

In the 1970s and early 1980s, Swirski recalls, there were programs (AM and then EURISKO) that attempted, with a little success, the computer’s ability to learn, which can be an instinctual response or a marking-over of inherent information. Learning is the capacity “to evaluate one’s own cognitive, conative, and affective states both at the ground level and at the meta level . . .” (47). Here Swirski is imagining a machine equipped with homeostasis, the means of adjusting physiology to maintain equilibrium. This is different from the early twentieth-century Vorticist movement, beyond mere machine dynamism. Impressive as such systems appear, Amazon and Netflix, Swirski says, do not think: based on data we input (e.g., book selections) the program learns what we like and so generates more suggestions. The accumulation of data is not equivalent to thinking, and he bemoans the fact that in spite of decades and billions of dollars of research we have not yet developed a computer capable of thinking. We have processors that only style data.

 

However, Swirski seems confident that at some point computers will be able “to reprogram and redesign themselves . . .” so that their own hardware will be subject to self-analysis and updating (51). Such would be a machine that could evolve, adapt, think, and create. With these capacities, so-called computer authors will become distinct literary writers and not, as now, mere helpmates to scholars. The drawback: “With so much art on tap, tomorrow’s axiological criticism will become as obsolescent as a carrier pigeon is today” (66). We are not there yet, and the type of computer art Swirski imagines would be superior to a watercolor by a chimpanzee. What are the implications for biterature and biterary studies that a book such as Swirski’s exists, that intelligent, informed people are having this conversation? We evaluate and judge works in the context of others similarly placed. Swirski hints that with computorship human understanding will be quashed, made obsolete. In other words, who are we to say what is good or bad biterature? In his typically amusing, but not condescending or commonplace way, Swirski notes that we have plenty of self-inflated literary garbage already.

 

There is an inherent human resistance to anything artificially created, since many people still cling to the notion of special creation and the notion of a soul. For any machine to think or create on its own, says Swirski, is in the eyes of most people an act of “godless audacity” (80), pretty much the accusation hurled at Darwin. At the same time, human thought has generated, just as one example, stories about statues coming to life. Perhaps this is why those in Darwinian studies would appreciate this book, for indeed Swirski tries, and succeeds, in breaking forms and not adhering to any hidebound codes, rules, or norms. He tries to do for the computer what Darwin did for the human: eradicate any mind/body duality. There will be no grand moment in computer literary creation, says Swirski, since it is an evolutionary process. And for this reviewer, Swirski pushes the envelope as far as it will go when he begins to talk about laws, legislation, and rights for computorship while we still fall far short of having established universal animal rights.

 

Much of Swirski’s book hinges on Alan Turing and the latter’s seminal question of whether or not a machine can be said to think. The basic Turing test follows this plan: an interrogator converses with a fellow human being and with a computer, all three separated and anonymous, and if the interrogator cannot distinguish the answers of the person from those of the machine, then the machine has passed the thinking test. A consequence of what Turing suggests is to have artificial intelligence engineers fashion computers as “social beings” in advance of subjecting them to the test, since in large part the elements of the test are about sociality (98).

 

Thinking is part of consciousness, a most difficult area for neuroscientists (e.g., Antonio Damasio), cognitive psychologists (e.g., Joshua Greene), and philosophers of mind (e.g., John Searle). Swirski says the standard objection to the Turing test by Searle, whom he calls “shrill” (111), is the absence of consciousness. Playfully tossing away the social brain hypothesis, Swirski posits that we don’t know whether or not other people are conscious anyway, but it helps us to believe so (101). Then there is the disability objection, which simply states that computers are not functioning persons; but to counter, Swirski reminds us that we all know people who are not functioning in any number of ways – are not friendly, cannot learn, or do not know how to use language properly. As against Searle’s strenuous objections to a thinking machine, no single part of a human brain thinks, per se, and yet the brain is constituted of so many of these parts. Thinking and understanding reside in a totality of neurons and synapses, in an “emergent property” (116).

 

Any complex system will exhibit an emergent state, which is to say such a system is self-organizing. Such behavior, though, is unpredictable (121), as it would need to be in order to adapt. In 1952 Turing refused to define thinking, since a machine could think and yet fail a Turing test (124). So even assuming that a computer can think, we have the added problem of mind reading or theory of mind, guessing intentions. Nevertheless, Swirski feels confident that, in time, computers will be integrated culturally and so would have the context to guess intentions. Right now, like Amazon and Netflix, Facebook makes (sometimes woefully) inadequate estimations about pages one might be curious to investigate. In human beings, of course, theory of mind is flawed and often inaccurate, though we utilize it continuously. Theory of mind is not only cultural in context but bodily, dependent on the expression and reading of emotions. How could a machine possess such biology? This is a difficult, and perhaps unfair, question that can be answered only with another question: “What will make . . . [a computer] want to want?” (145). Any answer has something to do with unpredictability, or what Darwin would call variation in competition that gets inherited.

 

At any rate, artificial intelligence is now moving toward the study of behavioral patterns with so-called zoobotics, robots built to test evolutionary theories about adaptation (162). Moving well beyond robotic nursing and therapy by computer (not now uncommon), MIT created a chip that “simulates how brain synapses adapt in response to new information” (163). The same year (2011) IBM unveiled a ten-hertz chip, operating at the same slow speed as the human brain, as part of a processor that would include over 250,000 “programmed synapses” and over 60,000 “learning synapses,” a stunning effort to reverse-engineer a brain (166).

 

Toward the end of his book Swirski explores robotic wars, DNA bombs specially made to target an individual, bacteria that could biocompute, “microbiotic armies” of “autonomous learning agents” (181), and micro-bots in the form of dust that can evaluate any given environment (182). In spite of such research and development, search-engine giant Google cannot pass the Turing test, since it does not know what the searcher wants and simply dumps out loads of data (199). But Swirski’s conclusion is that “the future belongs to artificial intelligence” (204), to thinking and brilliantly creative computers, although there have been and will continue to be evolutionary blips and glitches along the way. As Darwin says in chapter six of On the Origin of Species, Natura non facit saltum.