Long before I started studying computer programming, I was fascinated by the subject of artificial intelligence. I had been enjoying science fiction novels since I was about ten years old (having exhausted the resources in the children’s section of the local library on Greek mythology, which had been my previous interest). The issue of non-human intelligence comes up frequently in sci-fi, whether it is highly intelligent animals (such as dolphins or other primates), aliens (which may or may not be carbon-based life forms), or the silicon-based “intelligence” of computers.
One of my favorite sci-fi authors for several years had been Robert Heinlein (until I read I Will Fear No Evil and decided it belonged in the trash rather than on the shelf of the English classroom in the Christian school where I taught Spanish, then tried to read Stranger in a Strange Land and didn’t even finish it). One of my favorites had been The Moon is a Harsh Mistress, and while I remember virtually nothing of the lunar colony revolt against rule by Earth, I remember the narrator’s relationship with a computer that had become self-aware.
I don’t know what scientist or science fiction writer first speculated that there was some threshold in the number of connections within a computer past which it would become self-aware, but it seems to have become a common idea. In this novel, when the number of “neuristors” in the HOLMES IV computer exceeds the number of neurons in the human brain, it becomes self-aware. Mannie, the technician who works on it, calls it Mike.
At the end of the book, Mike has been damaged during an attack. The computer continues to operate, but Mike’s personality is gone. Mannie grieves for the loss of a friend, and I found myself also grieving. (This convinced me there was something wrong with me. I had cried when the horse died at the end of Marguerite Henry’s Black Gold, and even when this fictional computer personality died, but I couldn’t remember crying when any real human beings died.)
Other self-aware computers in sci-fi include HAL in 2001: A Space Odyssey, and many robots in Isaac Asimov’s short stories and novels. Bicentennial Man is all about a robot that wants to be acknowledged as a human. I remember one episode of Star Trek: The Next Generation devoted to whether Data was a sentient being with rights.
Despite these entertaining stories and the significant ethical and philosophical issues they raise, my gut feeling has always been that they belonged quite rightly to the realm of fiction. Of course, one has to remember that much of what is reality today (travel to the moon, virtual reality, even experiments with machines that can be controlled by the brain) was once thought nothing more than speculative fiction.
With these thoughts in mind, I was eager to read Roger Penrose’s The Emperor’s New Mind when it came out. I read it during lunchtime at work, wading through discussions of quantum mechanics that were far more challenging than anything I had to do in managing the company’s hardware and software. I finally stopped trying to understand all the math and just read for the conclusions that were drawn. The philosophy and science discussed in the book were interesting, but in the end I wasn’t sure whether Penrose had really convinced me of anything. As best as I could tell, he did not think true artificial intelligence could be created – but I wasn’t sure on exactly what grounds.
Since then, I have been much more interested in human intelligence – specifically, in child development, as I became a mother. When my younger son was deemed to have mild autism, I naturally became particularly interested in all the issues related to that neurological disorder. This was practical, and had significant real-world applications that mattered to someone close to me. Artificial intelligence was for people with far more time and interest than I had for speculative research.
Still, I retain enough interest in the subject to eagerly read an article on futurity.org, “Why computers crash but we don’t.” Researchers found that the control networks in both the E. coli bacterium and the Linux operating system are arranged in hierarchies. But while Linux (typical of computer systems) has many top-level routines controlling a few generic functions at the bottom, E. coli has a limited number of regulatory genes at the top controlling a broad base of specialized functions at the bottom.
Since I understand how computers work better than I understand biology, I’m not sure quite how that works for the bacteria. But I do understand that it is the broad base of specialized functions that makes even a simple organism like E. coli more “robust” than a computer system. In computer terms, and perhaps in biology terms, “robust” means the ability to keep on functioning in spite of some errors. A robust system doesn’t crash, it just produces an error message (or a pain message, for our bodies) and keeps on with the task at hand. It may even be able to quarantine the problem area.
It’s not easy to produce robust software, partly because you have to think of all the things that could possibly go wrong (user error, hardware failure, interference from another program), and partly because that takes time, time means money, and there is never enough of either to do the programming as thoroughly as one might like. It’s also not the “fun” part of programming: it isn’t making the system do anything new, it’s just keeping the system from failing to do what it should.
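In code, that kind of robustness mostly amounts to anticipating failures and containing them rather than letting them take the whole system down. Here is a minimal sketch (the processing function and the inputs are hypothetical, just for illustration): a bad item produces an error message and gets quarantined, and the run continues.

```python
# A robust batch processor: a failure on one item produces an error
# message and quarantines that item, instead of crashing the whole run.

def process_item(item):
    # Hypothetical work that can fail on bad input
    # (division by zero, wrong type, etc.).
    return 100 / item

def robust_run(items):
    results = []
    quarantined = []
    for item in items:
        try:
            results.append(process_item(item))
        except Exception as err:
            # Report the error and keep on with the task at hand.
            print(f"error on {item!r}: {err}")
            quarantined.append(item)
    return results, quarantined

results, quarantined = robust_run([1, 2, 0, "four", 5])
print(results)       # the work that succeeded
print(quarantined)   # problem items set aside for later inspection
```

Note that the loop never stops on an error; like the bacterium, it degrades gracefully instead of “dying” at the first fault.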
Organisms don’t have that drawback. Either you believe that they were designed by God to work the way they do, and He is never at a loss for time or money (and doesn’t need either to come up with a perfect design), or you believe they were designed by evolution, tiny step by tiny step, with only the good designs surviving. Either way, they have a great deal of redundancy, so that one function can fail and another can adapt to make up for the missing one. Organisms are incredibly good at self-healing. Eventually they do “crash” (i.e. die), but not nearly as often as most computer systems do.
Thinking about that comparison between brains and computers, I looked for more articles on the subject. A great deal has been written about it, but I found one particularly fascinating article in The New Atlantis. I haven’t even finished reading it yet, as I decided I wanted to write this post. But it does a very good job of explaining the basics of how computer logic works, in its simplest terms. Read this, and you’ll see why computers are considered very fast but very stupid.
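The point about computers being fast but stupid can be made concrete by showing how little is actually happening at the bottom: everything, even arithmetic, is built out of a few trivially simple logic operations repeated at enormous speed. This sketch (my own illustration, not an example from the article) builds an adder out of nothing but AND, OR, and XOR on single bits:

```python
# All computer arithmetic reduces to simple logic on bits.
# A half adder sums two bits; chaining full adders sums whole numbers.

def half_adder(a, b):
    # XOR gives the sum bit, AND gives the carry bit.
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2          # sum bit, carry out

def add_bits(x_bits, y_bits):
    # Add two numbers given as lists of bits, least significant first.
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 (binary 011) + 5 (binary 101), least significant bit first:
print(add_bits([1, 1, 0], [1, 0, 1]))   # [0, 0, 0, 1], i.e. 8
```

Each gate is as “stupid” as a light switch; the intelligence we see is just billions of these per second. That, as far as I can tell, is the whole trick.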
Perhaps someday there will be artificial intelligence (though I still doubt it). But it won’t be based on the kind of computers or computer programming that we know today.