Intelligence Singularity, Futurism, Some Rambling Thoughts
Sunday, November 4, 2007

A fair bit of my science-fiction work deals with, broadly, issues of futurism. I just got done reading A Deepness in the Sky, and I liked it tolerably well (in the long run, I suspect my biggest problem with Vinge will be his anarcho-capitalism; I sorta like it, though, because at least he's addressing one of my biggest pet peeves amongst sci-fi writers - that while they'll spend a hundred thousand words on technobabble, most can't be arsed to talk about what post-democratic governments look like . . . indeed, my own sci-fi settings have political systems that are actually backwards, stuff like monarchies; the problem is that I think anarcho-capitalism is so deeply and obviously stupid that no one should take it seriously, and, yet, many in the sff set do precisely that). The writer of the book, Vernor Vinge, is one of the proponents of a technological singularity.
It's an interesting bit of futurism (to the extent that it is futurism), but I think Vinge is doing what many futurists have done before him (and many will continue to do, no doubt including myself), which is attributing to the future the qualities of the present, merely in a more energetic form. So, in the 50s, when futurists (a term almost interchangeable with "science-fiction writer" at the time) discovered lasers, what they did was project that innovation onto existing technology - the future would have ray guns! Of course, what actually happened with lasers is nothing like that. We found, instead, that lasers were much more useful for sensing and information storage, things like that. Many of us use lasers every day! In CD players, video game consoles, DVD players, blah, blah, blah. Lasers as ray guns? Still waiting on that one, and we're likely to be waiting quite a while.
So, the idea behind an intelligence singularity is that some day a computer will be so powerful it'll be smarter in every meaningful way than any biological human. Since it is more intelligent by definition, it'll be able to make even more intelligent machines than we can, and those machines will make even MORE intelligent machines, etc., etc. The scenarios range from Terminator-like doomsdays (except the computers win, because by then they will be, by definition, more intelligent than humans) to Utopian fantasies where super-intelligent computers see to our every need.
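(An aside: that feedback loop is easy to caricature in a few lines of code. Here's a toy sketch in Python - the design_gain knob and all the numbers are pure invention on my part, not anything the singularity folks have formally specified - but it shows the shape of the argument: because each machine is a better designer than the last one, capability doesn't just grow, it accelerates.)

    # Toy model of the "intelligence explosion" feedback loop.
    # All numbers here are invented; the point is the shape of the curve.

    def next_generation(capability: float, design_gain: float = 0.1) -> float:
        """Each machine designs a successor; the better the designer,
        the bigger the improvement it can make to the next machine."""
        return capability * (1 + design_gain * capability)

    capability = 1.0  # call roughly human-level design ability "1.0"
    for generation in range(10):
        print(f"generation {generation}: capability {capability:.2f}")
        capability = next_generation(capability)

Run it and the jump between generations gets bigger every time - that's the whole argument in miniature. Whether reality supplies anything like a constant design_gain is, of course, the trillion-dollar question.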
I think that's basically doing what folks in the Fifties did with lasers, but with computers instead.
I mean, I'll point out the first flaw right away - the assumption that human intelligence is itself singular. Well, human intelligence is now, and always has been, a network. A technological singularity won't start until a technological system is more intelligent than the then-existing human intelligence network. It is still, of course, an open question whether we're smart enough to make seed AI in the first place. But, those little caveats aside, I suspect we are.
But, y'know, I think the idea of an intelligence singularity is a very crude sort of futurism. As with lasers, or computers, or radio, what ends up happening will likely be pretty . . . different from what we imagine. For instance, and I think this is non-trivial, human intelligence already exists in a network, as I said. Right here, right now, I'm actually using that network. The network started before we were even human - what animal species does not have some form of communications network? Artificial intelligence will merely be adding onto a system of which we're already a part - and it'll be designed to do that. The approach the intelligence-singularity people are taking seems to me analogous to the laser-ray-gun situation: assuming the future will be like the present, just faster and with more energy. We are thinking that the goal of AI is to have computers . . . engage in human-style thought, just do it much better than we do.
I doubt that'll be the case. Technological development goes into weird places. No ray guns, but dig that shiny new HD-DVD player. I suspect it'll be that way with AI. Rather than just doing what modern intelligence does, only better, I suspect AI will go off on interesting tangents, and I've said the main one: it will be used to augment the existing human information network in novel and interesting ways. (Furthermore, AIs will pick up the biases of their creators. A fundamentalist religious AI might not be the first AI ever made, but it's likely to be the second, with stuff like literal interpretations of religious works built into its architecture and low-level programming. AI might well be constrained by the irrationality of human belief systems . . . like I said, tho', it's an open question whether we're smart enough to pull this off!) I don't think that proposing the machines of the future will simply do what machines do today, only better and faster, is very useful, because it is likely to be wrong.
On the other hand, we do need to talk, I think, about futurism. This is a current and quite awful problem. Right now, for instance, the lack of intelligent futurism is creating a global climate change event. Since almost no one bothered, at the early stages of the industrial revolution or, really, very seriously at all until recently, to talk about the climatic changes inevitably wrought by world-wide industrialization, we're now on the verge of a very nasty, possibly Black Death-like problem. If people had been interested in accurately forecasting the changes wrought by industrialization and the like, global industry would look much different than it does. The same should be true of any new technology! But it isn't. So, with biological engineering, virtually every corn plant in the world is now genetically altered, as pollen from genetically modified plants spreads to unmodified plants. Great going. We have permanently modified the DNA of corn. That was not, I should add, the plan, but it's what happened, and even though some people did try to stop it, it went ahead anyway. And there are some fairly intense technologies that could arrive in the next couple of decades - AI, radically improved genetic engineering, and the unlimited promise of nanotechnology.
Of course, some people are working on this, like the Singularity Institute for Artificial Intelligence. The trick here is that we need to start listening. But more than that, I am calling for more ingenuity in talking about futurism. I think we need to go beyond the idea that the future will be like the present with more horsepower! We need to learn the hazards (particularly) of a given technology before we unleash it on the world. We need to ask ourselves questions like "what will seed AI mean to us?" Not just in the sense that they might become our computer overlords (benevolent or not), but also in the sense of, say, what will a post-labor world look like? What happens when machine intelligences destroy all human labor value?
I am, of course, optimistic about these things. I think our computer overlords will be benevolent. Indeed, because I suspect that they'll be part of the information network we already possess, I suspect we're just going to merge with them. We'll think what they're doing is so spiffy that we'll want to do it, too, and we'll suss out a way to do it! I don't think it'll be the computer overlords caring for us like we were children, or controlling us like we were cattle, or destroying us like we were rabid dogs - I think they will be us.
I also think that we should - and this is certainly what organizations like the Singularity Institute for Artificial Intelligence are involved in - take steps to guide, thoughtfully and with some care, where we want our technology to go. We do not, after all, want to create a computer tyrant by accident! More than just trying to predict the future, I am calling on people to understand that we are also making the future. And if we want human beings to have a meaningful say in that future, we will have to make a future where they have a meaningful say. Which is, I think, something that we don't really want to address, but it is perhaps the biggest social problem facing the developed world: advances in travel, communication and automation are showing more and more clearly that there is less and less meaningful work to be done. In the US, for instance, three in four people work in some service industry. It won't take a seed AI to replace most of those workers. (I suspect that most of them could be efficiently replaced right now, if we had a mind to do it - the problem would be an engineering one, not a theoretical or technological one.) But rather than the future filled with hypercompetent people that one sees in, say, most science-fiction literature, what we are instead creating is a huge body of poorly trained servants. Not precisely a shiny future. But, like with global warming, this is happening because no one is bothering to seriously consider the consequences of technological development, or to actively guide human development toward a world where we all are, in fact, the hypercompetent future people seen in sci-fi novels.
So, as it developed, this post is two-fold. First, I'm calling on futurists to think beyond the concept of the future as "the present, just harder and faster"; and second, I'm calling on people to take futurism seriously - because we need to do it! Because we have not done it, we're on the verge of a huge labor problem, a huge energy problem and a huge environmental problem. We really need to start planning for the future now, and we need to be aware of the extent to which we actually get to decide what kind of future we will have.
1 comment:
The whole point of the Technological Singularity is that it will not be like the present -- it will be such a total discontinuity in the human development process that humans as we exist today cannot imagine what will follow, except perhaps in very general terms. Hence the term "Singularity".
I'm not familiar with Vinge's work, but Kurzweil speaks of the Singularity as resulting from the fusion of machine and human intelligence -- really incorporating the former into the latter, increasing the latter's capabilities trillions of times over. The superior artificial intelligences of the future won't be our overlords. They'll be us.