Saturday, November 10, 2007
NaNoWriMo and me!
J. M. F. Grant has something of a rant about NaNoWriMo. So I figured I'd weigh in on the phenom that is National Novel Writing Month.
For those of you who don't know, and don't care to read their site, NaNoWriMo is a largely online event that gets people from all over the world - but mostly America, hehe - to spend a month writing a novel (for these purposes defined as any prose text of 50,000 words or longer, which is more likely a novella than a novel, but, hey, what the hell, right?). What NaNoWriMo attempts to get people to do is write. I first read this in an essay by Ray Bradbury, and as I've gone on I've increasingly realized its truth: an author's greatest barrier is their own self-criticism. Many people don't think they're good enough to write a novel, or publish one, or whatever, so they never give it an attempt.
So, in that vein, I'm a fairly big supporter of the concept of NaNoWriMo. But, to be honest, when you read their statement of intent it's . . . pretty awful. They say stuff like, "Valuing enthusiasm and perseverance over painstaking craft . . . It's all about quantity, not quality." Ouch. If that doesn't sound sufficiently like inveighing against "painstaking craft", here's another little bit: "To be able to mock real novelists who dawdle on and on, taking far longer than 30 days to produce their work."
So, that's the context of Grant's rant. He has, apparently, published several books (he did not say whether they were novels), and he seems offended that there's a program that gets people to do what they do at the expense of what he does - treating planning and craft as somehow a . . . barrier to writing a novel, instead of a way to write a better novel. And I have some sympathy for this. I also fall fairly strongly into the painstaking-craft department. Both Condotierri and Simon Peter are about BIG, IMPORTANT things, and I want to say a variety of things and use the medium of a novel to do that. In some ways it's difficult to even say how long it took to write them, because they both arose from years of thought about social, technological and theological issues. In many ways, they are the culmination of everything I've learned since I was ten years old and wrote my first story, and I'm guessing my next novel will also be all that . . . with the greater knowledge and experience I've gained writing my first two novels.
One of the other things that NaNoWriMo says - and this is . . . this is terrible, because it's very much a lie - is that writing the first draft is "the hard part". In some ways, for me at least, finishing Condotierri was the hardest thing I'd ever done. I was nearing the end and I realized that, right here, right now, I had to make the previous 100,000 words make sense. All loose ends must be tied off, all conflicts resolved; it's got to end. In other ways, that was also immensely satisfying. As opposed to rewriting and editing? I'd rather end a novel every day of the week than edit, hehe. Editing isn't hard, mostly, but it is deeply dull. It's easy to stay energetic about the creativity of the first draft. But the second time through? Well, that's satisfying, because you can read it and go, "Yeah, this came out pretty cool." The third time? It's just boring. The fifth time? You want to die.
But even that isn't the hardest thing. Because, after that, there's seeking publication - where you beg people you don't know to deign to publish your book, using standards designed to weed out 999 in 1,000 manuscripts. The odds are wickedly stacked against you. You have no clear way of knowing what they're looking for on any given day (or even whether they're looking for anything at all), and no way of knowing what magical combination of words will catch the eye of a publisher. (Indeed, the mood of the publisher, agent, editor, etc., is probably a bigger factor than anything you can write; whether they liked their lunch or not probably matters more than the objective quality of your work once it has reached the stage of being polished.) And then there's the deep and abiding pleasure of getting the FOAD letters, those politely, and distantly, worded rejections of your life's work.
Compared to all that? Writing the first draft is easy. And fun. It's all creative. After that, it becomes a slog through work you're already over-familiar with, and then the walk of shame that is seeking publication. The way NaNoWriMo gets around this is by . . . just not mentioning any of it! They tell people they're done once the first draft is written, which is like saying the food is ready once the reaping is done. There's still a long way to go between "first draft" and "completed work".
So, while I understand why someone might be bothered by the cavalier attitude NaNoWriMo evinces toward painstaking craft in writing, it's pretty clear that in the first case their tongue is in their cheek, and in the second they're trying to get people to write. Where I have trouble with the project is that they encourage people to stop immediately after the first draft, but nevertheless encourage people who are successful to call themselves novelists. Which is akin to running a mile and calling yourself a marathoner. A first draft is a necessary condition for being a novelist, but it is far from a sufficient one. There's definitely more to it! And NaNoWriMo wholly ignores those other things, handwaving them away as mere devotion to craft, foolishness that other writers indulge in.
So, I guess I'm somewhat ambivalent about NaNoWriMo, hehe. But that's the extent of my thoughts on it.
Posted by Unknown at 12:03 PM 14 comments
Labels: nanowrimo, publishing, writing
Thursday, November 8, 2007
Manna and Robo-Burger
Something sort of weird happened. In the comments of this post, one of the people who fairly regularly posts here - and I deeply appreciate his posts, because he often disagrees with me, but does so intelligently and respectfully, which is the perfect kind of disagreement for me! - said this: "I've seen other articles and books on the topic - including a novel/screed by a futurist that posited automation happening first in a fast-food joint and spreading globally at the speed of light. The title escapes me but it's on the tip of my tongue."
The weird thing is, I thought he was talking about me. Several years ago, I wrote a short story called Robo-Burger. You can go read it! It posits that automation starts in fast food and spreads rapidly, creating widespread social upheaval.
In truth, what Brian was talking about was the online novella Manna. (Calling it a novel is an exaggeration; it's eight short chapters.) Its first part is about how automation starts in fast food and spreads rapidly, creating widespread social upheaval; its second part is utopian, showing the same technology used benevolently. Oh, there are lots of differences, but the similarities were sufficient to make me check whether the author, Marshall Brain, could have plagiarized me. (The answer is "no". I actually wrote Robo-Burger after he wrote Manna - I wrote it after learning that the US Army was deploying robot soldiers in Iraq back in 2005, while I was still struggling with science-fiction writing prior to Condotierri. I had considered Robo-Burger a failure as a story, but you decide. Re-reading it, I liked it reasonably well.)
Now what's interesting to me - and I find this utterly fascinating - is that we both decided that a labor crisis would follow in the wake of fast-food automation! I'm really geeking out on that! I have been thinking all evening about why we might both come up with such similar ideas. Fast food is symbolic of low-wage, low-prestige work. Fast food is nigh universal in our society. Fast food would be (and will be) reasonably easy to automate, as it consists largely of a number of simple tasks requiring no creativity or innovation. And, mostly, really - who hasn't had the experience of having their order come out wrong after dealing with a rude employee in a filthy restaurant? But you're on a schedule, on your own pathetically short lunch break, and it's either eat what's in the damn bag or not eat at all. Who hasn't dreamt of fast-food restaurants always being clean and the service always being accurate and friendly?
But I've never before been so immediately part of thinking the same thoughts that someone else was thinking at roughly the same time! What I am wondering, now, is whether fast-food chain CEOs are thinking these thoughts about total automation of their restaurants. The problems, even at this point, would be largely engineering problems; the technology already exists.
Posted by Unknown at 2:37 PM 0 comments
Labels: futurism, science fiction, stories, writing
Wednesday, November 7, 2007
Tricks to make books long
Now, an unpublished or barely published novelist is expected to write books that are around 100,000 words long. Once you're successful, however, you're expected to write quarter-million-word tomes. So, recently, I've been thinking about how a book gets from a hundred thousand words to three times that length. What I've ended up identifying are really just two tricks authors use to make a book longer - and I feel (tho' I suspect many will disagree) that these tricks add virtually nothing to the actual narrative in most cases, save length. (Some people will think that length, in a good book, is always or almost always desirable - that if you like the writing, a long book provides more "book value" than a short one. I disagree. That's like saying a novel is "better" than a short story. It isn't. Short stories can have a precision and elegance that a novel can't really possess - but I think most of the people reading this agree with that already.) I should also say that, as with all things artistic, presentation is everything. These tricks can be done well, and they can be important to the story, but largely, I feel, they are not. Anyway, here are the tricks:
Multiple point-of-view characters. Why write a book about one person or event when you can write a book about two or three! Like I said, this is largely a trick. You're functionally writing two or more stories that have only tangentially anything to do with each other, and that only become unified somewhere down the road - often quite a bit farther down the road. The reason this is a trick, and rarely makes for good narration, is that it . . . really spoils suspense! Because you are going back and forth, you almost always know everything there is to know! It also nearly completely destroys foreshadowing - because you already know. It's an easy way to make a novel longer while making it less good, diminishing suspense and often rendering foreshadowing irrelevant.
Flashbacks. Put some terrible secret or other back story on your protagonists and you've got good chapter fodder. If you give all your POV characters elaborate back stories, you can do this again and again. This is a trick because, mostly, the audience just doesn't need that many details about past events. Sure, the character might be a super-badass commando ninja cyborg - but going back for a lengthy origin story is usually irrelevant to the plot.
Like I said, these can be done well. But I think what has largely happened is that they've become simply a way to write longer books, without much consideration of what they do to the overall structure of the narrative - how changing POV distorts other techniques a writer might use, like foreshadowing and suspense, and how flashbacks are often simply irrelevant to the pacing of the narrative. Both are, well, filler. Or, perhaps more accurately, both are employed as filler in a great number of cases.
Still, I'm curious what people think.
Posted by Unknown at 11:59 PM 3 comments
Labels: fantasy, publishing, science fiction, writing
Monday, November 5, 2007
Robot Drivers, the US Army and . . . You?
In most ways, Forbes is a horrible magazine, whose sole purpose is to make rich people feel good about being rich. It's pretty sad, really. But this story about the DARPA robotic vehicle contest I found interesting in a couple of ways.
First, it said that the US DOD wants a third of its fleet of vehicles automated by 2015. That's eight years away, folks. I don't know if they'll get a full third of them automated, but I'm sure the technology for automated vehicles will be pretty robust by then. Indeed, I suspect robot drivers will be better than human drivers by then.
I went and looked at a number of other articles on the subject. Most of them, like the Forbes article, are of the "ooooh, robots are cool" school of journalism. And, well, yeah, robots are cool. But not a single article mentioned any possibility of, say, social consequences. Such as: in industrialized countries, in the next ten years or so, the entire industry of long-distance driving might be completely wiped out. Once it is demonstrated that these vehicles are safer than human drivers . . . well, I'm pretty sure it's already the case that it'd be cost-effective to replace human drivers with machines. They're just not good enough yet - but they will be in the next several years.
There are around 1.3 million long-haul truckers in America. In twenty years, I'd be surprised if any of them still have jobs. Robots will just do what they do, better and cheaper than humans ever could. Many short-haul jobs will be totally automated, too.
Sometimes I feel like I'm talking science fiction when I say, "The biggest labor problem that industrialized countries are facing is that in the near future there will simply be no jobs." Already, three in four American jobs are service jobs, most of them low-end, near-minimum-wage jobs. I mean, screw artificial intelligence as the technological singularity. Maybe at some point AI will also make intellectual human labor wholly obsolete, but the day is nigh when most people will simply . . . not be needed for labor purposes. Just not needed. Most low-end service-industry jobs will just be done by machines. (Even for many interaction jobs, you won't need real artificial intelligence - just enough verbal skill to negotiate specific problems. Like tech support. When you're calling up to troubleshoot your TV, the robot doesn't need to be able to talk about the weather, just identify what the problem is and clearly tell you how to fix it. The automated menu systems that some tech-support lines have are a crude form of this, of course. So even jobs where direct communication is important can be automated without invoking some possibly mystic goal of artificial intelligence.)
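To make that concrete, here's a minimal sketch of the kind of scripted troubleshooting I mean - just a decision tree in Python, with entirely made-up questions and fixes, illustrating the idea rather than any real system:

```python
# A toy troubleshooting "robot": a scripted decision tree that walks a
# caller through diagnosing a TV. No general intelligence required --
# it just branches on yes/no answers until it reaches an instruction.
# The questions and fixes are invented for illustration.

TREE = {
    "question": "Is the TV's power light on?",
    "yes": {
        "question": "Do you see a picture on the screen?",
        "yes": "No fault found -- please hold for a human operator.",
        "no": "Check that the correct input source is selected.",
    },
    "no": "Check that the power cord is plugged in, then press the power button.",
}

def run_session(node):
    """Ask questions until we reach a leaf (a plain string instruction)."""
    while isinstance(node, dict):
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    print(node)

if __name__ == "__main__":
    run_session(TREE)
```

The point is how little machinery it takes: everything hard about the job lives in writing a good tree, not in anything resembling intelligence.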
Is there anyone beyond a couple of sci-fi writers and futurists even talking about this? Seriously. I want to know. Because this is not some hypothetical possible future problem, but something that's very much right around the corner - the US military wants 1 in 3 of its drivers automated away in eight years. But I can't think of a single politician who is even tangentially addressing what is likely to be the biggest labor problem of the 21st century: that most of us won't be needed.
Sunday, November 4, 2007
Intelligence Singularity, Futurism, Some Rambling Thoughts
A fair bit of my science-fiction work deals, broadly, with issues of futurism. I just got done reading A Deepness in the Sky, and I liked it tolerably well. (In the long run, I suspect my biggest problem with Vinge is his anarcho-capitalism. I sorta like that he addresses one of my biggest pet peeves amongst sci-fi writers - that while they'll spend a hundred thousand words on technobabble, most can't be arsed to talk about what post-democratic governments look like . . . indeed, my own sci-fi settings have political systems that are actually backwards, stuff like monarchies. The problem is that I think anarcho-capitalism is so deeply and obviously stupid that I can't believe anyone takes it seriously - and yet many in the sff set do precisely that.) The writer of the book, Vernor Vinge, is one of the proponents of a technological singularity.
It's an interesting bit of futurism (to the extent it is futurism), but I think Vinge is doing what many futurists have done before (and many will no doubt continue to do, myself included), which is attributing to the future the qualities of the present, merely in a more energetic form. So, in the '50s, when futurists (a term almost identical with "science-fiction writer" at the time) discovered lasers, what they did was project that innovation onto existing technology - the future would have ray guns! Of course, what actually happened with lasers is nothing like that. We found, instead, that lasers were much more useful for sensors, information storage, things like that. Many of us use lasers every day - in CD players, video game consoles, DVD players, blah, blah, blah. Lasers as ray guns? Still waiting on that one, and we're likely to be waiting quite a while.
So, the idea behind an intelligence singularity is that some day a computer will be so powerful it'll be smarter, in every meaningful way, than any biological human. Since it is more intelligent by definition, it'll be able to make even more intelligent machines than we can, and those machines will make even MORE intelligent machines, etc., etc. The scenarios range from doomsday-esque, Terminator-like outcomes (except the computers win, because they will, by definition, be more intelligent than humans) to utopian fantasies where super-intelligent computers see to our every need.
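The shape of that argument is easy to see in a toy model. Here's a little Python sketch - the numbers and the growth rule are entirely arbitrary, an illustration of the claimed feedback loop rather than a prediction:

```python
# Toy model of recursive self-improvement: each generation of machine
# designs its successor, and the improvement it makes is assumed to be
# proportional to its own capability. All constants are arbitrary.

def recursive_improvement(capability=1.0, gain=0.1, generations=12):
    """Yield capability after each design generation."""
    for _ in range(generations):
        # A smarter designer makes a proportionally bigger improvement,
        # so growth compounds on itself and keeps accelerating.
        capability += gain * capability * capability
        yield capability

for gen, c in enumerate(recursive_improvement(), start=1):
    print(f"generation {gen:2d}: capability {c:8.2f}")
```

Run it and the curve bends ever more steeply upward - which is exactly the intuition the singularity folks are selling.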
But I think that whole line of reasoning is, basically, doing what folks in the Fifties did with lasers - just with computers instead.
I mean, I'll point out the first flaw right away: the assumption that human intelligence is singular. It isn't. Human intelligence is now, and always has been, a network. A technological singularity won't start until a technological system is more intelligent than the then-existing human intelligence network. It is still, of course, an open question whether we're smart enough to make seed AI in the first place. But, those little caveats aside, I suspect we are.
But, y'know, I think the idea of an intelligence singularity is a very crude sort of futurism. As with lasers, or computers, or radio, what ends up happening will likely be pretty . . . different from what we imagine. For instance - and I think this is non-trivial - as I said, human intelligence already exists in a network. Right here, right now, I'm actually using that network. The network started before we were even human; what animal species does not have some form of communications network? Artificial intelligence will merely be adding onto the system of which we're already a part - and it'll be designed to do that. The approach the intelligence-singularity people are taking seems to me analogous to the laser/ray-gun situation: assuming the future will be like the present, just faster and with more energy. We keep thinking the goal of AI is to have computers . . . engage in human-style thought, just doing it much better than we do.
I doubt that'll be the case. Technological development goes to weird places. No ray guns, but dig that shiny new HD-DVD player. I suspect it'll be that way with AI. Rather than just doing what modern intelligence does, only better, I suspect it'll go off on interesting tangents, and I've already said the main one: it will be used to augment the existing human information network in novel and interesting ways. (Furthermore, AIs will pick up the biases of their creators. A fundamentalist religious AI might not be the first AI ever made, but it's likely to be the second, with stuff like literal interpretations of religious works built into its architecture and low-level programming. AI might well be constrained by the irrationality of human belief systems . . . like I said, tho', it's an open question whether we're smart enough to pull this off!) Simply proposing that the machines of the future will do what machines do today, only better and faster, isn't very useful, because it is likely to be wrong.
On the other hand, we do need to talk about futurism, I think. This is a current and quite awful problem. Right now, for instance, the lack of intelligent futurism is creating a worldwide climate-change event. Since almost no one bothered, at the early stages of the industrial revolution - or, really, at all seriously until recently - to talk about the climatic changes inevitably wrought by worldwide industrialization, we're now on the verge of a very nasty, possibly Black Death-like problem. If people had been interested in accurately forecasting the changes wrought by industrialization and the like, global industry would look much different than it does. The same should be true of any new technology! But it isn't. So, with biological engineering, virtually every corn plant in the world is now genetically altered, as pollen from genetically modified plants spreads to unmodified ones. Great going. We have permanently modified the DNA of corn. That was not, I should add, the plan, but it's what happened - and even though some people did try to stop it, it went ahead anyway. And there are some fairly intense technologies that could arrive in the next couple of decades: AI, radically improved genetic engineering, and the unlimited promise of nanotechnology.
Of course, some people are working on it, like the Singularity Institute for Artificial Intelligence. The trick here is that we need to start listening. But more than that, I am calling for more ingenuity in how we talk about futurism. We need to go beyond the idea that the future will be like the present with more horsepower! We need to learn the hazards (particularly) of a given technology before we unleash it on the world. We need to ask ourselves questions like, "What will seed AI mean to us?" Not just in the sense that they might become our computer overlords (benevolent or not), but also in the sense of, say, what will a post-labor world look like? What happens when machine intelligences destroy all human labor value?
I am, of course, optimistic about these things. I think our computer overlords will be benevolent. Indeed, because I suspect they'll be part of the information network we already possess, I suspect we're just going to merge with them. We'll think what they're doing is so spiffy that we'll want to do it, too, and we'll suss out a way to do it! I don't think it'll be the computer overlords caring for us like we were children, or controlling us like we were cattle, or destroying us like we were rabid dogs - I think they will be us.
I also think that we should - and this is certainly what organizations like the Singularity Institute for Artificial Intelligence are involved in - take steps to guide, thoughtfully and with some care, where we want our technology to go. We do not, after all, want to create a computer tyrant by accident! More than just trying to predict the future, I am calling on people to understand that we are also making the future. And if we want human beings to have a meaningful say in that future, we will have to make a future in which they have a meaningful say. Which is, I think, something we don't really want to address, but it is perhaps the biggest social problem facing the developed world: advances in travel, communication and automation are showing, more and more clearly, that there is less and less meaningful work to be done. In the US, for instance, three in four people work in some service industry. It won't take a seed AI to replace most of those workers, either. (I suspect most of them could be efficiently replaced right now, if we had a mind to do it - the problem would be an engineering problem, not a theoretical or technological one.) But rather than the future filled with hypercompetent people that one sees in, say, most science-fiction literature, what we are instead creating is a huge body of poorly trained servants. Not precisely a shiny future. But, as with global warming, this is happening because no one is bothering to seriously consider the consequences of technological development, or to actively guide human development toward a world where we all are, in fact, the hypercompetent future people seen in sci-fi novels.
So, as it developed, this post is two-fold: first, to call on futurists to think beyond the concept of the future as "the present, just harder and faster"; and second, to call on people to take futurism seriously - because we need to do it! Because we have not done it, we're on the verge of a huge labor problem, a huge energy problem and a huge environmental problem. We really need to start planning for the future now, and we need to be aware of the extent to which we actually get to decide what kind of future we will have.
Posted by Unknown at 1:42 AM 1 comments
Labels: futurism, science, science fiction, technology, writing