Cyborgization continues - "brain pacemakers"
Wednesday, May 28, 2008
Apparently, good work is being done using "deep brain stimulation" to help some seriously depressed and obsessive-compulsive people.
I know that my own depression is not the same as the kinds of depression this device works on - severe depression is in the realm of utter madness, often including delusions and hallucinations, and is utterly crippling. It is ghastly. So, knowing that, even I can't help but feel some real joy at the notion that work is being done that can successfully treat depression as a purely material problem. (Because it is, of course. The "mind/body problem" is only a problem if you think there's a difference between your mind and your body. Which there isn't. "Mind/body problem" makes as much sense as "heart/body problem". Your mind is part of your body, located primarily in your brain. Duh.)
Of course, what particularly fascinates me about this is that it's a mechanical, electronic solution. You stick some wires in your head and, zap, 4 out of 6 severely ill patients feel better.
It makes me wonder when someone will say to themselves, "Self, if it works for them, might it not work for me? Oh, I know I don't suffer from depression, but if it helps my mood, makes me happier and more productive, and allows me to more fully express the person I want to be, why not?" When that happens, I'll be particularly fascinated. (People already do this with mood medication, taking anti-depressants as "mood brighteners". I should note that I'm mostly for this. What's wrong with people being in good moods?)
But we're all on the verge of becoming literal cyborgs. I find this very fascinating and await it with bated breath.
For the most part, I will add, I don't think that this will lead to some dystopian future in which people are modified or drugged into becoming mindless drones of the state. There are two reasons for this:
1. In the end, humans will prove more useful if they are allowed to pursue their interests wherever they lead. Since machines are taking over all our physical and even some of our intellectual labor, creative labor is about all we've got left as we begin this new period. Enslaving people through these techniques would be a disaster, good for no one.
2. The techniques themselves will lead those who use them to benevolent conclusions. I have long wondered how much of human civilization has been the result of literal madness - how many lawmakers have been mentally ill, and how has that affected our society? I think quite a bit. (I also think that, since the average life expectancy in early civilizations was about 18, civilization was created by teenage boys - and it shows.) I believe that clarity of thought makes it intellectually and emotionally difficult for tyrants to be tyrants. The cruelty and stupidity of what they are doing will be clear, not only to themselves but to others, because of people's intellectual clarity.
6 comments:
I think that in the near future (the next 100 years) there will be three types of people: humans who stay that way for a variety of reasons, cyborgs, and AIs/robots.
It'll be an interesting society if the different groups can manage to live together....
I roughly agree. I don't know if it'll be 100 or 200 or 50 years, but it's coming.
I suspect that it'll be pretty easy for them to live together because at least the cyborgs and AIs will be very clever, hehe.
I think that the cyborgs will be the largest group, with their allegiance going in both directions. AIs & cyborgs will (probably) be much smarter than humans - but I don't think that alone will make them better 'people'. I guess I'm too into the Terminator franchise to be totally cool with that much metal walking around.
I don't know that there'll be clear boundaries, really.
The thing is, tho', I think that they WILL be better people, overall. They will be more, well, rational. With that clarity of vision and thought they'll be able to better weigh decisions and, thus, make better decisions.
The only possible problem with this is one of initial intent - what will the hard-programmed assumptions be? Or, in other words, would an AI created by fundie Christians work as well as an AI created by rational humanists? Will the biases of humans be magnified in the post-human world? I don't think so, but it is something to think about, hehe.
"The thing is, tho', I think that they WILL be better people, overall. They will be more, well, rational. With that clarity of vision and thought they'll be able to better weigh decisions and, thus, make better decisions".
Oh, they'll definitely be smarter than Human 1.0 and will be more informed. They'll *probably* be more rational but I do wonder if they'll make 'better' decisions because of that - more reasonable, yes, but better.....? I'm not so sure even if I'm a *huge* fan of reason (unfortunately I'm also a confirmed sceptic).
"would an AI created by fundie Christians work as well as an AI created by rational humanists?"
I expect that a fundie AI would either self-destruct (like in Star Trek) or *very* quickly see the error in its programming & immediately reject the whole religion thing.... at least we can but hope that it would!
More reasonable is better, hehe.
You don't like reason compared to, well, what? Do we even have an alternative? ;)
But it would be my sincere hope that a fundamentalist AI would show so many usability bugs (I doubt there'd be a Star Trek-ish self-destruction, alas) that no one, not even fundies, would want to use the damn thing.
But with AIs, you might well be able to construct one so that it *has no choice* about religion. It MUST obey certain precepts. Hey, they'd love to be able to do that to their kids, so why not their AIs?
One of the unintended side effects will be, I think, to display their own hypocrisy, hehe. The AI will be a much better fundamentalist than they could ever hope to be, and they'll learn what happens when you tell a computer that "everything in the Bible/Koran/Rg Veda is literally true". It'll really act according to those principles.
But my sincerest hope is that the AI would go, "What horrible programming did you bog me down with?!"