Re-Writing AI In Science Fiction

I don't think they need to re-write anything. From this video, it seems he believes AI will evolve to become conscious of the world around it.

This is usually what happens in films that involve AI.
 
The development of artificial consciousness is being debated by sci-fi writers as something that cannot possibly happen.

They view AI as advanced adding machines with no sense of self-awareness that can only function within a set of rules.

The science of bionics (the marriage of biology and electronics) is listed in the Penguin Dictionary as a real branch of science. As the secrets of the human brain are unravelled, the magic of consciousness and cognitive thought will fade and lead to new branches of science in artificial intelligence.
 
Which sci-fi writers are these? Artificial consciousness has been one of the key elements in many sci-fi stories for decades. In fact I would tend to see it as something that's generally presented as almost inevitable by many sci-fi writers.
 
Just make sure you distinguish between "consciousness" and "humanity." In the realm of sci-fi, good things happen when machines achieve the latter; bad things happen when they achieve only the former.
 
I disagree, at least in a lot of the sci-fi I've read. Maybe more so in mainstream cinematic sci-fi, but even then I think that's too simplistic a description.

Take 'Terminator' as a mainstream example. The machines achieve "consciousness" and bad things happen. In T2, one of the machines begins to achieve some "humanity" and... well, not exactly good things happen, but arguably better than if it hadn't happened.

So that would seem to support your argument, except - the machines were built by humans in the first place to be machines of war. The bad things that happen are a direct result of the type of humanity they were designed to model - and consciousness only changes things by letting them decide what to do rather than doing our direct bidding. They arguably achieve 'humanity' - just a side of humanity we'd rather pretend wasn't there.

And ultimately that's the thing about AI in sci-fi - it has absolutely nothing to do with the nuts-and-bolts reality of building conscious machines. It doesn't matter whether it's really possible or not. That's why it seemed odd to me that the OP would say sci-fi writers were debating it as something that can't happen - it's not the job of sci-fi writers to worry about whether something is achievable in real life. Their job is to imagine the possible outcomes if something did happen. And so AI in sci-fi is simply a tool, a mirror to allow us to step back and examine our own humanity from a different perspective.
 
Humanity has both good and bad traits.

Remember the tagline for Outland?

"Even in space man's greatest enemy is still man."

As the Terminator told John Connor in T2, "It's in your nature to destroy yourselves."

In The Day the Earth Stood Still, the people on the home planet of the aliens who visited Earth finally gave control of the military and police over to their machines, because the machines don't have human weaknesses such as jealousy, greed, hate, prejudice, gluttony, fear, and vanity.
 