AI, The Uncanny Valley & The World’s First Cyborg

Over the past few months the news has been awash with stories concerning the rapid development of AI. Just recently, in a study conducted by Professor Selmer Bringsjord, a robot passed a self-awareness test previously thought to be solvable only by humans – this alone stands as a landmark in the fields of robotics and AI.

Naturally, these developments have raised concerns over the future of AI and what it could fundamentally mean for humans, not only socially, but economically and politically. With the creation of artificial intelligence come unclear and ever-shifting ethical boundaries – when does AI cease to be machinery and instead become recognisably human? This is an interesting, but also frightening, question that in many ways is impossible to answer. Some will argue that AI will always remain AI, that even the most developed piece of technology will always lack something fundamental to the human spirit. Others strongly disagree, arguing that the point at which AI becomes self-aware and develops feelings is the point at which we must treat it differently – it has a mind, and thus it has rights. In other words, AI looks set to be an ethical and legal minefield in the future.

But before we get ahead of ourselves, what about those who are already using technology as an extension of themselves – chimeras, or rather, in this instance, cyborgs? How do we recognise these people? And why is our opinion of cyborgs so different from our opinion of AI?

Below is a video of Neil Harbisson, a self-declared cyborg who has a surgically implanted antenna that allows him to hear colour. Harbisson describes the antenna as a body part, and actively encourages others to extend their senses and knowledge by becoming cyborgs.

The way that Harbisson explains colour and sound is incredibly fascinating. He can create sound portraits, feel colours and detect the infra-red spectrum, which is invisible to the unaided human eye.

So why is it that, although more than a tad strange, Harbisson does not feel threatening? Perhaps it is because we feel we can appeal to his human nature – the 'machinery' is merely a part of him, but at his core is a human being who can be reasoned with, a person with whom we may have shared experience. Now compare this to AI. Like cyborgs, AIs are (or will be, in the future) both machine and human, yet suddenly they feel threatening, dangerous and something to be treated with caution. Why? It comes down to substance: cyborgs are genetically human and have adopted machine elements, whereas future AIs are fundamentally pieces of machinery that have adopted human elements. The worry will always be that this machinery is the dominant part of AI, and that it could override the human element at any point.

These concerns are also exacerbated by the aesthetic design of AI. Professor Masahiro Mori, a leading figure in robotics, first identified the concept of the 'uncanny valley' in 1970, though it was not given that name in English until Jasia Reichardt published Robots: Fact, Fiction, and Prediction in 1978. The 'uncanny valley' refers to the supposition that as a robot is made to look more human, the observer's response to it becomes increasingly positive until a point at which the observer begins to feel unnerved or repulsed by the robot's almost-human appearance. Then, as the robot becomes nearly indistinguishable from a human, the observer's response becomes increasingly positive once more.

A graph depicting the 'Uncanny Valley'
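For anyone who wants to see the shape of this curve for themselves, here is a minimal, purely schematic sketch in Python – the numbers are invented for illustration, and only the overall shape (a rise, a sharp dip near full human likeness, then a recovery) reflects the hypothesis described above.

# Purely schematic sketch of the 'uncanny valley' curve described above.
# All values are made up for plotting; only the shape is meaningful.
import numpy as np
import matplotlib.pyplot as plt

human_likeness = np.linspace(0, 100, 500)  # 0 = industrial robot, 100 = healthy human

# Affinity rises gradually with human likeness, dips sharply near (but not at)
# full human likeness -- the 'valley' -- then recovers as likeness becomes complete.
affinity = (
    0.9 * human_likeness / 100
    - 1.4 * np.exp(-((human_likeness - 85) ** 2) / 60)
)

plt.plot(human_likeness, affinity)
plt.axhline(0, color="grey", linewidth=0.5)
plt.xlabel("Human likeness (%)")
plt.ylabel("Observer affinity (arbitrary units)")
plt.title("Schematic 'uncanny valley'")
plt.show()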

This hypothesis is entirely plausible. Think about it: there is something unsettling about the Terminator and the robots of I, Robot, not to mention the 'synths' of Channel 4's Humans.

Whilst these characters are at this point fictional, they may well be the future in some form or another. So, should we be worried that one day AI might supersede humankind? It is a valid concern, and one that should not be taken lightly. As AI is developed, it’s not hard to see why some believe we are playing God with the creation of life. With this comes a lot of responsibility, and we might one day find ourselves accountable for developing something that cannot be so easily contained, if at all.

For the moment this is entirely hypothetical, and it is also a largely negative view – there is plenty that AI could do to benefit us. Used appropriately, AI has the potential to take on jobs that would otherwise be dangerous to humans, for example working with nuclear material or radioactive substances. One day it might even be of benefit to those with care needs, allowing people to remain independent with the help of AI around the house.

Ultimately, AI looks set to stay – humankind has always been drawn to industry, development and experimentation; the only difference is that now we might one day create something in our own likeness…

In other words, yes, AI will change the boundaries of what it means to be human, how the legal system works and society in general, but let's not write it off completely – it seems only fair that we approach AI's development with appropriate caution, but also with ambition and enthusiasm. This is the future we're talking about, after all!
