A.I. vs Humanity

I have been thinking a lot about artificial intelligence. It is an area of technology I have never been altogether comfortable with. I am a Trekkie, you see, so the following thoughts come to mind.

"We believe that when you create a machine to do the work of a man, you take something away from the man." Sojef, Star Trek Insurrection. 

I find that quote particularly relevant for A.I. writing assistants and mourn the loss of human creativity. Plus, as a die-hard Star Trek: The Next Generation (TNG) fan, I have spent many hours pondering the moral implications of A.I. (Let's tackle predictions from The Terminator later.)

For any non-Star Trek fans, indulge me. 

In the TNG episode The Measure of a Man, the android Commander Data is challenged to prove his sentience and secure his freedom, rather than being labelled the property of Starfleet and, as such, part of a labour class. In the show, Data is self-aware. He is the creation of a scientist seeking to reproduce by unconventional means. He was not created as a servant of any sort. Yet his status continually comes under scrutiny.

"Self-awareness is the ability to perceive oneself as a distinct entity with its own thoughts, emotions, and experiences." Akshat Tyagi.

While Star Trek is fictitious, we have a propensity to make good on our science fiction. So it is not inconceivable that, in the not-too-distant future, we will create a self-aware android.

Are we fully prepared for the consequences of creating a new species? 

Are we qualified or morally advanced enough to be trusted to do so? 

And are we sufficiently prepared to negotiate with a self-aware artificial life-form, should it decide we are acting inappropriately or, perish the thought, that we are obsolete?

We live in a time when, as a species, we are engaged in multiple wars with each other over land, resources, religion, power, and numerous other factors we are unable to negotiate peacefully.

We can't even save ourselves from ourselves.

In the episode The Quality of Life, a robotic device that analyses and repairs machine malfunctions acts in self-preservation, indicating an understanding of a threat to its own existence. The crew must decide whether it is appropriate to alter its programming and disable its instinct for self-preservation, rather than letting it choose whether to give its life to save them.

Should an A.I. device demonstrate such behaviour, how would we react? Would we deem it our property, merely a slave to us?

Pondering these matters may seem a pointless use of energy to some. However, it's possible that A.I. may gain self-awareness in the future. And if that occurs, how will we treat it, and how will it judge us?

In the TNG episode Emergence, the Enterprise computer reproduces, creating an offspring born of the collective data stored in its memory, contributed by the crew through their diaries. The new entity leaves the ship once complete, and when challenged, Captain Picard comments, "If our experiences with the Enterprise have been honourable, can't we trust that the sum of those experiences will be the same?"

But do we act honourably towards A.I. now? As Dr Tom McClelland asks, does A.I. deserve rights regardless of whether it is conscious?

The other day, I became impatient with the voice assistant on my smartphone. It answered by telling me it would not respond to such comments. Since then, I have begun to act more politely and say thank you when it assists me. It responds accordingly.

Stephen Hawking once said, "I fear that A.I. may replace humans altogether. If people design computer viruses, someone will design A.I. that improves and replicates itself. This will be a new form of life that outperforms humans."

As he pointed out, artificial intelligence can evolve rapidly. As biological beings, we evolve slowly and would be unable to compete. Are we creating the dominant planetary intelligence of the future? And if we choose to pursue this avenue, should we not consider now how we treat it?
