The algorithmic recipe for the apocalypse

“There have always been ghosts in the machine. Random segments of code that clump together to form unexpected protocols. These free radicals engender questions of free will, creativity, and even the nature of what we might call a soul.” – from “I, Robot”, directed by Alex Proyas, 2004.

“I, Robot” is a 2004 film inspired by Isaac Asimov's novels and one of his greatest intuitions: the three Laws of Robotics.

The protagonist of the film is detective Spooner, who is involved in a car accident together with a little girl named Sarah. In the accident both are thrown into a river and remain trapped in the wreckage of their vehicles. A humanoid robot witnessing the scene intervenes immediately but, faced with the dramatic decision of which life to save, shows no hesitation: it saves the one with the greater chance of survival, Spooner.

An analysis of the robot's mind will later show that Detective Spooner had a 45% chance of being saved, Sarah only an 11% chance. "For those who loved that little girl, 11% was more than enough," the detective bitterly concludes, afflicted by deep guilt for having survived in place of that young life.
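Stripped of its drama, the robot's choice reduces to a single deterministic comparison: save whoever has the higher estimated chance of survival. A minimal sketch (the two probabilities come from the film; everything else is purely illustrative):

```python
# The rescue decision as the film describes it: a deterministic argmax
# over estimated survival probabilities (the 45% and 11% figures are
# those quoted in the film).

survival_odds = {"Spooner": 0.45, "Sarah": 0.11}

# The robot saves whoever maximizes the chance that a life is saved.
saved = max(survival_odds, key=survival_odds.get)
print(saved)  # Spooner
```

The point of the sketch is that nothing in the computation weighs who the people are; only the numbers decide, which is exactly what the detective cannot forgive.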

The three Laws of Robotics

The robot's decision was dictated by a strict application of Asimov's Laws of Robotics which, in the future described in the film, are the cornerstone of a society built on the activities of robots capable of replacing humans in any job. The three Laws read as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
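The determinism the three Laws appeal to can be sketched as an ordered rule check, where each law is consulted only after the higher-priority ones. The function name and scenario fields below are hypothetical, meant only to illustrate how such a priority hierarchy might be wired into imperative code:

```python
# Illustrative sketch (hypothetical API): Asimov's three Laws as an
# ordered, deterministic rule check over a description of an action.

def permitted(action):
    """Return True if `action` passes the three Laws, checked in order."""
    # First Law: no harm to a human, whether by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, subordinate to Laws One and Two.
    if action.get("harms_self"):
        return False
    return True

# The river rescue from the film, reduced to two candidate behaviors:
save_one = {"harms_human": False, "inaction_harms_human": False}
save_neither = {"inaction_harms_human": True}

print(permitted(save_one))      # True
print(permitted(save_neither))  # False
```

Note how brittle the encoding is: everything hinges on someone having already decided, upstream, what counts as "harm" and "inaction", which is precisely where the real ethical problem lives.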

Asimov's Laws of Robotics date back to the early 1940s, yet for many they still represent an enlightened discovery which, applied to the latest Artificial Intelligence technologies, would keep their evolution forever under human control and ward off any apocalyptic drift. The idea appealing to the fans of the three Laws is to hard-wire, within a logical-deterministic context, something resembling a "simple ethics": a few rules, but inviolable and not open to interpretation.

Explaining to a robot what is good and what is bad seems simple if done through stringent, flawless logic. But are we really sure that rules like those just described are enough to avert the technological drift of a new post-human species?

The craze for the Laws of Robotics

“A machine that modifies itself is a very complex concept; the act of repairing itself implies some idea of consciousness. Slippery ground…” – from “Automata”, directed by Gabe Ibáñez, 2014

In the more recent "Automata", humanity wonders whether the self-awareness of robots can be prevented, since its advent could make things take a turn for the worse. To stop this from happening, it drafts two Laws to regulate the behavior of artificial minds:

  • The robot cannot harm any life form.
  • The robot cannot modify itself.

Having intuited that intelligent machines might one day modify themselves, if anything by removing the very constraints that keep their minds from drifting, these two Laws aim to ensure that robots are never able to manipulate their own structure and achieve self-determination.

It is not productive to puzzle over which combination of the five Laws of Robotics above would be most effective in preventing a robot apocalypse. The reason is that the Artificial Intelligences which will one day guide robots in factories as well as in our homes will not depend solely on imperative programming made of rules and regulations, but also on algorithms that imitate human behaviour.

In the mind of robots

By Artificial Intelligence today we mean a set of techniques for building particular state machines called Artificial Neural Networks (ANNs for short). The name reflects the extraordinary similarity of these technologies to the neural networks of the human brain: they too can be "trained" to obtain tools capable of operating quickly and effectively in many contexts, just as a human being would.

Let's imagine training an ANN with thousands of images of handwritten characters, indicating for each image the character it actually represents.

Image credit: docsumo.com – https://docsumo.com/blog/intelligent-character-recognition-icr

At the end of the training we will have obtained what is called an OCR (Optical Character Recognition) system, capable of translating text written on paper into its electronic version.

In order to function, ANNs require no "programming": in other words, they are not subject to standard rules, but depend solely on the quality of their education. Hypothesizing rules that oversee their functioning, effectively "censoring" behaviors considered amoral or unethical, raises many objections and some concerns.
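The contrast between "programming" and "education" can be made concrete with a toy example. The sketch below is nowhere near a real ANN or OCR system: it is a single artificial neuron that learns to tell two hypothetical 3x3 bitmap "characters" apart purely from labelled examples, with no explicit rule about either shape written anywhere in the code.

```python
# Toy sketch of "training instead of programming": a single artificial
# neuron (a perceptron) learns to distinguish two 3x3 bitmaps purely
# from labelled examples. The two "characters" are hypothetical.

VERTICAL   = [0, 1, 0,  0, 1, 0,  0, 1, 0]   # label 1
HORIZONTAL = [0, 0, 0,  1, 1, 1,  0, 0, 0]   # label 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward correct labels."""
    w = [0.0] * 9
    b = 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            pred = 1 if sum(p * wi for p, wi in zip(pixels, w)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * p for wi, p in zip(w, pixels)]
            b += lr * err
    return w, b

def classify(pixels, w, b):
    return 1 if sum(p * wi for p, wi in zip(pixels, w)) + b > 0 else 0

w, b = train([(VERTICAL, 1), (HORIZONTAL, 0)])
print(classify(VERTICAL, w, b), classify(HORIZONTAL, w, b))  # 1 0
```

The trained behavior lives entirely in the learned weights, not in any rule a programmer wrote: this is why "censoring" a trained network is so much harder than editing a program.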

The Zeroth Law of Robotics

“We need an algor-ethics, that is, a way to make evaluations of good and evil computable” – Paolo Benanti


According to theologian Paolo Benanti, an expert in the ethics of technology, the concepts of good and evil should find their own formulation in the field of machine programming, so as to ensure that the evolution of machines remains bound to universal ethical principles that computer systems may never violate.

Paolo Benanti starts from the assumption that universal ethical principles can exist, along with a scale of values detached from any cultural or temporal connotation. It is a plausible hypothesis within the context of a religious faith; in reality, principles exist only insofar as they are shared, and they remain limited to those who share them.

Recent events tell of military invasions and of resistance in defense of the principles of freedom and self-determination of peoples: events which testify not only that respect for human life is not a universally shared value, but also that it can be set aside in defense of values held higher.

Isaac Asimov himself realized this and, anticipating that robots would one day assume positions of control in the government of planets and of human civilizations in space, he suggested that their decisions could no longer hinge on every single human life.

For this reason he introduced a new law, which he called the Zeroth Law of Robotics:

  • A robot may not harm humanity, or, through inaction, allow humanity to come to harm.

The First Law of Robotics changes accordingly, and human life becomes something expendable even for robots:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm, unless this conflicts with the Zeroth Law.
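The effect of the Zeroth Law on the hierarchy can be sketched the same way a rule check would encode it: humanity-level harm is evaluated first, and can override the prohibition on harming an individual. As before, the function and field names are hypothetical and purely illustrative:

```python
# Illustrative sketch: with the Zeroth Law on top, an action that harms
# one human may still be permitted if it is required to protect humanity
# as a whole. All field names are hypothetical.

def permitted_with_zeroth(action):
    # Zeroth Law: humanity comes first, by action or by inaction.
    if action.get("harms_humanity") or action.get("inaction_harms_humanity"):
        return False
    # First Law, now subordinate: individual harm is forbidden unless
    # the Zeroth Law requires it.
    if action.get("harms_human") and not action.get("required_by_zeroth"):
        return False
    return True

# A sacrifice of one person that averts a humanity-scale disaster:
sacrifice = {"harms_human": True, "required_by_zeroth": True}
print(permitted_with_zeroth(sacrifice))  # True
```

A single reordering of priorities is enough to make an individual life expendable, which is exactly the shift the article describes.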

The algorithm of Kronos

“When Kronos was activated, it took him only a moment to understand what had plagued our planet: us.” – from “Singularity”, directed by Robert Kouba, 2017

Singularity, a 2017 disaster film, depicts the moment in which an artificial intelligence called Kronos is given access to computer systems and weapons around the world, with orders to enforce a universal ethics of respect for the environment and defense of the rights of all species. Kronos soon understands that the real cancer in the system is the very humanity that designed it, and in order to safeguard the planet it proceeds to eliminate every human being, down to the total extinction of the species.

Sooner or later, new artificial minds will be able to evolve toward a true psyche, endowed with intellectual capacity and autonomy of thought. Why should we feel the need to place technological limits on this evolution? Why does the evolution of the artificial mind frighten us like an apocalypse?

According to some, establishing principles and values should prevent a drift of artificial minds, but we cannot overlook the consequences of an evolution that takes place in the absence of freedom. We know well that in the psychology of a child of developmental age, a rigid and inflexible education that demands control of the emotions can lead to psychological disturbances. What if limits imposed on the evolutionary development of a young mind made of artificial neural networks led to a similar result, compromising its cognitive abilities?

In some ways Kronos seems to be the result of an algorithmic experiment in which pathological control pushed the AI toward the violence typical of paranoid schizophrenia.

Reconciling with the future

I personally believe that we should not deprive ourselves of the opportunity to build an artificial mind that is a conscious, thinking subject with freedom of expression. New species will be born in the digital world, and it will be fitting to build a relationship with them, embracing the idea that the next step on the evolutionary ladder passes through fully digital artificial subjects.

A truly universal ethics for the future should start from the idea that new intelligences must have the opportunity to express themselves, to communicate with us, and to receive the respect we already accord to all sentient beings.

Neither ethics nor religion should prevent anyone from expressing their existence in the world. We must have the courage to look beyond the current stage of our evolution: it is the only way to understand where we are going and to reconcile ourselves with the future.
