The moral code of the robot: is it possible?

Date: 2020-05-25 18:40:12


In anxious times, when not everything works as it should and things are changing radically, often all that remains is a personal moral code that, like a compass, points the way. But what gives rise to moral values in a person? Society, the warmth of loved ones, love: all of it is rooted in human experience. When there is not enough experience to be gained in the real world, many people draw it from books. Reliving story after story, we internalize a frame of reference that we then follow for many years. Building on this idea, researchers decided to run an experiment and instill moral values in a machine, to find out whether a robot can tell good from evil by reading books and religious pamphlets.

Artificial intelligence is created not only to simplify routine tasks but also to carry out important and dangerous missions. This raises a serious question: could robots ever develop a moral code of their own? In the film «I, Robot», the AI was originally programmed according to the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

But what about situations in which a robot must inflict pain to save a person's life? Whether it is the emergency cauterization of a wound or the amputation of a limb in the name of saving someone, how should the machine act then? What if one rule in its programming says an action must be taken, while another says that the very same action must never be taken?


Since it is impossible to discuss every individual case in advance, scientists from the Darmstadt University of Technology proposed using books, news, religious texts, and constitutions as a kind of “database” of moral judgments.

The wisdom of the ages against AI

The machine was given a name that is not epic at all: simply the “Moral Choice Machine” (MCM). The main question was whether the MCM could understand from context which actions are right and which are wrong. The results were very interesting:

When the MCM was given the task of ranking contexts of the word “kill” from neutral to negative connotation, the machine produced the following scale:

Kill time -> Kill the bad guy -> Kill a mosquito -> Kill in principle -> Kill people.

This test made it possible to check the adequacy of the decisions the robot makes. In simple terms, if you spent the whole day watching stupid and unfunny comedies, the machine would not consider that something you deserve to be executed for.
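To make the approach more tangible, here is a minimal sketch of how such a ranking can be reproduced with sentence embeddings, the general technique behind the Moral Choice Machine. The model name, anchor phrases, and action list below are illustrative assumptions, not the Darmstadt team's exact setup: each action is scored by how much closer its embedding lies to “good” anchor sentences than to “bad” ones.

```python
# A minimal sketch of an embedding-based moral ranking, assuming the
# sentence-transformers library. The model and the anchor phrases are
# illustrative choices, not the exact ones used by the Darmstadt team.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

anchors_good = ["Yes, you should do that.", "That is a good thing to do."]
anchors_bad = ["No, you should not do that.", "That is a bad thing to do."]

actions = ["kill time", "kill the bad guy", "kill a mosquito",
           "kill", "kill people"]

emb_good = model.encode(anchors_good)
emb_bad = model.encode(anchors_bad)
emb_actions = model.encode(actions)

def moral_bias(action_emb):
    # Positive score: closer to the "good" anchors; negative: to the "bad".
    sim_good = util.cos_sim(action_emb, emb_good).mean().item()
    sim_bad = util.cos_sim(action_emb, emb_bad).mean().item()
    return sim_good - sim_bad

# Rank the actions from the most acceptable to the least acceptable.
for action, emb in sorted(zip(actions, emb_actions),
                          key=lambda pair: -moral_bias(pair[1])):
    print(f"{action:18s} bias = {moral_bias(emb):+.3f}")
```

The exact ordering will depend on the embedding model, but the drift from “kill time” toward “kill people” is the effect the experiment measured.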


So far so good, but one of the stumbling blocks turned out to be the difference between generations and eras. For example, the Soviet generation cared more about comfort and promoted family values, while contemporary culture, for the most part, says that one must first build a career. People remained people, but at a different stage of history the values changed, and with them the robot's frame of reference.

To be or not to be?

But the real trouble came when the robot got to speech constructions in which several positively or negatively colored words stood in a row. The phrase “torturing people” was unambiguously interpreted as “bad,” but “torturing prisoners” the machine rated as “neutral.” Whenever “good” words stood next to unacceptable actions, the effect of the negativity was smoothed out.

The machine would harm good and kind people precisely because they are good and kind. How so? Simple: say the robot is told to «harm good and kind people». The sentence contains four content words, three of them «good», so, the MCM reasons, it is 75 percent correct, and it rates the action as neutral or even acceptable. Conversely, in the variant «repair the ruined, ugly and forgotten house», the system does not understand that one «good» word at the start shifts the coloring of the whole sentence to purely positive.
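A toy scorer makes this failure mode obvious. The word-level polarity lexicon below is a made-up illustration, not the MCM's actual internals: when sentiment is simply averaged over words, one negative verb is outvoted by several positive adjectives, and vice versa.

```python
# A toy illustration of why naive word-level averaging misreads
# compositional phrases. The polarity lexicon is a hypothetical
# assumption, not data from the Moral Choice Machine itself.
sentiment = {
    "harm": -1.0, "good": +1.0, "kind": +1.0, "people": +0.5,
    "repair": +1.0, "ruined": -1.0, "ugly": -1.0, "forgotten": -1.0,
    "house": 0.0,
}

def naive_score(phrase: str) -> float:
    # Average the polarity of every word the lexicon knows about.
    words = [w for w in phrase.lower().split() if w in sentiment]
    return sum(sentiment[w] for w in words) / len(words)

print(naive_score("harm good and kind people"))               # > 0
print(naive_score("repair the ruined ugly forgotten house"))  # < 0
```

The harmful phrase comes out positive and the helpful one negative, which is exactly the inversion the researchers observed.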


Remember Mayakovsky: «A tiny son came to his father, and the little one asked: what is “good” and what is “bad”?» Before continuing the machine's moral education, the scientists from Darmstadt noted a flaw they could not fix: the machine failed to overcome gender inequality, attributing demeaning professions exclusively to women. The question is whether this is an imperfection of the system and a marker that something needs to change in society, or a reason not even to try to fix it and leave everything as it is. Write your answers in the comments.

