Why Technical English

Should We Fear the Killer Robots? | March 4, 2009

Compiled by Galina Vitkova

Here is another technical text about robots that is suitable for study. At the same time, the text raises serious issues connected with military robots.

Robots have been used in laboratories and factories for many years, but their uses are changing fast. Since the turn of the century, sales of professional and personal service robots have risen sharply, reaching a total of 5.5 million in 2008; IFR statistics project 11.5 million within the next two years. The cost of robot manufacture is also falling: by 2006 a robot cost about 80% less than it did in 1990. Robots are now entering our lives in unprecedented numbers.

The U.S. will spend $1.7B on military robots

Robots in the military are no longer the stuff of science fiction. They have left the movie screen and entered the battlefield. Doug Few and Bill Smart of Washington University in St. Louis are on the cutting edge of this new wave of technology. Few and Smart report that the military’s goal is to have roughly 30% of the Army composed of robotic forces by approximately 2020.

The U.S. military has already deployed about 5,000 robot systems in Iraq and Afghanistan.

The U.S. military’s killer robots must learn a warrior code

Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.

The stark warning is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research. The report also includes a discussion of a Terminator-style scenario in which robots turn on their human masters. In fact, the report is the first serious work of its kind on military robot ethics. It envisages a fast-approaching epoch where robots are smart enough to make battlefield decisions that are at present the preserve of humans.

“There is a common misconception that robots will do only what we have programmed them to do,” Dr Patrick Lin, the chief compiler of the report, said. “Unfortunately, such a belief is sorely outdated, harking back to a time when programs could be written and understood by a single person.” The reality, Dr Lin says, is that modern programs include millions of lines of code and are written by teams of programmers, none of whom knows the entire program. Therefore, no individual could accurately predict how the various portions of large programs would interact. Without extensive testing in the field, the “right” behaviour of fighting robots can’t be guaranteed. The solution, he suggests, is to mix rules-based programming with a period of “learning” the rights and wrongs of warfare.
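
To make that proposed mix concrete, here is a minimal Python sketch of the idea under assumed names: a learned policy proposes an action, and a fixed rules-based layer gets the final veto. Action, learned_policy and HARD_RULES are illustrative inventions, not any real robotics API.

```python
# A minimal sketch of the hybrid Dr Lin suggests: learned proposal,
# rules-based veto. All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str                # e.g. "fire", "hold", "withdraw"
    target_is_civilian: bool
    collateral_risk: float   # estimated probability of harming non-combatants

def learned_policy(situation: dict) -> Action:
    """Stand-in for a trained model; returns whatever the model proposes."""
    return Action(kind="hold", target_is_civilian=False, collateral_risk=0.0)

# Rules-based layer: a proposed action must pass every hard constraint.
HARD_RULES = [
    lambda a: not (a.kind == "fire" and a.target_is_civilian),
    lambda a: a.collateral_risk < 0.05,  # arbitrary illustrative threshold
]

SAFE_DEFAULT = Action(kind="hold", target_is_civilian=False, collateral_risk=0.0)

def decide(situation: dict) -> Action:
    """Return the learned proposal only if no hard rule vetoes it."""
    proposed = learned_policy(situation)
    if all(rule(proposed) for rule in HARD_RULES):
        return proposed
    return SAFE_DEFAULT  # any vetoed action falls back to doing nothing
```

The point of the split is auditability: the “learning” period would shape learned_policy, while the rules layer stays fixed and can be inspected line by line.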

Who is to blame?

If a robot goes berserk in a crowd of civilians, who is to blame – the robot, its programmer or the US president? Should robots have a “suicide switch”, or should they be programmed to preserve their own lives?
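
One common way to build the “suicide switch” the report asks about is a dead-man (heartbeat) watchdog: if messages from the human supervisor stop arriving, the machine disarms itself and stays disarmed. The Python sketch below shows the pattern; the class, method names and the two-second timeout are assumptions for illustration.

```python
# A dead-man's-switch sketch: the robot may operate only while it keeps
# receiving heartbeats from its human supervisor. Names are hypothetical.
import time

class DeadMansSwitch:
    """Disarms permanently if the supervisor's heartbeat stops arriving."""

    def __init__(self, timeout: float = 2.0):
        self.timeout = timeout                  # seconds of silence allowed
        self.last_heartbeat = time.monotonic()
        self.armed = True

    def heartbeat(self) -> None:
        """Call whenever a message from the human supervisor arrives."""
        self.last_heartbeat = time.monotonic()

    def may_operate(self) -> bool:
        """Check before every action; disarm for good on timeout."""
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.armed = False  # one-way latch: only a human reset re-arms
        return self.armed
```

The one-way latch is deliberate: a robot that could re-arm itself after losing contact would defeat the purpose of the switch.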

The report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University, strongly warns the US military against complacency and shortcuts as military robot designers engage in a “rush to market” and the pace of advances in artificial intelligence increases.

A sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.

“A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives,” the report noted.

A simple ethical code along the lines of the “Three Laws of Robotics” postulated by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines.

“We are going to need a code,” Dr Lin said. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”

Isaac Asimov’s Three Laws of Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

They were introduced in Asimov’s short story “Runaround”, first published in 1942 and later collected in I, Robot (1950).
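
The laws map naturally onto a strict priority ordering, which makes them look deceptively easy to encode. The Python sketch below shows that ordering with stub predicates; as the report argues, the genuinely hard part is deciding what counts as harm, which the stubs simply assume away, and a First Law robot could never fight at all.

```python
# Asimov's laws as prioritized constraints. The predicates are stubs: in
# reality, each one is an unsolved perception and judgement problem.
from typing import Callable, List, Optional

def harms_human(action: dict) -> bool:
    return bool(action.get("harms_human"))

def disobeys_order(action: dict) -> bool:
    return bool(action.get("disobeys_order"))

def endangers_self(action: dict) -> bool:
    return bool(action.get("endangers_self"))

# Lower index = higher priority; each law yields to every law above it.
LAW_VIOLATIONS: List[Callable[[dict], bool]] = [
    harms_human,      # First Law
    disobeys_order,   # Second Law
    endangers_self,   # Third Law
]

def first_violated_law(action: dict) -> Optional[int]:
    """Return the 0-based index of the highest-priority violated law."""
    for i, violated in enumerate(LAW_VIOLATIONS):
        if violated(action):
            return i
    return None

# A constraint set like this forbids every attack, which is why the report
# says a battlefield robot needs a different, "warrior" code instead.
assert first_violated_law({"harms_human": True}) == 0
```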



13 Comments

  1. In my opinion there’s a chance to develop relatively “safe” killer robots (what a weird combination). This is, however, subject to proper examination of each subsystem of the killer robot (especially the target recognition subsystem). A suicide switch and remote supervision should always be implemented.

    Comment by Jiří Brabec — March 8, 2009 @ 12:50 pm

  2. To reply to the comment from 8 March: I don’t think so, because all robots must obey the Three Laws of Robotics. And these laws are:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    So this is (for me) the reason why the idea of killer robots is impossible. Robots can serve in armies only as auxiliary forces.

    Comment by Jiří Linhart — March 10, 2009 @ 12:35 pm

  3. The issue I see as spoiling things the most is the money given away to the army. (Why not give it to medical purposes instead?) Therefore, I believe, if those robots don’t stay at just observing and are also programmed to kill people (see the Afghanistan reference), then, yeah, I guess they could also be dangerous to us, since the recognition technology has not been that great, as far as I know.

    Comment by Miroslav Čermák — March 11, 2009 @ 9:17 pm

  4. In my opinion the programmers and the corporations that make army robots try to make them as safe as possible for civilians. I think the issue of safety is very important, and I don’t think the manufacturers of these robots ignore this fact. So I think that as long as the technology isn’t misused, we shouldn’t be afraid.

    Comment by Pavel Jícha — March 12, 2009 @ 11:00 am

  5. Well, if someone comes up with a robot so well programmed that you may call it Artificial Intelligence, then you may never know how it would react in certain crucial situations.

    Programmers can’t prepare for every possible scenario that may happen. That’s just impossible.

    Therefore, as long as robots are directly under human control or acting under specific orders, I assume they are safe. If they are given a program that lets them make up their own mind, then they might go “berserk” (as said above).

    In war, you can’t just say these guys are bad and these are good. Robots seem to me another very dangerous weapon, comparable to white phosphorus or nuclear missiles.

    Conclusion: let them stay in factories, manufacturing products (cars, electronic devices, etc.) as they do today, but don’t take them to war (only for scouting purposes, not combat).

    Comment by Pavel Macenauer — March 12, 2009 @ 4:48 pm

  6. I don’t think that we should be afraid of military robots. They are controlled by very responsible people.

    Comment by Michal Debre — March 14, 2009 @ 5:01 pm

  7. I doubt that the idea of military robots with sub-human intellect is even feasible. War as such is a rather chaotic process where rules are only formal and everything is allowed under certain circumstances. Therefore I think it’s impossible to create a “universal warrior code” that would cover all the situations that can arise during a conflict, because not even people have such complete codes of behaviour. Moreover, armies face problems even with human soldiers making wrong decisions on the battlefield, such as in distinguishing civilians from enemies, and for a robot to recognize whether a human has hostile intentions or not would require that it understand human behaviour. But to understand it, the robot would need at least roughly the same level of intelligence. So I think that all the robots operating on the battlefield will be remotely controlled (as they already are), and what will happen once an AI equivalent to the human brain arises is difficult to predict.

    Comment by Pavel Bibr — March 15, 2009 @ 9:38 am

  8. More frequent use of robots in the military is, from my point of view, a contribution, because they may reduce soldier casualties; but at the same time it carries its own risks.
    We have to ensure that robots cannot abuse their abilities in a fight, so implementing a destruct button is, from my point of view, necessary. Combat robots can often get to places where soldiers cannot.
    Using these robots should be a benefit, but there is a limit to everything…

    Comment by Marek Silber — March 22, 2009 @ 9:50 pm

  9. I think that everyone has some fear of something, and the time when robots grow to the killer-machine level is not in the near future. I really believe in the idea that a future war could bring big steps in building non-human soldiers, just as past wars brought technological leaps, but this question is so abstract that there are a lot of “fears” closer to us and our lives.

    Comment by Václav Blahout — March 24, 2009 @ 12:16 pm

  10. I don’t think we should fear killer robots. They are properly tested in many situations and conditions before being deployed. I think we should fear killer humans more; they are really unpredictable.

    Comment by Jiří Vršínský — March 25, 2009 @ 1:56 pm

  11. My opinion is that when using robots in a specific environment such as war, it will be necessary to focus on thorough testing before deployment. I also think there should be some supervision of the control systems that control the robots.

    Comment by Jakub Rous — April 9, 2009 @ 6:59 am

  12. […] Should We Fear the Killer Robots?   […]

    Pingback by Changing the theme of this blog « Why Technical English — August 26, 2010 @ 9:28 am

  13. You certainly have some agreeable opinions and views. Your blog provides a fresh look at the subject.

    Comment by Oregege — November 18, 2010 @ 11:06 pm

