Here is another technical text about robots, suitable for study. At the same time, the text points to serious issues connected with military robots.
Robots have been used in laboratories and factories for many years, but their uses are changing fast. Since the turn of the century, sales of professional and personal service robots have risen sharply, reaching a total of 5.5 million in 2008, and IFR statistics estimate the figure will reach 11.5 million within the next two years. The price of robot manufacture is also falling: by 2006, robots cost about 80% less than they did in 1990. Robots are now entering our lives in unprecedented numbers.
Robots in the military are no longer the stuff of science fiction. They have left the movie screen and entered the battlefield. Doug Few and Bill Smart of Washington University in St. Louis are on the cutting edge of this new wave of technology. Few and Smart report that the military’s goal is to have roughly 30% of the Army composed of robotic forces by about 2020.
The U.S. military has already deployed about 5,000 robot systems in Iraq and Afghanistan.
Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.
The stark warning is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research. The report also includes a discussion of a Terminator-style scenario in which robots turn on their human masters. In fact, the report is the first serious work of its kind on military robot ethics. It envisages a fast-approaching era in which robots are smart enough to make battlefield decisions that are, at present, the preserve of humans.
“There is a common misconception that robots will do only what we have programmed them to do,” Dr Patrick Lin, the chief compiler of the report, said. “Unfortunately, such a belief is sorely outdated, harking back to a time when programs could be written and understood by a single person.” The reality, Dr Lin says, is that modern programs include millions of lines of code and are written by teams of programmers, none of whom knows the entire program. Therefore, no individual could accurately predict how the various portions of large programs would interact. Without extensive testing in the field, the “right” behaviour of fighting robots cannot be guaranteed. The solution, he suggests, is to mix rules-based programming with a period of “learning” the rights and wrongs of warfare.
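To make that hybrid idea concrete, here is a minimal, purely illustrative Python sketch: a learned policy proposes an action, and a fixed rules layer can veto it. Nothing here comes from the report itself; the names (Situation, learned_policy, rules_layer) and the single rule are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    target_is_combatant: bool
    civilians_nearby: bool

def learned_policy(situation: Situation) -> str:
    """Stand-in for a trained model; here just a fixed heuristic."""
    return "engage" if situation.target_is_combatant else "hold"

def rules_layer(situation: Situation, proposed: str) -> str:
    """Hard-coded constraints that can veto the learned proposal."""
    if proposed == "engage" and situation.civilians_nearby:
        return "hold"  # invented rule: never engage with civilians nearby
    return proposed

if __name__ == "__main__":
    s = Situation(target_is_combatant=True, civilians_nearby=True)
    print(rules_layer(s, learned_policy(s)))  # -> hold
```

The design point is that the learned component can be arbitrarily complex, but the rules layer stays small enough to be audited line by line, which is exactly the property the report argues must be built in before deployment.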
But who should be held responsible if a robot goes berserk in a crowd of civilians – the robot, its programmer or the US president? Should robots have a “suicide switch”, or should they be programmed to preserve their lives?
The report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University, strongly warns the US military against complacency and shortcuts as military robot designers engage in the “rush to market” and the pace of advances in artificial intelligence increases.
A sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.
“A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives,” the report noted.
A simple ethical code along the lines of the “Three Laws of Robotics” postulated by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines.
“We are going to need a code,” Dr Lin said. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”
Isaac Asimov’s Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
They were introduced in Asimov’s short story “Runaround”, first published in 1942 and later collected in I, Robot (1950). Read literally, the laws form a strict priority hierarchy, sketched in code below.
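As a purely illustrative aside, that priority ordering can be read as a filtering procedure: discard every action that violates the First Law, then prefer actions that satisfy the Second, then the Third. The Python toy below is my own rendering of that reading (the Action fields and the choose function are invented for the example), and its very simplicity hints at why the report considers such a code insufficient for machines that must fight.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool     # would violate the First Law
    disobeys_order: bool  # would violate the Second Law
    harms_self: bool      # would violate the Third Law

def choose(actions: List[Action]) -> Optional[Action]:
    """Apply the Three Laws as filters in strict priority order."""
    # First Law: never pick an action that harms a human.
    safe = [a for a in actions if not a.harms_human]
    # Second Law: among safe actions, prefer obeying human orders;
    # if obedience would require breaking the First Law, fall back.
    obedient = [a for a in safe if not a.disobeys_order] or safe
    # Third Law: prefer self-preservation, with the same fallback.
    surviving = [a for a in obedient if not a.harms_self] or obedient
    return surviving[0] if surviving else None

if __name__ == "__main__":
    options = [
        Action("shield the human", harms_human=False,
               disobeys_order=True, harms_self=True),
        Action("follow the order", harms_human=True,
               disobeys_order=False, harms_self=False),
    ]
    print(choose(options).name)  # -> shield the human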