A human will always decide when a robot kills you: Pentagon reassurance
#1
http://www.dailymail.co.uk/sciencetech/a...lypse.html

Excerpt: "The U.S. military has made clear that any future robot weapons systems will always need manual authorisation before opening fire on human targets.

The Department of Defense issued a new policy directive saying that any semi-autonomous weapons systems will be designed so they need human authorisation to open fire.

The promise comes after a Human Rights Watch report called for an international ban on 'killer robots', which the group warned could be deployed within 20 years.

Soon after that report was published, Deputy Defense Secretary Ashton Carter signed a series of instructions 'to minimise failures that could lead to unintended engagements or to loss of control' of armed robots.

Policy directive 3000.09 says: 'Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorised human operator.'

In order to make sure this is the case, the Pentagon asks that the hardware and software controlling robot weapons come equipped with 'safeties, anti-tamper mechanisms, and information assurance'. They must also be designed to have proper 'human-machine interfaces and controls'.

Above all, they must 'allow commanders and operators to exercise appropriate levels of human judgement over the use of force'.

The Pentagon's promise comes after a joint Human Rights Watch and Harvard Law School report raised the alarm over the ethics of allowing robots to take decisions as to when to open fire on humans.

Although no U.S. drone is yet able to pull the trigger without a human operator's direction, the report warned that militaries worldwide are 'very excited' about machines that could one day be deployed alone in battle.

There are already precedents for such machines. Raytheon's Phalanx gun system, already deployed on U.S. Navy ships, can search for enemy fire and destroy incoming projectiles by itself.

The Northrop Grumman X-47B is a plane-sized drone able to take off and land on aircraft carriers, carry out air combat without a pilot and even refuel in the air.

And in South Korea, Samsung have developed an automated machine gun sentry robot that is able to spot unusual activity, challenge intruders and, when authorised by a human, aim at targets and open fire.

Faced with such worrying developments, the Losing Humanity report called for 'an international treaty that would absolutely prohibit the development, production, and use of fully autonomous weapons.'

Steve Goose, arms division director at Human Rights Watch, added: 'A number of governments, including the United States, are very excited about moving in this direction, very excited about taking the soldier off the battlefield and putting machines on the battlefield and thereby lowering casualties.'

While Mr Goose admitted such 'killer robots' do not exist as yet, he warned of precursors and added that the best way to forestall an ethical nightmare is a 'preemptive, comprehensive prohibition on the development or production of these systems.'

The Pentagon's latest promise could offer some relief for those worried about the automation of warfare: the new policy requires a team of senior officials to certify that any weapon meets stringent conditions before its development or purchase is approved.

However, the directive does leave the way completely open for increased autonomy in a range of military robots that aren't intended as killing machines.

It '[d]oes not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance,' the policy says.

That means the Pentagon does not need to apply similar safeguards when developing computer viruses, bugs or surveillance drones.
As Wired puts it: 'While everyone's worried about preventing the Rise of the Machines, the machines are getting a pass to spy on you, under their own power.'
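
To make the rule quoted above from directive 3000.09 concrete: it amounts to a fail-safe default, where losing the comms link must never widen the weapon's authority. Here's a rough Python sketch of that logic; the class and all the names are my own invention, not anything from the directive or any real system.

Code:
# Illustrative sketch only -- invented names, not DoD Directive 3000.09's
# actual design or any real weapon's software.

class EngagementGate:
    def __init__(self):
        self.authorized_targets = set()  # track IDs pre-selected by a human
        self.comms_ok = True             # state of the command link

    def operator_select(self, track_id):
        # An authorised human operator pre-selects a specific target.
        self.authorized_targets.add(track_id)

    def may_engage(self, track_id):
        # The fail-safe the directive describes: whether the link is up
        # or (especially) down, only targets a human previously selected
        # qualify. Note that comms_ok is deliberately NOT consulted here:
        # a degraded or lost link never expands the system's authority.
        return track_id in self.authorized_targets

gate = EngagementGate()
gate.operator_select("track-042")
gate.comms_ok = False                  # communications lost
print(gate.may_engage("track-042"))    # True: previously human-selected
print(gate.may_engage("track-077"))    # False: never human-selected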
#2
Thank God, I'd hate to be murdered by a computer glitch.
#3
I wonder whether, or how long it will be until, the 'human authorization' (currently an active intervention: a human must approve each engagement) gets changed to a passive one, where the robot holds fire only if a human observer presses the 'don't kill people' button in time.
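
Roughly the difference between these two control loops. Toy sketch with invented names, obviously not how any real system is wired:

Code:
# Toy illustration only -- invented names, no real weapon's logic.

class Observer:
    def __init__(self):
        self.approvals = set()  # targets the human explicitly approved
        self.vetoes = set()     # targets the human explicitly vetoed

def active_authorization(target, human):
    # Human-in-the-loop: default is HOLD FIRE; the robot shoots only
    # when a human explicitly approves this target.
    return target in human.approvals

def passive_authorization(target, human):
    # Human-on-the-loop: default is FIRE; the robot shoots unless a
    # human presses the 'don't kill people' button in time. Silence
    # (a distracted operator, a dropped link) now means "go ahead".
    return target not in human.vetoes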
#4
Pentagon reassurance? Oh, that makes me feel real safe.

