
Thread: I for one welcome our militant robot overlords...

  1. #1
    Swarthylicious Member Spino's Avatar
    Join Date
    Sep 2002
    Location
    Brooklyn, New York
    Posts
    2,604

    Default I for one welcome our militant robot overlords...

    $5 says Al Gore is revealed to be one of the Final Five...

    http://www.alertnet.org/thenews/newsdesk/LM674603.htm

    COLUMN - Killer robots and a revolution in warfare: Bernd Debusmann, 22 Apr 2009 14:04:33 GMT
    Source: Reuters
    (Bernd Debusmann is a Reuters columnist. The opinions expressed are his own)

    By Bernd Debusmann

    WASHINGTON, April 22 (Reuters) - They have no fear, they never tire, they are not upset when the soldier next to them gets blown to pieces. Their morale doesn't suffer by having to do, again and again, the jobs known in the military as the Three Ds - dull, dirty and dangerous.

    They are military robots and their rapidly increasing numbers and growing sophistication may herald the end of thousands of years of human monopoly on fighting war. "Science fiction is moving to the battlefield. The future is upon us," as Brookings scholar Peter Singer put it to a conference of experts at the U.S. Army War College in Pennsylvania this month.

    Singer just published Wired for War: The Robotics Revolution and Conflict in the 21st Century, a book that traces the rise of the machines and predicts that in future wars they will play greater roles not only in executing missions but also in planning them.

    Numbers reflect the explosive growth of robotic systems. The U.S. forces that stormed into Iraq in 2003 had no robots on the ground. There were none in Afghanistan either. Now those two wars are fought with the help of an estimated 12,000 ground-based robots and 7,000 unmanned aerial vehicles (UAVs), the technical term for drones, or robotic aircraft.

    Ground-based robots in Iraq have saved hundreds of lives, defusing improvised explosive devices, which account for more than 40 percent of U.S. casualties. The first armed robot was deployed in Iraq in 2007 and it is as lethal as its acronym is long: Special Weapons Observation Remote Reconnaissance Direct Action System (SWORDS). Its mounted M249 machine gun can hit a target more than 3,000 feet away with pinpoint precision.

    From the air, the best-known UAV, the Predator, has killed dozens of insurgent leaders - as well as scores of civilians whose deaths have prompted protests from both Afghanistan and Pakistan.

    The Predators are flown by operators sitting in front of television monitors in cubicles at Creech Air Force Base in Nevada, 8,000 miles from Afghanistan and the Taliban sanctuaries on the Pakistani side of the border. The cubicle pilots in Nevada run no physical risks whatever, a novelty for men engaged in war.

    TECHNOLOGY RUNS AHEAD OF ETHICS

    Reducing risk, and casualties, is at the heart of the drive for more and better robots. Ultimately, that means "fully autonomous engagement without human intervention," according to an Army communication to robot designers. In other words, computer programs, not a remote human operator, would decide when to open fire. What worries some experts is that the technology is running ahead of deliberation on the ethical and legal questions.

    Robotics research and development in the U.S. received a big push from Congress in 2001, when it set two ambitious goals: by 2010, a third of the country's long-range attack aircraft should be unmanned; and by 2015, one third of America's ground combat vehicles. Neither goal is likely to be met, but the deadlines pushed non-technological considerations to the sidelines.

    A recent study prepared for the Office of Naval Research by a team from the California Polytechnic State University said that robot ethics had not received the attention it deserved because of a "rush to market" mentality and the "common misconception" that robots will do only what they have been programmed to do.

    "Unfortunately, such a belief is sorely outdated, harking back to the time when computers were simpler and their programs could be written and understood by a single person," the study says. "Now programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty since portions of programs may interact in unexpected, untested ways."

    That's what might have happened during an exercise in South Africa in 2007, when a robot anti-aircraft gun sprayed hundreds of cannon shells around its position, killing nine soldiers and injuring 14.

    Beyond isolated accidents, there are deeper problems that have yet to be solved. How do you get a robot to tell an insurgent from an innocent? Can you program the Laws of War and the Rules of Engagement into a robot? Can you imbue a robot with its country's culture? If something goes wrong, resulting in the death of civilians, who will be held responsible?

    The robot's manufacturer? The designers? Software programmers? The commanding officer in whose unit the robot operates? Or the U.S. president who in some cases authorises attacks? (Barack Obama has given the green light to a string of Predator strikes into Pakistan).

    While the United States has deployed more military robots - on land, in the air and at sea - than any other country, it is not alone in building them. More than 40 countries, including potential adversaries such as China, are working on robotics technology. Which leaves one to wonder how the ability to send large numbers of robots, and fewer soldiers, to war will affect political decisions on force versus diplomacy.

    You need to be an optimist to think that political leaders will opt for negotiation over war once combat casualties come home not in flag-decked coffins but in packing crates destined for the robot repair shop. (You can contact the author at Debusmann@Reuters.com) (Editing by Sean Maguire)

    T-800 unavailable for comment....



    Ok, so it's just another fluffy alarmist piece designed to stir the pot and increase the hit count. But it's enough to merit an intelligent debate here on the Org.

    What are the ethical dilemmas we face with the use of robots and/or advanced AI? Is removing humans from the act of killing or from the decision making process involved in terminating another life acceptable? Is there a fuzzy area of ethics to address when AI becomes more 'lifelike' or 'human'? Would people care what we do to robots programmed with advanced AI so long as they don't resemble anything remotely human or animal-like? Should we be wary of the development of human-like robots or AI?

    Personally I view a robot or AI as being nothing more than an advanced tool designed to perform a specific task and make our lives easier. However I'd feel a lot more comfortable if robotics researchers and engineers would stop making the damn things look, move and behave so damn human (as far as pure utility is concerned I can think of far superior species to emulate when it comes to movement, heavy lifting, etc.). This obsession with creating artificial life molded in our image just smacks of subconscious-fueled narcissism.

    Ages ago I got into a discussion with a friend where I asserted that the risks posed by short-sighted endeavors in the fields of robotics and AI were too great, and that humanity would be better off researching how to improve our genetic condition and the overall intelligence and survivability of the species, Brave New World style caste dangers be damned. Needless to say I was more than pleased to learn that Stephen Hawking made a statement a few years ago urging humanity to work on improving itself via eugenics and abandon its obsession with robotics & AI research that could pave the way for our demise/extinction or loss of humanity (patted myself on the back so much that it left a red mark that day). Are we spending too much time improving our tools and not our genes? Should we be more concerned with creating the superman instead of the supermatic?

    Ubermensch or Cylon? You make the call!
    "Why spoil the beauty of the thing with legality?" - Theodore Roosevelt

    Idealism is masturbation, but unlike real masturbation idealism actually makes one blind. - Fragony

    Though Adrian did a brilliant job of defending the great man that is Hugo Chavez, I decided to post this anyway.. - JAG (who else?)

  2. #2
    Member Alexander the Pretty Good's Avatar
    Join Date
    Jun 2004
    Location
    New Jersey, USA
    Posts
    4,979

    Default Re: I for one welcome our militant robot overlords...

    Perhaps we can use robots to fight the eugenicists?

  3. #3
    Enlightened Despot Member Vladimir's Avatar
    Join Date
    Aug 2005
    Location
    In ur nun, causing a bloody schism!
    Posts
    7,906

    Default Re: I for one welcome our militant robot overlords...

    Cylon. Those bots are H-O-T!

    Yea, I'm more an advocate of bioengineering over robots.


    Reinvent the British and you get a global finance center, edible food and better service. Reinvent the French and you may just get more Germans.
    Quote Originally Posted by Evil_Maniac From Mars
    How do you motivate your employees? Waterboarding, of course.
    I love sturdy girls and big pints
    Down with dried flowers!



  4. #4
    Member Kongamato's Avatar
    Join Date
    Jul 2002
    Location
    East Lansing, Michigan, USA
    Posts
    1,983

    Default Re: I for one welcome our militant robot overlords...

    I've checked the headlines on Reuters for a few years now, and I can't remember a single time Bernd Debusmann wrote anything positive about the US. Every couple of weeks it's some new complaint or disturbing forecast.
    "Never in physical action had I discovered the chilling satisfaction of words. Never in words had I experienced the hot darkness of action. Somewhere there must be a higher principle which reconciles art and action. That principle, it occurred to me, was death." -Yukio Mishima

  5. #5
    Needs more flowers Moderator drone's Avatar
    Join Date
    Dec 2004
    Location
    Moral High Grounds
    Posts
    9,278

    Default Re: I for one welcome our militant robot overlords...

    We are legion!
    The .Org's MTW Reference Guide Wiki - now taking comments, corrections, suggestions, and submissions

    If I weren't playing games I'd be killing small animals at a higher rate than I am now - SFTS
    If I weren't playing games I'd be killing small animals at a higher rate than I am now - Louis VI The Fat

    "Why do you hate the extremely limited Spartan version of freedom?" - Lemur

  6. #6
    master of the pwniverse Member Fragony's Avatar
    Join Date
    Apr 2003
    Location
    The EUSSR
    Posts
    30,680

    Default Re: I for one welcome our militant robot overlords...


  7. #7

    Default Re: I for one welcome our militant robot overlords...

    Quote Originally Posted by Fragony View Post
    Win.


  8. #8
    Senior Member English assassin's Avatar
    Join Date
    Mar 2004
    Location
    London, innit
    Posts
    3,734

    Default Re: I for one welcome our militant robot overlords...

    Quote Originally Posted by Spino View Post
    Personally I view a robot or AI as being nothing more than an advanced tool designed to perform a specific task and make our lives easier.
    Which is great when the task is killing people.

    I did a thread on this a while back. People were quite cool with terminators.

    2050: robot warriors of the west against billions of brown people from everywhere else fighting for the remaining bits of habitable land*. You read it here first.

    I'll be dead anyway, and I'll be encouraging my children and grandchildren to join the army. All present evidence to the contrary notwithstanding, I reckon the government will have to look after the armed forces.


    (* I don't condone this. I just look at the demographics and the way the climate is going and think this is a likely outcome)
    "The only thing I've gotten out of this thread is that Navaros is claiming that Satan gave Man meat. Awesome." Gorebag

  9. #9
    Headless Senior Member Pannonian's Avatar
    Join Date
    Apr 2005
    Posts
    7,978

    Default Re: I for one welcome our militant robot overlords...

    May I ask what militant robots are? Are they robots who refuse to work unless their working conditions are improved?

  10. #10
    Swarthylicious Member Spino's Avatar
    Join Date
    Sep 2002
    Location
    Brooklyn, New York
    Posts
    2,604

    Default Re: I for one welcome our militant robot overlords...

    Quote Originally Posted by English assassin View Post
    Which is great when the task is killing people.

    I did a thread on this a while back. People were quite cool with terminators.

    2050: robot warriors of the west against billions of brown people from everywhere else fighting for the remaining bits of habitable land*. You read it here first.

    I'll be dead anyway, and I'll be encouraging my children and grandchildren to join the army. All present evidence to the contrary notwithstanding, I reckon the government will have to look after the armed forces.

    (* I don't condone this. I just look at the demographics and the way the climate is going and think this is a likely outcome)
    But... what if the yellow people of the east make robot warriors that are more better and build them more faster than the west?!? Will our robotz haz enuff tuff to beat theirz?

    Last I checked Japan is light years ahead of everyone else in the mecha race.
    Last edited by Spino; 04-23-2009 at 22:18.
    "Why spoil the beauty of the thing with legality?" - Theodore Roosevelt

    Idealism is masturbation, but unlike real masturbation idealism actually makes one blind. - Fragony

    Though Adrian did a brilliant job of defending the great man that is Hugo Chavez, I decided to post this anyway.. - JAG (who else?)

  11. #11
    Nobody expects the Senior Member Lemur's Avatar
    Join Date
    Jan 2004
    Location
    Wisconsin Death Trip
    Posts
    15,754

    Default Re: I for one welcome our militant robot overlords...

    Quote Originally Posted by Spino View Post
    Last I checked Japan is light years ahead of everyone else in the mecha race.
    Heh, completely untrue, but I think you know that already. Anybody seriously interested in this subject needs to go down to the library and check out Wired for War, the first comprehensive book on the subject. An excellent passage that the author posted online:

    Despite all the enthusiasm in military circles for the next generation of unmanned vehicles, ships, and planes, there is one question that people are generally reluctant to talk about. It is the equivalent of Lord Voldemort in Harry Potter, The Issue That Must Not Be Discussed. What happens to the human role in war as we arm ever more intelligent, more capable, and more autonomous robots?

    When this issue comes up, both specialists and military folks tend to change the subject or speak in absolutes. “People will always want humans in the loop,” says Eliot Cohen, a noted military expert at Johns Hopkins who served in the State Department under President George W. Bush. An Air Force captain similarly writes in his service’s professional journal, “In some cases, the potential exists to remove the man from harm’s way. Does this mean there will no longer be a man in the loop? No. Does this mean that brave men and women will no longer face death in combat? No. There will always be a need for the intrepid souls to fling their bodies across the sky.”

    All the rhetoric ignores the reality that humans started moving out of “the loop” a long time before robots made their way onto battlefields. As far back as World War II, the Norden bombsight made calculations of height, speed, and trajectory too complex for a human alone when it came to deciding when to drop a bomb. By the Persian Gulf War, Captain Doug Fries, a radar navigator, could write this description of what it was like to bomb Iraq from his B-52: “The navigation computer opened the bomb bay doors and dropped the weapons into the dark.”

    In the Navy, the trend toward computer autonomy has been in place since the Aegis computer system was introduced in the 1980s. Designed to defend Navy ships against missile and plane attacks, the system operates in four modes, from “semi-automatic,” in which humans work with the system to judge when and at what to shoot, to “casualty,” in which the system operates as if all the humans are dead and does what it calculates is best to keep the ship from being hit. Humans can override the Aegis system in any of its modes, but experience shows that this capability is often beside the point, since people hesitate to use this power. Sometimes the consequences are tragic.
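
    A rough sketch of the mode-and-veto structure described above (hypothetical Python, not the actual Aegis software; the two middle mode names and the threat threshold are invented for illustration):

    Code:
    from enum import Enum

    class Mode(Enum):
        SEMI_AUTOMATIC = 1  # humans work with the system to judge when to shoot
        AUTO_SM = 2         # intermediate modes: names invented for this sketch
        AUTO_SPECIAL = 3
        CASUALTY = 4        # operates as if the whole crew were dead

    def engage(threat_score, mode, human_veto=None):
        """The system recommends; a human may override -- in principle."""
        recommend_fire = threat_score > 0.8   # invented threshold
        if mode is Mode.CASUALTY:
            return recommend_fire             # no human consulted at all
        if human_veto is True:
            return False                      # override actually exercised
        return recommend_fire                 # veto available but unused

    print(engage(0.9, Mode.SEMI_AUTOMATIC))                   # True: fires
    print(engage(0.9, Mode.SEMI_AUTOMATIC, human_veto=True))  # False: vetoed

    The telling path is the last return: unless someone actively exercises the veto, the machine's recommendation goes through unchanged in every mode.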

    The most dramatic instance of a failure to override occurred in the Persian Gulf on July 3, 1988, during a patrol mission of the U.S.S. Vincennes. The ship had been nicknamed “Robo-cruiser,” both because of the new Aegis radar system it was carrying and because its captain had a reputation for being overly aggressive. That day, the Vincennes’s radars spotted Iran Air Flight 655, an Airbus passenger jet. The jet was on a consistent course and speed and was broadcasting a radar and radio signal that showed it to be civilian. The automated Aegis system, though, had been designed for managing battles against attacking Soviet bombers in the open North Atlantic, not for dealing with skies crowded with civilian aircraft like those over the gulf. The computer system registered the plane with an icon on the screen that made it appear to be an Iranian F-14 fighter (a plane half the size), and hence an “assumed enemy.”

    Though the hard data were telling the human crew that the plane wasn’t a fighter jet, they trusted the computer more. Aegis was in semi-automatic mode, giving it the least amount of autonomy, but not one of the 18 sailors and officers in the command crew challenged the computer’s wisdom. They authorized it to fire. (That they even had the authority to do so without seeking permission from more senior officers in the fleet, as their counterparts on any other ship would have had to do, was itself a product of the fact that the Navy had greater confidence in Aegis than in a human-crewed ship without it.) Only after the fact did the crew members realize that they had accidentally shot down an airliner, killing all 290 passengers and crew, including 66 children.

    The tragedy of Flight 655 was no isolated incident. Indeed, much the same scenario was repeated a few years ago, when U.S. Patriot missile batteries accidentally shot down two allied planes during the Iraq invasion of 2003. The Patriot systems classified the craft as Iraqi rockets. There were only a few seconds to make a decision. So machine judgment trumped any human decisions. In both of these cases, the human power “in the loop” was actually only veto power, and even that was a power that military personnel were unwilling to use against the quicker (and what they viewed as superior) judgment of a computer.
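
    In code terms, the human power these two cases describe reduces to a veto with a deadline. A hypothetical sketch:

    Code:
    def resolve(machine_says_fire, human_response, timeout_expired):
        """Veto-only authority: a human can stop the shot, but silence or a
        missed deadline lets the machine's call go through by default."""
        if timeout_expired or human_response is None:
            return machine_says_fire           # default: the machine decides
        return machine_says_fire and human_response != "veto"

    # A Patriot-style window of a few seconds in which no answer arrives:
    print(resolve(True, None, timeout_expired=True))     # True -- fires
    print(resolve(True, "veto", timeout_expired=False))  # False -- vetoed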

    The point is not that the machines are taking over, Matrix-style, but that what it means to have humans “in the loop” of decision making in war is being redefined, with the authority and autonomy of machines expanding. There are myriad pressures to give war-bots greater and greater autonomy. The first is simply the push to make more capable and more intelligent robots. But as psychologist and artificial intelligence expert Robert Epstein notes, this comes with a built-in paradox. “The irony is that the military will want [a robot] to be able to learn, react, etc., in order for it to do its mission well. But they won’t want it to be too creative, just like with soldiers. But once you reach a space where it is really capable, how do you limit them? To be honest, I don’t think we can.”

    Simple military expediency also widens the loop. To achieve any sort of personnel savings from using unmanned systems, one human operator has to be able to “supervise” (as opposed to control) a larger number of robots. For example, the Army’s long-term Future Combat Systems plan calls for two humans to sit at identical consoles and jointly supervise a team of 10 land robots. In this scenario, the humans delegate tasks to increasingly autonomous robots, but the robots still need human permission to fire weapons. There are many reasons, however, to believe that this arrangement will not prove workable.
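
    The "supervise rather than control" arrangement amounts to a shared permission queue, and a hypothetical sketch shows where it strains (the console details below are invented):

    Code:
    from collections import deque

    class FireDesk:
        """Two operators adjudicating fire requests from ten robots."""
        def __init__(self, operators=2):
            self.operators = operators
            self.pending = deque()

        def request_fire(self, robot_id, target):
            # The robot idles here until a human rules on its request.
            self.pending.append((robot_id, target))

        def tick(self, approve):
            # Each operator clears roughly one request per tick; when requests
            # arrive faster than that, the backlog (and every robot's wait) grows.
            for _ in range(min(self.operators, len(self.pending))):
                robot_id, target = self.pending.popleft()
                if approve(robot_id, target):
                    print(f"{robot_id}: weapons release approved on {target}")

    desk = FireDesk()
    for i in range(10):
        desk.request_fire(f"robot-{i}", "grid 4411")
    desk.tick(approve=lambda r, t: True)  # two requests clear; eight still wait

    With ten robots generating requests and two humans clearing at most two per tick, backlog is the steady state -- which is precisely the pressure toward more machine autonomy.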

    Researchers are finding that humans have a hard time controlling multiple units at once (imagine playing five different video games simultaneously). Even having human operators control two UAVs at a time rather than one reduces performance levels by an average of 50 percent. As a NATO study concluded, the goal of having one operator control multiple vehicles is “currently, at best, very ambitious, and, at worst, improbable to achieve.” And this is with systems that aren’t shooting or being shot at. As one Pentagon-funded report noted, “Even if the tactical commander is aware of the location of all his units, the combat is so fluid and fast paced that it is very difficult to control them.” So a push is made to give more autonomy to the machine.

    And then there is the fact that an enemy is involved. If the robots aren’t going to fire unless a remote operator authorizes them to, then a foe need only disrupt that communication. Military officers counter that, while they don’t like the idea of taking humans out of the loop, there has to be an exception, a backup plan for when communications are cut and the robot is “fighting blind.” So another exception is made.

    Even if the communications link is not broken, there are combat situations in which there is not enough time for the human operator to react, even if the enemy is not functioning at digital speed. For instance, a number of robot makers have added “counter-sniper” capabilities to their machines, enabling them to automatically track down and target with a laser beam any enemy that shoots. But those precious seconds while the human decides whether to fire back could let the enemy get away. As one U.S. military officer observes, there is nothing technical to prevent one from rigging the machine to shoot something more lethal than light. “If you can automatically hit it with a laser range finder, you can hit it with a bullet.”

    This creates a powerful argument for another exception to the rule that humans must always be “in the loop,” that is, giving robots the ability to fire back on their own. This kind of autonomy is generally seen as more palatable than other types. “People tend to feel a little bit differently about the counterpunch than the punch,” Noah Shachtman notes. As Gordon Johnson of the Army’s Joint Forces Command explains, such autonomy soon comes to be viewed as not only logical but quite attractive. “Anyone who would shoot at our forces would die. Before he can drop that weapon and run, he’s probably already dead. Well now, these cowards in Baghdad would have to pay with blood and guts every time they shot at one of our folks. The costs of poker went up significantly. The enemy, are they going to give up blood and guts to kill machines? I’m guessing not.”
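
    The "counterpunch" exception is simple to state as a rule, which is part of why it is so palatable. A hypothetical sketch:

    Code:
    def may_engage(fired_upon, human_authorized):
        """Return fire is autonomous; everything else still needs permission.
        (Hypothetical rule abstracted from the counter-sniper systems above.)"""
        if fired_upon:
            return True           # the exception: no human in the loop
        return human_authorized   # the rule: human permission required

    Each such clause added to the decision logic moves the default a little further from "never without a human."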

    Each exception, however, pushes one further and further from the absolute of “never” and instead down a slippery slope. And at each step, once robots “establish a track record of reliability in finding the right targets and employing weapons properly,” says John Tirpak, executive editor of Air Force Magazine, the “machines will be trusted.”

    The reality is that the human location “in the loop” is already becoming, as retired Army colonel Thomas Adams notes, that of “a supervisor who serves in a fail-safe capacity in the event of a system malfunction.” Even then, he thinks that the speed, confusion, and information overload of modern-day war will soon move the whole process outside “human space.” He describes how the coming weapons “will be too fast, too small, too numerous, and will create an environment too complex for humans to direct.” As Adams concludes, the new technologies “are rapidly taking us to a place where we may not want to go, but probably are unable to avoid.”

    The irony is that for all the claims by military, political, and scientific leaders that “humans will always be in the loop,” as far back as 2004 the U.S. Army was carrying out research that demonstrated the merits of armed ground robots equipped with a “quick-draw response.” Similarly, a 2006 study by the Defense Safety Working Group, in the Office of the Secretary of Defense, discussed how the concerns over potential killer robots could be allayed by giving “armed autonomous systems” permission to “shoot to destroy hostile weapons systems but not suspected combatants.” That is, they could shoot at tanks and jeeps, just not the people in them. Perhaps most telling is a report that the Joint Forces Command drew up in 2005, which suggested that autonomous robots on the battlefield would be the norm within 20 years. Its title is somewhat amusing, given the official line one usually hears: Unmanned Effects: Taking the Human Out of the Loop.

    So, despite what one article called “all the lip service paid to keeping a human in the loop,” autonomous armed robots are coming to war. They simply make too much sense to the people who matter.
    Last edited by Lemur; 04-24-2009 at 04:13.
