Saturday, October 15, 2011

worst news in robotics...ever

Okay, so this isn't Skynet, but this is seriously not good. It reveals not only the weaknesses in our information systems, but a bigger problem with risk assessment and management in U.S. defense policy.

The "little" problem: there is a virus that has infected the U.S. Predator drone program. The computers that control the drones have been infected, and no one knows quite what the virus does. One thing that officials know is that the virus is logging all the keystrokes made on those computers. That's a problem when you're talking about classified movements and operations.

The HUGE problem: U.S. defense officials are "concerned but not panicked" about a virus that they do not understand and have failed to eradicate. They continue to fly drones in the compromised program, which raises questions about the military's priorities. If we could build a hypothetical mind-controlling virus and infect U.S. soldiers with it, would the Army continue to deploy those soldiers? Of course not. And yet, while we don't know what's going on with the drones--and therefore cannot even estimate the risks of deploying them--there has been no reduction in the program. This does not bode well for those of us who, ever since the Challenger blew up, have hoped that the U.S. government would improve its risk assessment practices. When we're talking about robotic planes that carry weapons, we need to be smarter than this.

Wednesday, August 10, 2011

science and the public

This past winter I wrote a paper arguing that more scientists involved with robotics and AI need to engage with the public because, at present, most of those who do say little other than "the world will be magical," wave their hands about, and then pass a tin cup. This definitely has some social value--it would be nice, for example, if Singularity University does more than simply create entrepreneurial opportunities and provide Ray Kurzweil with a check, though I think the jury is still out on that--but I am skeptical that such voices are the only ones we need to hear. After all, robotics and AI pose serious dangers to individuals and societies.

As a big fan of both robotics and AI, I'm enthusiastic about continued research, but I think we need to be a lot more cautious about how we deploy these technologies, especially as they enhance certain aspects of military engagement (killing people from far away) and radically reshape public and private flows of information (through robotic spies or through AI algorithms).

The paper I wrote, "Martial Bliss: War and Peace in Popular Science Robotics," will come out in a special edition of Philosophy & Technology later this year. My basic hope is that roboticists will start engaging with the public, working toward a broad coalition that can guide our technological choices (which I absolutely refuse to believe are predetermined). We need to talk about what we, as a society, think is worth having and then work toward that.

In happy news, some other folks are picking up on the same issue. The New York Times has just published an essay about the need for scientists to speak to the public, which, apparently, is largely ignorant of what scientists actually say or even of who counts as a living scientist. Of course, knowing what scientists believe might itself be a problem, as learning new (and correct) information actually causes some people to harden their faith in falsity (anyone who's ever discussed global warming, tax policy, or evolution by natural selection with the U.S. Republican party faithful will already know this).