The Atlantic has a long but engrossing piece on the impact of military and intelligence robotics on the ethics of combat.
To be fair, it goes well beyond just robots, also covering implants, digital enhancements and cybernetics. And if that sounds a bit like science fiction, the piece sticks to technology that is already available or just over the horizon, and to its hard-nosed implications.
One more human weak link is that robots will likely have better situational awareness, if they're outfitted with sensors that let them see in the dark or through walls, networked with other computers, and so on. This raises the following problem: could a robot ever refuse a human order, if it knows better? For instance, if a human orders a robot to shoot a target or destroy a safehouse, but the robot identifies the target as a child or the safehouse as full of noncombatants, could it refuse that order?
Does having the technical ability to collect better intelligence before we conduct a strike obligate us to do everything we can to collect that data? That is, would we be liable for not knowing things that we might have known by deploying intelligence-gathering robots?
It’s a long read but well worth it, as the piece examines the impact of cutting-edge war technology on everything from humanitarian law to winning the hearts and minds of the local population.
Link to The Atlantic ‘Drone-Ethics Briefing’.