Robot Soldiers and Ethical Issues
A New York Times article, "Taking Orders," makes very good points about how a robotic soldier might arguably be more ethically dispassionate than an angry or fearful, all-too-human soldier lashing out in the heat of battle. But the article then blurs the terminology by saying that making an ethical robot will raise MORAL issues.
Wait a minute: there IS a difference, and that's why there are DIFFERENT WORDS. The soldier shooting anybody who approaches in order to save their buddies - or even their buddies' dead bodies - from the 'bad guys' (as happened in Mogadishu) might be said to be very moral (even a 'hero') within their self-justified, home-team, values-based frame of reference, but they are not being ethical.
The article says this well, and it quotes a research reference that is quite dramatic. But as long as folks keep mushing the terminology by using 'ethical', 'moral', and 'values' in the same sentence as though they were synonyms, it's going to remain impossible to get past the quite predictable conclusion that every difficult contention (with or without robots) must necessarily degrade into a 'moral dilemma' of different self-serving values butting heads.
Unfortunately, that values-constrained level of problem analysis often ends in the conclusion that there is no fair answer - just different values, pick your side - and never really examines the ethics of conflict resolution that follow unbiased principles and multilateral processes (with or without force). Yikes! We can do better than '24'; we don't need more self-justified judge-jury-torturer-executioners.
I wonder what Noam Chomsky would say about this. Is a linguistic problem - using 'morals' and 'ethics' interchangeably - contributing to a major social problem with defining means for principled conflict resolution, with or without robots?