I'm concerned about the robots that are "just following orders". For example, what will an autonomous car do when its brakes fail and it has to decide in real time whether to run over a pedestrian or throw itself and its passengers into a ditch? Will it assign a numerical value to the lives of its passengers vs the life of the pedestrian and sacrifice the party of lesser value? How would it do that?

I'm supposing there can be 'good robots' and 'evil robots' depending on their programmer, and that 'good robots' can be corrupted while 'evil robots' can be saved and redeemed.
It's a real-life trolley problem and it's coming your way soon.
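For what it's worth, the "assign a numerical value" version of that decision is easy to sketch, and that's exactly what makes it unsettling. Here's a toy Python example of expected-harm minimization; everything in it (the option list, the probabilities, the weights) is made up for illustration, and I'm not claiming any manufacturer actually does this:

```python
# A deliberately naive sketch of cost-based collision avoidance.
# All numbers and names below are hypothetical, invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_pedestrian_fatality: float  # estimated chance the pedestrian dies
    p_passenger_fatality: float   # estimated chance each passenger dies
    n_passengers: int

def expected_harm(opt: Option,
                  w_pedestrian: float = 1.0,
                  w_passenger: float = 1.0) -> float:
    """Weighted expected fatalities. The weights are the morally loaded
    part: w_passenger > w_pedestrian encodes 'protect the occupants first'."""
    return (w_pedestrian * opt.p_pedestrian_fatality
            + w_passenger * opt.p_passenger_fatality * opt.n_passengers)

options = [
    Option("stay on course, hit pedestrian",
           p_pedestrian_fatality=0.9, p_passenger_fatality=0.0, n_passengers=2),
    Option("swerve into ditch",
           p_pedestrian_fatality=0.0, p_passenger_fatality=0.2, n_passengers=2),
]

# With equal weights the car sacrifices itself (0.4 < 0.9 expected deaths).
# Raise w_passenger above 2.25 and the identical code mows the pedestrian down.
choice = min(options, key=expected_harm)
print(choice.name)
```

Notice that the "good robot" and the "evil robot" here are the same program; the only difference is a weight someone typed into a config. That's the scary part: the moral choice gets buried in a parameter, not in any line of logic you could point to.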