Everyone here must know that machines have not the slightest notion of whether humans are harmed or not. But supposing such awareness were possible, would Asimov's Three Laws be a sensible way to control them?
I'll use a 'Whitebox' to avoid spoilers for those who've not yet seen the film and might eventually watch it on television.
For those who don't know Whiteboxes, you highlight them to see what's written. To write your own, you type code and then white, each in square brackets, before the text; then /white and /code, also in square brackets, to end it.
Code:
[white]Obviously, one major flaw is shown up by Viki's interpretation of the First Law. She may harm humans in the belief that she is preventing other humans from coming to harm.[/white]
------------------
A view from the UK