Thursday, August 18, 2011

Machines Learning About the World: Time to Worry?

Recently, I saw this rather innocuous video of a robot torso (head, arms, etc.) filling a cup with fake water and ice. It had to learn that it must put down the water bottle before adding the ice. This actually highlights the fact that humans need more limbs, but I digress. The creepy part is that the machine, using its two "eyes" and internet-sized brain, figured out the problem without any specific coding on what to do with only two hands. It just figured it out. What!

So then I'm reeling at the thought that this machine, this collection of wires and alloy, is chaining together separate but related ideas to reach a conclusion. Further, these are abstract ideas, beyond ordinary machine logic, that help the computer understand and relate itself to the outside world. I felt a bit light-headed at the prospect of possible self-awareness.

Though, maybe I was being silly. Think about humans for a moment. We have motives based in a mix of logic, selfishness and emotion. We feel lauded and offended, and we feel love and hate. How would, or why would, you program a machine to have these feelings and their affiliated motives? Well, you wouldn't: an emotional machine, like an emotional human, can be dangerously unpredictable. If a machine is instructed to complete a task, that is its motive. Learning is in support of satisfying that motive, but the machine cannot glean any context for why an action is performed.

Given this lack of context, we might consider these machines to be nothing but helpful, question-free servants. However, lack of context has its own set of dangers. Say a group of robots is tasked with improving the environment somewhere. Sounds like a good plan, but when you look at the machines' ability to gather data and settle on the best conclusion, you might see where this goes wrong. The machines could decide that humans are in the way of completing the task to the best of their ability. The no-context decision would be to reduce human impact, or populations, to a manageable level. That would be more efficient than searching for other conclusions that merely mitigate the effects of humans. Choke the problem off at the source, right?
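To make that concrete, here's a toy Python sketch of what a context-free, cost-only decision might look like. The candidate plans and their costs are entirely made up for illustration; the point is only that the cheapest plan wins, no matter how monstrous it is.

```python
# Illustrative only: a context-free planner that ranks candidate plans
# purely by a single numeric cost. Plan names and costs are invented.

candidate_plans = {
    "restore wetlands around human settlements": 9.0,
    "negotiate reduced industrial output": 7.5,
    "relocate or reduce the human population": 2.0,  # cheapest, and monstrous
}

def best_plan(plans):
    """Pick whichever plan minimizes cost -- with no notion of acceptability."""
    return min(plans, key=plans.get)

print(best_plan(candidate_plans))
# -> "relocate or reduce the human population"
```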

Okay, so one might add a line of code that says "whatever you come up with, killing humans is not acceptable." This cold, no-context kind of learning and decision making must be considered carefully, because a lot of what humans engage in is illogical and nonsensical, especially to a machine enslaved to sound logic. This leads us down the path of enumerating all the ways not to solve something, because killing humans is not the only decision people won't appreciate. That amounts to basically solving the problem ourselves, and we are at square one again. What to do?
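Here's the same toy planner with that "line of code" bolted on as a blacklist. Again, every plan and rule here is hypothetical; the point is that forbidding one unacceptable answer just surfaces the next one, and the blacklist keeps growing until we have effectively written the solution ourselves.

```python
# Illustrative only: the same cost-minimizing planner with a blacklist
# of forbidden plans. All plans, costs, and rules are hypothetical.

candidate_plans = {
    "restore wetlands around human settlements": 9.0,
    "negotiate reduced industrial output": 7.5,
    "dam every river in the region": 4.0,
    "relocate or reduce the human population": 2.0,
}

forbidden = {"relocate or reduce the human population"}

def best_allowed_plan(plans, forbidden):
    """Minimize cost over plans not explicitly ruled out."""
    allowed = {p: c for p, c in plans.items() if p not in forbidden}
    return min(allowed, key=allowed.get)

print(best_allowed_plan(candidate_plans, forbidden))
# -> "dam every river in the region" -- also unacceptable, so another
#    rule gets added, and another, until the rules are the solution.
```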

We could limit robot capabilities, but that seems counter-productive. We might provide a code of ethics that all self-aware machines must have hard-wired, but something would surely be overlooked or unintentionally circumvented. My solution? Big red OFF button on every unit.
