Thursday, March 30, 2017

Looking at the World with Human Eyes

A few weeks ago, I wrote a piece on 3 Quarks Daily arguing that it is a mistake to imagine realistic AI as hyper-rational, and that AI which is convincingly "real" will in fact be convincingly irrational, albeit not necessarily in the same ways that humans are.

This article reports on an attempt to develop a machine learning system for image analysis that makes mistakes similar to those humans make. This is much more than just a "cute" idea. As the report says, quoting David Cox, the study's lead researcher:

"Algorithms that make decisions in a similar way to us could also be easier to understand and trust, says Cox. Computer systems sometimes make mistakes that humans wouldn’t – like Tesla’s Autopilot system failing to notice a white trailer against a bright sky. Systems trained on brain data would make mistakes in a more human way. “And if you make mistakes that a human would make, humans will continue to trust that system,” says Cox."

Ultimately, the effort to make the irrationality of intelligent machines resemble that of humans will fail, because machines capable of autonomous learning will go in unpredictable directions. But it isn't a bad place to start.
