Monday, March 13, 2017

Artificial Stupidity

My new 3QD piece on the nature and future of AI:

"With exponential growth in computational power and availability of data, the unthinkable (literally, that which could not be thought) is now almost possible: Optimal – or near-optimal – choices can be calculated in real-time from vast amounts of data, even in some very complex tasks. And, thanks to the magic of machine learning, the mechanisms underlying these choices do not have to be specified by brain-limited humans; they can be inferred by the machines using the available data. So is AI finally going to give us the idealized rational agents of economists’ dreams? That is extremely doubtful! True, unlike the mechanisms of most human learning, the algorithms of machine learning are often based on rational objectives, but, like humans, machines must also learn from finite – albeit much larger – amounts of data. Thus, like humans, they too must fill in the gaps in data with heuristics – interpolating, extrapolating, simplifying, and generalizing just as humans do, but possibly in very different ways. And therein lies the rub! For now, machines try to learn something close to the human notion of rationality, which is already quite different from human thinking. But as intelligent machines progress to increasingly complex real-world problems and learn from increasingly complex data, the inferences they make will become less comprehensible, not more, because the complexity of the tasks will make the decision-making more opaque. And if machines are to become truly intelligent, they must become capable of learning rapidly like humans and other animals. But what they learn in that case will necessarily be even more biased by their priors and even less clearly interpretable to human observers – especially since many of these priors will themselves be acquired through learning."
