Page A7, April 8, 2016


Book shows how little we still know about machine learning

LAST month, when the self-learning algorithm AlphaGo triumphed over South Korean Go master Lee Se-Dol, it made sensational news. Some even saw the victory as a step toward humanity’s eventual enslavement by robots. Naturally, such alarmist views were often accompanied by calls to rein in these “thinking” machines before it’s too late.

There were also more optimistic predictions that “smarter” machines might be pressed into the service of human beings. This could be a boon in China’s own rapidly aging society. Such sentiments are, as a rule, inflamed by reporters who lack the sophistication to properly explain the situation. This inadequacy (as well as other motives) inclines them to interpret the result of the Go game as nothing short of a testament to the power of the machines.

Those looking for more sober analysis of the strides being made in artificial intelligence and machine learning should look to Pedro Domingos’ “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World.”

Domingos is a professor of computer science at the University of Washington and a fellow of the Association for the Advancement of Artificial Intelligence.

As Domingos explains, algorithms are “precise and unambiguous” instructions that tell computers exactly “what to do.”

As writing algorithms is a time-consuming and frequently counter-intuitive process, programmers will often build on each other’s work. This has led to the relatively rapid development of an entire “ecosystem” of increasingly sophisticated computer instructions.

A revolutionary concept in machine learning today is that computers may be capable of writing their own algorithms. This would require the existence of a “Master Algorithm.” Based on developments in big data, physics, biology, neuroscience and statistics, there are indications that a Master Algorithm is possible, according to Domingos. A machine learner can beat a human in one aspect: speed. A scientist may hope to discard or revise a few hundred hypotheses in a lifetime, but a machine learner can check a hypothesis in less than a second.
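
To make the speed contrast concrete, here is a toy sketch of my own (not from the book), in which a program scores a thousand candidate hypotheses of the form “the value exceeds threshold t” against labeled data in a fraction of a second:

    import time

    # Labeled examples: the hidden rule is "x > 37".
    data = [(x, x > 37) for x in range(200)]
    candidates = range(1000)   # a thousand candidate thresholds

    start = time.perf_counter()
    best = max(candidates,
               key=lambda t: sum((x > t) == label for x, label in data))
    elapsed = time.perf_counter() - start

    print(f"best threshold: {best}, "
          f"1,000 hypotheses checked in {elapsed:.4f} seconds")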

There are five competing schools of thought about machine intelligence, writes Domingos.

The symbolists reduce intelligence to symbol manipulation on the belief that learning can’t start from scratch — hence the inclusion of pre-existing knowledge into their model.

Symbolists use inverse deduction as their Master Algorithm. Where deduction derives conclusions from general rules, inverse deduction works backward: it figures out what knowledge is missing for a conclusion to follow from the known facts, then generalizes from the result.
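
As a toy sketch of the idea (my own illustration, not code from the book): given the fact that Socrates is human and the observation that Socrates is mortal, inverse deduction proposes the general rule that fills the gap.

    def inverse_deduce(fact, conclusion):
        """fact and conclusion are (entity, property) pairs. Propose the
        general rule that would let the conclusion be deduced."""
        entity_f, prop_f = fact
        entity_c, prop_c = conclusion
        if entity_f == entity_c:
            # Generalize: replace the shared individual with a variable.
            return f"for all X: {prop_f}(X) implies {prop_c}(X)"
        return None

    print(inverse_deduce(("Socrates", "human"), ("Socrates", "mortal")))
    # prints: for all X: human(X) implies mortal(X)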

This theory dates back to philosopher David Hume, one of the great empiricists and the patron saint of the symbolists. Hume asked a profound question: How can you generalize from what you’ve observed to what you haven’t experienced?

Every learning algorithm is, in one way or another, an attempt to answer this question.

Physicist David Wolpert’s answer is the “no free lunch” theorem: averaged over all possible problems, no learner outperforms random guessing. The pump of knowledge creation must therefore be primed with prior knowledge before data can do the rest.

While studying brain functioning in the 1940s, psychologist Donald Hebb proposed that when one neuron repeatedly takes part in firing another, the connection between them strengthens, a principle often summarized as “neurons that fire together wire together.” Connectionists use algorithms to “simulate” activity observed in the brain.

Computers can compensate for a lack of connections (compared with the brain) with speed.

For instance, while an organic brain might use 1,000 individual neurons to perform a particular task, a computer might use the same wire a thousand times to achieve similar results.
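
A minimal sketch of a Hebbian-style weight update (my own toy example, not code from the book) shows the principle at work: a connection grows stronger whenever its input unit and the output unit are active together.

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs = 4
    weights = np.zeros(n_inputs)
    learning_rate = 0.1

    # Toy patterns: the output unit fires when inputs 0 and 1 both fire.
    patterns = rng.integers(0, 2, size=(50, n_inputs)).astype(float)
    outputs = patterns[:, 0] * patterns[:, 1]

    for x, y in zip(patterns, outputs):
        weights += learning_rate * y * x   # fire together, wire together

    print(weights)  # connections 0 and 1 end up strongest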

Seeing natural selection as the engine of learning, evolutionaries view “genetic programming” as the basis of their Master Algorithm: They “evolve” computer programs in much the same way that organisms evolve in nature.

Evolutionaries rely on “genetic algorithms,” which are driven by a “fitness function”: a score given to each candidate program according to how well it accomplishes what its designers created it to do.

Genetic algorithms also use something like sexual reproduction to “mate” the fittest programs and produce “offspring” that contain slightly different qualities.
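
The whole cycle fits in a short sketch (my own illustration, with made-up details): candidate solutions are strings of bits, the fitness function counts the ones, and the fittest parents are “mated” and mutated to produce the next generation.

    import random

    random.seed(1)
    LENGTH, POP, GENERATIONS = 20, 30, 40

    def fitness(genome):
        return sum(genome)            # toy goal: evolve all ones

    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(POP)]

    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]            # survival of the fittest
        children = []
        while len(children) < POP - len(parents):
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(1, LENGTH)      # crossover ("mating")
            child = mom[:cut] + dad[cut:]
            i = random.randrange(LENGTH)           # a point mutation
            child[i] = 1 - child[i]
            children.append(child)
        population = parents + children

    print(max(fitness(g) for g in population))     # climbs toward 20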

Reverend Thomas Bayes (1701-1761) created an equation for incorporating new evidence into existing beliefs by recognizing the inherent uncertainty and incompleteness of all knowledge. His disciples, therefore, see learning as “a form of uncertain inference.”

In other words, understanding is naturally flawed and partial. The Bayesians’ challenge is to separate data from the surrounding noise and build systems that can cope with incompleteness. Their Master Algorithm is based on Bayes’ theorem and its derivatives. Bayes’ theorem says that the strength of one’s belief in a hypothesis should be revised as new evidence comes in; Bayesians see learning as a specialized use of this theorem.
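
A small worked example (with numbers I have made up purely for illustration) shows the theorem, P(H|E) = P(E|H)P(H)/P(E), in action:

    # Updating belief in a hypothesis H after observing evidence E.
    prior = 0.01            # initial belief: P(H)
    likelihood = 0.9        # P(E|H): evidence is probable if H is true
    false_positive = 0.05   # P(E|not H): evidence possible even if H is false

    evidence = likelihood * prior + false_positive * (1 - prior)   # P(E)
    posterior = likelihood * prior / evidence                      # P(H|E)

    print(f"belief revised from {prior:.2%} to {posterior:.2%}")
    # belief revised from 1.00% to 15.38%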

Analogizers see “recognizing similarities” as central to learning. Their challenge is to determine just how alike two objects are, and their Master Algorithm is the “support vector machine.” While “neural networks” played a larger role in the early years of machine learning, analogy offers exciting possibilities for this Master Algorithm.
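
The simplest analogizer method, the nearest-neighbor algorithm, captures the spirit in a few lines. This toy sketch is my own (it is not a support vector machine): a new case simply takes the label of its most similar known example.

    import math

    examples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
                ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

    def classify(point):
        # Similarity here is just Euclidean distance.
        nearest = min(examples, key=lambda ex: math.dist(ex[0], point))
        return nearest[1]

    print(classify((1.1, 0.9)))  # cat: closest to the cat examples
    print(classify((4.9, 5.1)))  # dog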

The Master Algorithm, according to Domingos, would have to unify the existing models without inheriting any of their weaknesses.

Creating this would require “meta-learning” — learning about learning or learners — which would also require running and combining multiple models.

To combine different learners quickly, you might run the learners and tally their results.
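
In its simplest form, that tally is just a majority vote, as in this brief sketch of my own:

    from collections import Counter

    def majority_vote(predictions):
        """predictions: one label per learner."""
        return Counter(predictions).most_common(1)[0][0]

    # Three hypothetical learners disagree on an input; the tally decides.
    votes = ["spam", "spam", "not spam"]
    print(majority_vote(votes))  # spam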

A world dominated by smarter learning machines, while it might free us from some drudgery, would not be an unmixed blessing.

Each time you use a computer, you are teaching it more about you. This encroachment on your privacy carries risks, whether you are at work or at home. But however capable a Master Algorithm might prove to be, it will remain dependent on human beings.

“Machine learning is a kind of knowledge pump: We can use it to extract a lot of knowledge from data, but first we have to prime the pump,” as Domingos points out.

Rather than fear a robot apocalypse, we would do better to worry about people’s effective enslavement by the very social tools that promised to empower them by making them more connected.




 
