SELF-TAUGHT
Gary H. Anthes. Computerworld. Framingham: Feb 6, 2006. Vol. 40, Iss. 6; pg. 28, 1 pg.
Copyright Computerworld Inc. Feb 6, 2006
[Headnote]
Machine-learning techniques have been used to create self-improving software for
decades, but recent advances are bringing these tools into the mainstream. BY GARY H.
ANTHES
FUTURE WATCH




[Photograph]
Stanford professor SEBASTIAN THRUN with "Stanley," the car that used machine-
learning techniques to drive itself 132 miles across the desert.



ATTEMPTS TO create self-improving software date to the 1960s. But "machine
learning," as it's often called, has remained mostly the province of academic researchers,
with only a few niche applications in the commercial world, such as speech recognition
and credit card fraud detection. Now, researchers say, better algorithms, more powerful
computers and a few clever tricks will move it further into the mainstream.

And as the technology grows, so does the need for it. "In the past, someone would look at
a problem, write some code, test it, improve it by hand, test it again and so on," says
Sebastian Thrun, a computer science professor at Stanford University and the director of
the Stanford Artificial Intelligence Laboratory. "The problem is, software is becoming
larger and larger and less and less manageable. So there's a trend to make software that
can adapt itself. This is a really big item for the future."

Thrun used several new machine-learning techniques in software that literally drove an
autonomous car 132 miles across the desert to win a $2 million prize for Stanford in a
recent contest put on by the Defense Advanced Research Projects Agency. The car
learned road-surface characteristics as it went. And machine-learning techniques gave his
team a productivity boost as well, Thrun says. "I could develop code in a day that would
have taken me half a month to develop by hand," he says.

Computer scientist Tom Mitchell, director of the Center for Automated Learning and
Discovery at Carnegie Mellon University, says machine learning is useful for the kinds of
tasks that humans do easily - speech and image recognition, for example - but that they
have trouble explaining explicitly in software rules. In machine-learning applications,
software is "trained" on test cases devised and labeled by humans, scored so it knows
what it got right and wrong, and then sent out to solve real-world cases.
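
To make that train-score-deploy loop concrete, here is a minimal sketch in Python. The toy feature vectors and the nearest-centroid rule are invented for illustration; they are not the CMU group's system.

import numpy as np

# Training cases devised and labeled by humans: feature vectors and 0/1 labels.
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([0, 0, 1, 1])

# "Training": compute one centroid per class from the labeled cases.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    # Assign a new case to the class with the nearest centroid.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Scoring on held-out cases shows the developer what the model gets right and wrong.
X_test = np.array([[0.15, 0.15], [0.85, 0.85]])
y_test = np.array([0, 1])
accuracy = np.mean([predict(x) == y for x, y in zip(X_test, y_test)])
print(f"held-out accuracy: {accuracy:.2f}")

# Only then is the model "sent out" to label real-world cases.
print("real-world case labeled as:", predict(np.array([0.7, 0.95])))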

Mitchell is testing the concept of having two classes of learning algorithms in essence
train each other, so that together they can do better than either would alone. For example,
one search algorithm classifies a Web page by considering the words on it. A second one
looks at the words on the hyperlinks that point to the page. The two share clues about a
page and express their confidence in their assessments.

Mitchell's experiments have shown that such "co-training" can reduce errors by more
than a factor of two. The breakthrough, he says, is software that learns from training
cases labeled not by humans, but by other software.
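
The mechanics of that loop can be sketched briefly. The pages, keywords, word-counting learners and confidence threshold below are all assumptions made for illustration, not Mitchell's actual classifiers.

from collections import Counter

# Human-labeled pages: (words on the page, words on inbound links, label).
labeled = [
    (["faculty", "research", "course"], ["professor", "homepage"], "academic"),
    (["sale", "cart", "checkout"], ["shop", "deals"], "commercial"),
]
# Unlabeled pages the two learners will label for each other.
unlabeled = [
    (["syllabus", "lecture", "research"], ["professor", "teaching"]),
    (["discount", "checkout", "shipping"], ["deals", "buy"]),
]

def train(examples, view):
    """Count word occurrences per label for one view (0 = page text, 1 = link text)."""
    counts = {}
    for example in examples:
        counts.setdefault(example[2], Counter()).update(example[view])
    return counts

def classify(model, words):
    """Return the best label and a crude confidence based on word overlap."""
    scores = {label: sum(counter[w] for w in words) for label, counter in model.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return best, scores[best] / total

for _ in range(2):                                   # a couple of co-training rounds
    models = [train(labeled, 0), train(labeled, 1)]
    remaining = []
    for page_words, link_words in unlabeled:
        votes = [classify(m, w) for m, w in zip(models, (page_words, link_words))]
        label, confidence = max(votes, key=lambda v: v[1])
        if confidence > 0.8:                         # a confident label from one view
            labeled.append((page_words, link_words, label))   # trains the other
        else:
            remaining.append((page_words, link_words))
    unlabeled = remaining

print(f"{len(labeled)} labeled examples after co-training, {len(unlabeled)} left over")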

Stuart Russell, a computer science professor at the University of California, Berkeley, is
experimenting with languages in which programmers write code for the functions they
understand well but leave gaps for murky areas. Into the gaps go machine-learning tools,
such as artificial neural networks.

Russell has implemented his "partial programming" concepts in a language called Alisp,
an extension of Lisp. "For example, I want to tell you how to get to the airport, but I don't
have a map," he says. "So I say, 'Drive along surface streets, stopping at stop signs, until
you get to a freeway on-ramp. Drive on the freeway till you get to an airport exit sign.
Come off the exit and drive along surface streets till you get to the airport.' There are lots
of gaps left in that program, but it's still extremely useful." Researchers specify the
learning algorithms at each gap, but techniques might be developed that let the system
choose the best method, Russell says.
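
Alisp itself is a Lisp extension, but the shape of a partial program can be sketched in Python: the route-following routine is written by hand, while the choice of street at each block is a gap filled by a simple trial-and-error learner. The toy road network, reward and update rule here are invented for illustration.

import random

AIRPORT = 5                       # blocks from the starting point
values = {}                       # learned value of each (position, street) choice

def choose(position):
    """The gap left in the program: pick a street, preferring past successes."""
    options = ["main_st", "side_st"]
    if random.random() < 0.2:                            # occasional exploration
        return random.choice(options)
    return max(options, key=lambda s: values.get((position, s), 0.0))

def drive_to_airport():
    """The part the programmer understands well: the overall route."""
    position, route = 0, []
    while position < AIRPORT:
        street = choose(position)                        # learned decision fills the gap
        route.append((position, street))
        position += 2 if street == "main_st" else 1      # main street covers more ground
    return route

for episode in range(200):
    route = drive_to_airport()
    reward = -len(route)                                 # fewer steps is better
    for step in route:                                   # credit every choice on the route
        values[step] = values.get(step, 0.0) + 0.1 * (reward - values.get(step, 0.0))

print("preferred street at the start:",
      max(["main_st", "side_st"], key=lambda s: values.get((0, s), 0.0)))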

The computationally intensive nature of machine learning has prompted Yann LeCun, a
professor at New York University's Courant Institute of Mathematical Sciences, to invent
"convolutional networks," a type of artificial neural network that he says uses fewer
resources and works better than traditional neural nets for applications like image
recognition. With most neural nets, the software must be trained on a huge number of
cases for it to learn the many variations - size and position of an object, angle of view,
background and so on - it's likely to encounter.

LeCun's technique, which is used today in bank check readers and airport surveillance
systems, divides each image of interest into small regions - a nose, say - and then
combines them to produce higher-level features. The result is a more flexible system that
requires less training, he says.
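
The two basic operations are easy to sketch: a small filter scans every local region of the image, and pooling keeps only the strongest response in each neighborhood, so a detected feature survives small shifts in position. The 6x6 image and the single hand-set edge filter below are toy assumptions; a real convolutional network stacks many layers of learned filters.

import numpy as np

image = np.zeros((6, 6))
image[:, 3] = 1.0                       # a vertical edge somewhere in the image

filt = np.array([[-1.0, 1.0],           # a 2x2 detector for vertical edges
                 [-1.0, 1.0]])

def convolve(img, f):
    """Slide the small filter over every local region of the image."""
    h, w = img.shape[0] - f.shape[0] + 1, img.shape[1] - f.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + 2, j:j + 2] * f)
    return out

def max_pool(fmap, size=2):
    """Keep only the strongest response in each neighborhood, giving shift tolerance."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

feature_map = convolve(image, filt)     # local, position-by-position responses
pooled = max_pool(feature_map)          # a smaller, shift-tolerant summary
print(pooled)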

Intelligent Design - Not

Meanwhile, research is pushing forward in a branch of machine learning called genetic
programming (GP), in which software evolves in a Darwinian fashion. Multiple versions
of a program - often thousands of them, generated at random - are set to work on a problem.
Most of them do poorly, but evolutionary processes pick two of the best and combine
them to produce a better generation of programs. The process continues for hundreds of
generations with no human intervention, and the results improve each time.
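
A toy version of that loop fits in a few lines. The tiny expression trees, the target function (x*x + x) and the crossover rule are invented for illustration and bear no relation to the circuit or controller runs described here.

import random

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "-": lambda a, b: a - b}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=2):
    """Generate a small random program: an operator applied to two subtrees."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    """Fitness: how far the program is from the target x*x + x on sample points."""
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def crossover(a, b):
    """Combine two parents by grafting a subtree of b onto a copy of a."""
    if not isinstance(a, list) or not isinstance(b, list):
        return random.choice([a, b])
    child = list(a)
    child[random.choice([1, 2])] = b[random.choice([1, 2])]
    return child

population = [random_tree() for _ in range(200)]      # generation 0: random programs
for generation in range(50):
    population.sort(key=error)                        # most do poorly...
    parents = population[:2]                          # ...keep the two best
    population = parents + [crossover(*random.sample(parents, 2))
                            for _ in range(198)]      # breed the next generation

population.sort(key=error)
print("best error:", error(population[0]), "program:", population[0])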

GP pioneer John Koza, a consulting professor in electrical engineering at Stanford, has
used the method to design circuits, controllers, optical systems and antennas that perform
as well as or better than those with patented designs. He recently was awarded a patent
for a controller design created entirely by GP.

It is, like biological evolution, a slow process. Until recently, computer power was too
expensive for GP to be practical for complex problems. Koza can do simple problems on
laptop PCs in a few hours, but the controller design took a month on a 1,000-node cluster
of Pentium processors.

"We started GP in the late 1980s, and now we have 1 million times more computer
power," Koza says. "We think sometime [within] 10 years we ought to be able to play in
the domain of real engineers."
