3.19.2016

Deep Learning

When I was a PhD student in Educational Research, one of the most important pedagogical concepts imprinted on us (for both theoretical and practical reasons) was the notion of "deep learning". The concept was developed by professors Marton and Säljö in the late 1970s, and I believe it has spread among at least Swedish teachers. They noticed two different approaches to learning among students: those who adopted a "deep" approach and engaged in an active search for meaning, and those who chose a "surface" approach, focusing on memorizing the parts of the material they thought they might be questioned about later.

The Wikipedia entry for deep learning leads to the more recent concept: a branch of a broader family of machine learning methods based on learning representations of data. The approach is founded on churning through heaps of data, using multiple levels of representation to find patterns. This is very far from the pedagogical concept, which leaves me a bit confused.
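The "multiple levels of representation" idea can be sketched in a few lines of code. This is a toy illustration with made-up layer sizes and random weights, not any real system: each layer transforms the previous layer's output into a new, more abstract representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # a simple non-linearity between layers
    return np.maximum(0, x)

# a made-up input of 8 raw features
x = rng.normal(size=8)

# two stacked layers: each one re-represents the output of the previous
W1 = rng.normal(size=(8, 5))   # first level of representation
W2 = rng.normal(size=(5, 3))   # second, more abstract level

h1 = relu(x @ W1)   # low-level features
h2 = relu(h1 @ W2)  # higher-level features built from the low-level ones

print(h1.shape, h2.shape)  # (5,) (3,)
```

Training, of course, is about adjusting those random weights so that the learned representations actually capture useful patterns in the data.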

What the two uses have in common is the association of depth with something positive when it comes to learning. To have deep knowledge of something is generally considered good, even if it can sometimes earn you a reputation as a nerd. Very few people I know, however, would like to be considered shallow.
From Berlin 2015
Some consider "deep learning" to be nothing more than a re-branding of neural networks.

Wired stated that 2015 was the year when Artificial Intelligence finally entered the everyday world, pointing at examples such as Facebook's face recognition, Microsoft's Skype translation, Google's Android voice recognition and improvements to its search engine, and Twitter's pornography identification (to give users a chance to block it).

The hype was reinforced by Google going on a shopping spree, buying the London-based DeepMind and the Oxford University spin-off companies Dark Blue Labs and Vision Factory in 2014. Its concentration of machine learning researchers made several people raise their eyebrows, among them Antonio Regalado, senior editor for biomedicine at MIT Technology Review:

"Certainly, large companies wouldn’t be spending so heavily to monopolize talent in artificial intelligence unless they believed that these computer brains will give them a powerful edge. It may sound like a movie plot, but perhaps it’s even time to wonder what the first company in possession of a true AI would do with the power that it provided." 

At least one prominent researcher has kept a foot in academia: Yann LeCun. He is head of the Artificial Intelligence Research Lab at Facebook and a professor at New York University. He takes his role as "public intellectual" seriously and often speaks about his work, as in this interview for the IEEE Spectrum journal.

What might become a nice combination of the two uses of "deep learning" is the recent open-source development in this area. Google has provided access to the software engine that drives its deep learning services, and Facebook has open-sourced the designs for the custom-built hardware server that drives its deep learning work. Elon Musk has launched the nonprofit initiative OpenAI.

Google is also offering a free online class about deep learning. It would of course be interesting to see what kind of learning approach the students take on that subject and what kind of teaching approach is used.

Meanwhile, the debate is still ongoing regarding to what extent machine learning should try to mimic human learning, and whether it's possible to make computers truly understand human language. Well, since it's not obvious whether we humans understand anything at all, or each other, it may take a while.

1 comment:

jeppen said...

I took AI courses as a CS student in the late 1990s, and around that time the world chess champion was beaten by IBM's Deep Blue, in a fairly brute-force manner. But I hadn't felt much in the way of "buzz" regarding AI since then, until maybe half a year ago. Suddenly, it's all the rage again. Perhaps it's justifiable this time, since AI seems to be making big strides on multiple fronts.

A year ago, the Chinese game of Go was still something computers weren't very good at, and they couldn't hope to beat a professional without being handed handicap stones. Then AlphaGo came along, built by Google DeepMind. And suddenly, a month ago, it beat the world champion, Lee Sedol, in a five-game match, making moves that might actually enhance human strategies and understanding of the game. And it had learned its skills, enabling it to exhibit what looks like innovation and creativity, simply by playing against itself over and over again. Quite amazing.

Perhaps the time has simply come and the hardware has become fast enough. Moore's law in the traditional sense is slowly coming to an end, but a consumer commodity, the graphics card, is still improving by leaps and bounds. And it seems the simple but massively parallel instructions used to calculate and render realistic graphics are extremely well suited to running neural networks. While Deep Blue was a multimillion-dollar IBM supercomputer with a bunch of custom chips, AlphaGo is a distributed computer with more ordinary CPUs and graphics chips. Still an expensive machine, to be sure, but mostly assembled from commodity parts. We might actually have gamer youth and more or less violent shooter games to thank for the current progress of AI.
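The point about graphics hardware can be made concrete with a toy sketch (the layer sizes and random data are made up): the core operation of a neural network layer is a matrix product, in which every output element is an independent dot product, exactly the kind of massively parallel arithmetic graphics chips excel at.

```python
import numpy as np

rng = np.random.default_rng(1)

# a made-up layer: batch of 64 examples, 256 inputs -> 128 outputs
X = rng.normal(size=(64, 256))
W = rng.normal(size=(256, 128))

# one output element at a time, the way a single core grinds through it
slow = np.empty((64, 128))
for i in range(64):
    for j in range(128):
        slow[i, j] = X[i] @ W[:, j]

# the same arithmetic expressed as one big matrix product; all
# 64 * 128 dot products are independent of each other, so parallel
# hardware such as a GPU can compute thousands of them at once
fast = X @ W

print(np.allclose(slow, fast))  # True
```

The same reasoning applies during training, which is why deep learning frameworks push exactly these matrix products onto graphics chips.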

It's almost a bit scary. What will happen when computers are simply smarter, and can build ever-better versions of themselves? As I watched the games unfold, the dreaded singularity came so much closer to reality. But at least until then, AIs will be of great help to humanity. Soon, I guess, we'll have simultaneous translation capabilities in our smartphones. That and other stuff that can bring us closer together, help us improve policy and revolutionize infrastructure. And yes, we might want to do some deep learning ourselves to keep up. Perhaps the AIs can help with that too.