
Friday, June 26, 2015

Deep down the rabbit hole: CVPR 2015 and beyond

CVPR is the premier Computer Vision conference, and it's fair to think of it as the Olympics of Computer Vision Research. This year it was held in my own back yard -- less than a mile away from lovely Cambridge, MA!  Plenty of my MIT colleagues attended, but I wouldn't be surprised if Google had the largest showing at CVPR 2015. I have been going to CVPR almost every year since 2004, so let's take a brief tour at what's new in the exciting world of computer vision research.

A lot has changed. Nothing has changed. Academics used to be on top, defending their Universities and the awesomeness happening inside their non-industrial research labs. Academics are still on top, but now defending their Google, Facebook, Amazon, and Company X affiliations. And with the hiring budget to acquire the best and a heavy publishing-oriented culture, don't be surprised if the massive academia exodus continues for years to come. It's only been two weeks since CVPR, and Google has since then been busy making ConvNet art, showing the world that if you want to do the best Deep Learning research, they are King.

An army of PhD students and Postdocs simply cannot defeat an army of Software Engineers and Research Scientists. Back in the day, students used to typically depart after a Computer Vision PhD (there used to be few vision research jobs and Wall Street jobs were tempting). Now the former PhD students run research labs at big companies which have been feverishly getting into vision. It seems there aren't enough deep experts to fill the deep demand.

Datasets used to be the big thing -- please download my data!  Datasets are still the big thing -- but we regret to inform you that your university’s computational resources won’t make the cut (but at Company X we’re always hiring, so come join us, and help push the frontier of research together).

Related Article: Under LeCun's Leadership, Facebook's AI Research Lab is beefing up their research presence

If you want to check out the individual papers, I recommend Andrej Karpathy's online navigation tool for CVPR 2015 papers, or take a look at the vanilla listing of CVPR 2015 papers on the CV Foundation website. Zoya Bylinskii, an MIT PhD Candidate, also put together a list of interesting CVPR 2015 papers.

The ConvNet Revolution: There's a pre-trained network for that

Machine Learning used to be the Queen. Machine Learning is now the King. Machine Learning used to be shallow, but today's learning approaches are so deep that the diagrams barely fit on a single slide. Grad students used to pass around jokes about Yann LeCun and his insistence that machine learning would one day do the work of the feature engineering stage. Now the tables have turned: insist that manual feature engineering is going to save the day, and the entire vision community gets to ignore you. Yann LeCun gave a keynote presentation with the intriguing title "What's wrong with Deep Learning," and Convolutional Neural Networks (also called CNNs or ConvNets) were everywhere at CVPR.

It used to be hard to publish ConvNet research papers at CVPR; now it's hard to get a CVPR paper accepted if you didn't at least compare against a ConvNet baseline. Got a new cool problem? Oooh, you didn’t try a ConvNet-based baseline? Well, that explains why nobody cares.

But it's not like the machines are taking over the job of the vision scientist. Today's vision scientist is much more of an applied machine learning hacker than anything else, and because of the strong CNN theme, it is much easier to understand and re-implement today's vision systems. What we're seeing at CVPR is essentially a revisiting of classic problems like segmentation and motion, using this new machinery. As Samson Timoner phrased it at the local Boston Vision Meetup, when Mutual Information was popular, the community jumped on that bandwagon -- it's ConvNets this time around. But it's not just a trend: the non-CNN competition is getting crushed.


Figure from Bharath Hariharan's Hypercolumns CVPR 2015 paper on segmentation using CNNs


There's still plenty to be done by a vision scientist, and a solid formal education in mathematics is more important than ever. We used to train via gradient descent. We still train via gradient descent. We used to drink coffee; now we all drink Caffe. But behind the scenes, it is still mathematics.

Related Page: Caffe Model Zoo where you can download lots of pretrained ConvNets

Deep down the rabbit hole


CVPR 2015 reminds me of the pre-Newtonian days of physics. A lot of smart scientists were able to predict the motions of objects using mathematics once the ingenious Descartes taught us how to embed our physical thinking into a coordinate system. And it's pretty clear that by casting your computer vision problem in the language of ConvNets, you are going to beat just about anybody doing computer vision by hand. I think of Yann LeCun (one of the fathers of Deep Learning) as a modern-day Descartes, if only because I think the ground-breaking work is right around the corner. His mental framework of ConvNets is like a much-needed coordinate system -- we might not know what the destination looks like, but we now know how to build a map.

Deep Networks are performing better every month, but I’m still waiting for Isaac to come in and make our lives even easier. I want a simplification. But I'm not being pessimistic -- there is a flurry of activity in the ConvNet space for a very good reason (in case you didn't get to attend CVPR 2015), so I'll just be blunt: ConvNets fuckin' work! I just want the F=ma of deep learning.


Open Source Deep Learning for Computer Vision: Torch vs Caffe

CVPR 2015 started off with some excellent software tutorials on day one. There is some great non-alpha deep learning software out there, and it has been making everybody's life easier. At CVPR, we had both a Torch tutorial and a Caffe tutorial. I attended the DIY Deep Learning Caffe tutorial and it was a full house -- standing room only for slackers like me who join the party only 5 minutes before it starts. Caffe is much more popular than Torch, but when talking to some power users of Deep Learning (like Andrej Karpathy and other DeepMind scientists), a certain group of experts seems to be migrating from Caffe to Torch.



Caffe is developed at Berkeley, has a vibrant community, Python bindings, and seems to be quite popular among university students. Prof. Trevor Darrell at Berkeley is even looking for a Postdoc to help the Caffe effort. If I were a couple of years younger and a fresh PhD, I would definitely apply.

Instead of following the Python trend, Torch is Lua-based. There is no need for an interpreter like Matlab or Python -- Lua gives you the magic console. Torch is heavily used by Facebook AI Research Labs and Google's DeepMind Lab in London.  For those afraid of new languages like Lua, don't worry -- Lua is going to feel "easy" if you've dabbled in Python, Javascript, or Matlab. And if you don't like editing protocol buffer files by hand, definitely check out Torch.

It's starting to become clear that the future power of deep learning is going to come with its own self-contained software package like Caffe or Torch, and not from a dying breed of all-around tool-belts like OpenCV or Matlab. When you share creations made in OpenCV, you end up sharing source files, but with the Deep Learning toolkits, you end up sharing your pre-trained networks.  No longer do you have to think about a combination of 20 "little" algorithms for your computer vision pipeline -- you just think about which popular network architecture you want, and then the dataset.  If you have the GPUs and ample data, you can do full end-to-end training.  And if your dataset is small/medium, you can fine-tune the last few layers. You can even train a linear classifier on top of the final layer, if you're afraid of getting your hands dirty -- just doing that will beat the SIFTs, the HOGs, the GISTs, and all that was celebrated in the past two decades of computer vision.
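To make that last recipe concrete, here is a minimal sketch of the "train a linear classifier on top of the final layer" approach, assuming you have already dumped fc7-style activations from a pre-trained network (e.g., via pycaffe) into .npy files -- all file names below are hypothetical:

```python
# Minimal sketch: a linear classifier on frozen CNN features.
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical dumps of final-layer activations from a pre-trained net.
X_train = np.load("fc7_train_features.npy")  # shape (N, 4096)
y_train = np.load("train_labels.npy")        # shape (N,)
X_test = np.load("fc7_test_features.npy")
y_test = np.load("test_labels.npy")

clf = LinearSVC(C=1.0)            # a plain linear SVM on deep features
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Even this lazy recipe tends to beat the carefully hand-engineered pipelines of the previous two decades.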

Related Article: Torch vs Theano on fastml.com
Related Code: Andrea Vedaldi's MatConvNet Deep Learning Library for MATLAB users

The way in which ConvNets are being used at CVPR 2015 makes me feel like we're close to something big.  But before we strike gold, ConvNets still feel like a Calculus of Shadows, merely "hoping" to get at something bigger, something deeper, and something more meaningful. I think the flurry of research which investigates visualization algorithms for ConvNets suggests that even the network architects aren't completely sure what is happening behind the scenes.

The Video Game Engine Inside Your Head: A different path towards Machine Intelligence


Josh Tenenbaum gave an invited talk titled The Video Game Engine Inside Your Head at the Scene Understanding Workshop on the last day of the CVPR 2015 conference. You can read a summary of his ideas in a short Scientific American article. While his talk might appear unconventional by CVPR standards, it is classic Tenenbaum. In his world, there is no benchmark to beat, no curves to fit to shadows, and if you allow my LeCun-Descartes analogy, then in some sense Prof. Tenenbaum might be the modern-day Aristotle. Prof. Jianxiong Xiao gave Josh a grand introduction, and he was probably right -- this is one of the most intelligent speakers you will find. He speaks a hundred words a second, and you can't help but feel your brain enlarge as you listen.

One of Josh's main research themes is going beyond the shadows of image-based recognition.  Josh's work is all about building mental models of the world, and his work can really be thought of as analysis-by-synthesis. Inside his models is something like a video game engine, and he showed lots of compelling examples of inferences that are easy for people, but nearly impossible for the data-driven ConvNets of today.  It's not surprising that his student is working at Google's DeepMind this summer.

A couple of years ago, Probabilistic Graphical Models (the marriage of graph theory and probabilistic methods) were all the rage. Josh gave us a taste of Probabilistic Programming, and while we're not yet seeing these new methods dominate the world of computer vision research, keep your eyes open. He mentioned a recent Nature paper (citation below) from another well-respected machine intelligence researcher, which should keep the trendsetters excited for quite some time. Just take a look at the bad-ass Julia code in that paper:

Probabilistic machine learning and artificial intelligence. Zoubin Ghahramani. Nature 521, 452–459 (28 May 2015) doi:10.1038/nature14541
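The Nature paper's examples are written in Julia; to give a flavor of the probabilistic programming style without reproducing them, here is a toy analysis-by-synthesis sketch in Python, where the "renderer" and the latent variable are entirely made up:

```python
# Toy probabilistic program: infer a latent scene parameter by
# analysis-by-synthesis using simple importance sampling.
import math
import random

def render(light_angle):
    # Hypothetical forward model: a brightness pattern cast by a light.
    return [math.cos(light_angle - x / 10.0) for x in range(10)]

observed = render(0.7)  # pretend this came from a real image

samples = []
for _ in range(10000):
    angle = random.uniform(0.0, math.pi)   # sample the prior
    prediction = render(angle)             # synthesize a scene
    err = sum((p - o) ** 2 for p, o in zip(prediction, observed))
    samples.append((angle, math.exp(-err / 0.01)))  # data likelihood

total = sum(w for _, w in samples)
posterior_mean = sum(a * w for a, w in samples) / total
print("inferred light angle ~", round(posterior_mean, 2))
```

The appeal is that the model is just a forward simulator plus a prior -- the inference machinery is generic.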

To see some of Prof. Tenenbaum's ideas in action, take a look at the following CVPR 2015 paper, titled Picture: A Probabilistic Programming Language for Scene Perception. Congrats to Tejas D. Kulkarni, the first author, an MIT student, who got the Best Paper Honorable Mention prize for this exciting new work. Google DeepMind, you're going to have one fun summer.

Object Detectors Emerge in Deep Scene CNNs

There were lots of great presentations at the Scene Understanding Workshop, and another talk that truly stood out was about a new large-scale dataset (MIT Places) and a thorough investigation of what happens when you train with scenes vs. objects.



Antonio Torralba from MIT gave the talk about the Places Database and an in-depth analysis of what is learned when you train on object-centric databases like ImageNet vs. scene-centric databases like MIT Places. You can check out the "Object Detectors Emerge" slides or their ArXiv paper to learn more. Great work by an up-and-coming researcher, Bolei Zhou!

Overheard at CVPR: ArXiv Publishing Frenzy & Baidu Fiasco 


In the long run, the recent trend of rapidly pushing preprints to ArXiv.org is great for academic and industry research alike. When you have a large collection of experts exploring ideas at very fast rates, waiting 6 months until the next conference deadline just doesn't make sense. The only downside is that it makes new CVPR papers feel old. It seems like everybody has already perused the good stuff the day it went up on ArXiv. But you get your "idea claim" without worrying that a naughty reviewer will be influenced by your submission. Double-blind reviewing, get ready for a serious revamp. We now know who's doing what, significantly before publication time. Students, publish-or-perish just got a new name. Whether the ArXiv frenzy is a good or a bad thing is up to you, and probably more a function of your seniority than anything else. But the CV buzz is definitely getting louder and will continue to do so.

The Baidu cheating scandal might appear to be big news for outsiders just reading the Artificial Intelligence headlines, but overfitting to the testing set is nothing new in Computer Vision. Papers get retracted, grad students often evaluate their algorithms on test sets too many times, and the truth is that nobody's perfect.  When it's important to be #1, don't be surprised that your competition is being naughty. But it's important to realize the difference between ground-breaking research and petty percentage chasing. We all make mistakes, and under heavy pressure, we're all likely to show our weaknesses.  So let's laugh about it.  Let's hire the best of the best, encourage truly great research, and stop chasing percentages.  The truth is that a lot of the top performing methods are more similar than different.
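If you want to see why evaluating on the test set too many times is dangerous, here is a toy simulation (all numbers are made up): adaptively picking the best of many worthless classifiers on one fixed test set reports a score well above the true 50% accuracy.

```python
# Toy simulation of test-set overfitting: the "best of 500 submissions"
# of pure coin-flip classifiers looks better than chance.
import random

test_labels = [random.randint(0, 1) for _ in range(1000)]  # synthetic test set

best_acc = 0.0
for _ in range(500):  # 500 "submissions", each just random guessing
    preds = [random.randint(0, 1) for _ in test_labels]
    acc = sum(p == y for p, y in zip(preds, test_labels)) / len(test_labels)
    best_acc = max(best_acc, acc)

print("best reported accuracy:", best_acc)  # typically ~0.54, not 0.50
```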


Conclusion
CVPR has been constantly growing in attendance. We now have PhD students, startups, professors, recruiters, big companies, and even undergraduates coming to the show. Will CVPR become the new SIGGRAPH?

CVPR attendance plot from Changbo Hu


ConvNets are here to stay, but if we want ConvNets to be more than a mere calculus of shadows, there's still ample work to be done. Geoff Hinton's capsules keep popping up during midnight discussions. "I want to replace unstructured layers with groups of neurons that I call 'capsules' that are a lot more like cortical columns" -- Geoff Hinton during his Reddit AMA. A lot of people (like Prof. Abhinav Gupta from CMU) are also talking about unsupervised CNN training, and my prediction is that learning large ConvNets from videos without annotations is going to be big at next year's CVPR.

Most importantly, when the titans of Deep Learning get to mention what's wrong with their favorite methods, I only expect the best research to follow. Happy computing and remember, never stop learning.


Wednesday, July 03, 2013

[CVPR 2013] Three Trending Computer Vision Research Areas

As I walked through the large poster-filled hall at CVPR 2013, I asked myself, “Quo vadis Computer Vision?" (Where are you going, computer vision?)  I see lots of papers which exploit last year’s ideas, copious amounts of incremental research, and an overabundance of off-the-shelf computational techniques being recombined in seemingly novel ways.  When you are active in computer vision research for several years, it is not rare to find oneself becoming bored by a significant fraction of papers at research conferences.  Right after the main CVPR conference, I felt mentally drained and needed to get a breath of fresh air, so I spent several days checking out the sights in Oregon.  Here is one picture -- proof that CVPR 2013 had more to offer than ideas!



When I returned from sight-seeing, I took a more circumspect look at the field of computer vision.  I immediately noticed that vision research is actually advancing and growing in a healthy way.  (Unfortunately, most junior students have a hard time determining which research papers are actually novel and/or significant.)  A handful of new research themes arise each year, and today I’d like to briefly discuss three new computer vision research themes which are likely to rise in popularity in the foreseeable future (2-5 years).

1) RGB-D input data is trending.  

Many of this year’s papers take a single 2.5D RGB-D image as input and try to parse the image into its constituent objects.  The number of papers doing this with RGB-D data is seemingly infinite.  Some other CVPR 2013 approaches don’t try to parse the image, but instead do something else like: fit cuboids, reason about affordances in 3D, or reason about illumination.  The reason why such inputs are becoming more popular is simple: RGB-D images can be obtained via cheap and readily available sensors such as Microsoft’s Kinect.  Depth measurements used to be obtained by expensive time-of-flight sensors (in the late 90s and early 00s), but as of 2013, $150 can buy you one of these depth-sensing bad-boys!  In fact, I had bought a Kinect just because I thought that it might come in handy one day -- and since I’ve joined MIT, I’ve been delving into the RGB-D reconstruction domain on my own.  It is just a matter of time until the newest iPhone has an on-board depth sensor, so the current line of research which relies on RGB-D input is likely to become the norm within a few years.

2) Mid-level patch discovery is a hot research topic.
Saurabh Singh from CMU introduced this idea in his seminal ECCV 2012 paper, and Carl Doersch applied it to large-scale Google Street View imagery in the “What makes Paris look like Paris?” SIGGRAPH 2012 paper.  The idea is to automatically extract mid-level patches (which could be objects, object parts, or just chunks of stuff) from images, with the constraint that those patches are the most informative ones.  Regarding the SIGGRAPH paper, see the video below.

Unsupervised Discovery of Mid-Level Discriminative Patches. Saurabh Singh, Abhinav Gupta, Alexei A. Efros. In ECCV, 2012.

Carl Doersch, Saurabh Singh, Abhinav Gupta, Josef Sivic, and Alexei A. Efros. What Makes Paris Look like Paris? In SIGGRAPH, 2012. [pdf]

At CVPR 2013, it was evident that the idea of "learning mid-level parts for scenes" is being pursued by other top-tier computer vision research groups.  Here are some CVPR 2013 papers which capitalize on this idea:

Blocks that Shout: Distinctive Parts for Scene Classification. Mayank Juneja, Andrea Vedaldi, CV Jawahar, Andrew Zisserman. In CVPR, 2013. [pdf]

Representing Videos using Mid-level Discriminative Patches. Arpit Jain, Abhinav Gupta, Mikel Rodriguez, Larry Davis. CVPR, 2013. [pdf]

Part Discovery from Partial Correspondence. Subhransu Maji, Gregory Shakhnarovich. In CVPR, 2013. [pdf]
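To make the patch-mining recipe concrete, here is a heavily simplified, single-round sketch (the published methods iterate between clustering and SVM training, use HOG descriptors, and cross-validate; the random descriptors and thresholds below are stand-ins):

```python
# Simplified sketch of discriminative mid-level patch mining:
# cluster candidate patches, then keep the clusters that a linear
# classifier can reliably separate from a generic "natural world" set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
pos_patches = rng.normal(0.0, 1.0, (500, 128))   # patch descriptors from the target image set
neg_patches = rng.normal(0.5, 1.0, (2000, 128))  # patches from unrelated images

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(pos_patches)

discriminative = []
for k in range(20):
    members = pos_patches[kmeans.labels_ == k]
    if len(members) < 5:
        continue  # too few members to be a reliable visual element
    X = np.vstack([members, neg_patches])
    y = np.array([1] * len(members) + [0] * len(neg_patches))
    svm = LinearSVC(C=0.1).fit(X, y)
    if svm.score(X, y) > 0.95:  # crude separability test
        discriminative.append(k)

print("discriminative clusters:", discriminative)
```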

3) Deep-learning and feature learning are on the rise within the Computer Vision community.
It seems that everybody at Google Research is working on Deep-learning.  Will it solve all vision problems?  Is it the one computational ring to rule them all?  Personally, I doubt it, but the rising presence of deep learning is forcing every researcher to brush up on their l33t backprop skillz.  In other words, if you don't know who Geoff Hinton is, then you are in trouble.

Tuesday, December 06, 2011

Graphics meets Big Data meets Machine Learning

We've all played Where's Waldo as children, and at least for me it was quite a fun game.  So today let's play an image-based Big Data version of Where's Waldo.  I will give you a picture, and you have to find it in a large collection of images!  This is a form of image retrieval, and this particular formulation is also commonly called "image matching."


The only catch is that you are only given one picture, and I am free to replace the picture with a painting or a sketch.  Any two-dimensional pattern is a valid query image, but the key thing to note is that there is only a single input image. Life would be awesome if Google's Picasa had this feature built in!


The classical way of solving this problem is via a brute-force nearest-neighbor algorithm -- one which doesn't match pixel patterns directly, but instead compares images using a state-of-the-art global descriptor such as GIST.  Back in 2007 at SIGGRAPH, James Hays and Alexei Efros showed this to work quite well once you have a very large database of images!  But the reason why the database had to be so large is that a naive nearest-neighbor algorithm is actually quite dumb.  The descriptor might be cleverer than matching raw pixel intensities, but for a machine, an image is nothing but a matrix of numbers, and nobody told the machine which patterns in the matrix are meaningful and which ones aren't.  In short, the brute-force algorithm works if there are similar enough images such that all parts of the input image will match a retrieved image.  But ideally we would like the algorithm to get better matches by automatically figuring out which parts of the query image are meaningful (e.g., the fountain in the painting) and which parts aren't (e.g., the reflections in the water).
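Here is a minimal sketch of that brute-force pipeline, with a tiny color-histogram descriptor standing in for GIST and a random image database standing in for a real one:

```python
# Brute-force nearest-neighbor image matching with a global descriptor.
import numpy as np

def describe(image):
    # Global descriptor: per-channel intensity histograms (a GIST stand-in).
    hists = [np.histogram(image[..., c], bins=16, range=(0, 255))[0]
             for c in range(3)]
    desc = np.concatenate(hists).astype(float)
    return desc / (desc.sum() + 1e-8)

database = [np.random.randint(0, 256, (64, 64, 3)) for _ in range(1000)]
db_descs = np.stack([describe(img) for img in database])

query = np.random.randint(0, 256, (64, 64, 3))
dists = np.linalg.norm(db_descs - describe(query), axis=1)  # compare to everything
print("best match: image", int(np.argmin(dists)))
```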

A modern approach to this issue is to collect a large set of related "positive images" and a large set of unrelated "negative images" and then train a powerful classifier which can hopefully figure out the meaningful bits of the image.  But this approach has two problems.  First, with only a single input image, it is not clear whether standard machine learning tools will have a chance of learning anything meaningful.  Second, and significantly worse: without a category label or tag, how are we supposed to create a negative set?!?  Exemplar-SVMs to the rescue!  We can use a large collection of images from the target domain (the domain we want to find matches in) as the negative set -- as long as the "negative set" contains only a small fraction of potentially related images, learning a linear SVM with a single positive still works.
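And here is a minimal single-positive sketch in the spirit of the Exemplar-SVM (the descriptors are random stand-ins, and the real system additionally re-weights the two classes' losses and mines hard negatives):

```python
# Exemplar-SVM-style sketch: one positive exemplar vs. many negatives
# drawn from the target domain.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
positive = rng.normal(1.0, 1.0, (1, 512))      # descriptor of the single query
negatives = rng.normal(0.0, 1.0, (5000, 512))  # the "rest of the world"

X = np.vstack([positive, negatives])
y = np.array([1] + [0] * len(negatives))

# class_weight="balanced" keeps the lone positive from being ignored
esvm = LinearSVC(C=0.01, class_weight="balanced").fit(X, y)

candidates = rng.normal(0.5, 1.0, (100, 512))  # images to search through
scores = esvm.decision_function(candidates)    # exemplar-specific similarity
print("top match index:", int(np.argmax(scores)))
```

The learned weight vector effectively encodes which dimensions of the query's descriptor distinguish it from the world -- exactly the "meaningful bits" intuition above.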

Here is an excerpt from a Techcrunch article which summarizes the project concisely:

"Instead of comparing a given image head to head with other images and trying to determine a degree of similarity, they turned the problem around. They compared the target image with a great number of random images and recorded the ways in which it differed the most from them. If another image differs in similar ways, chances are it’s similar to the first image. " -- Techcrunch


Abhinav Shrivastava, Tomasz Malisiewicz, Abhinav Gupta, Alexei A. Efros. Data-driven Visual Similarity for Cross-domain Image Matching. In SIGGRAPH ASIA, December 2011. Project Page



Here is a short listing of some articles which mention our research (thanks, Abhinav!).