Recurrent computations for the recognition of occluded objects (humans + deep nets)

Finally out (in PNAS): our paper on recurrent computations for the recognition of occluded objects, in humans as well as in deep network models. Feed-forward processing alone doesn't seem to cut it, but attractor dynamics help; similarly, the brain requires recurrent processing to untangle highly occluded images.

http://www.pnas.org/content/early/2018/08/07/1719397115/

We have some pretty visualization GIFs on GitHub, along with the code: https://github.com/kreimanlab/occlusion-classification
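For intuition, here is a minimal sketch of the general recipe: take a feature vector from a feed-forward stage, then let attractor (Hopfield-style) dynamics clean up the parts that occlusion corrupted. This is an illustration under assumptions, not the repo's actual implementation; the random binary "features", the Hebbian storage rule, and the names train_hopfield / run_attractor are made up for the example.

```python
import numpy as np

def train_hopfield(patterns):
    # Store binary (+1/-1) feature patterns with a simple Hebbian rule.
    W = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def run_attractor(W, x, steps=50):
    # Iterate the recurrent dynamics until the state settles into an attractor.
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Toy usage: pretend these are binarized top-layer features of a feed-forward net.
rng = np.random.default_rng(0)
stored = np.sign(rng.standard_normal((5, 256)))     # "clean" feature patterns
W = train_hopfield(stored)

occluded = stored[0].copy()
occluded[:128] = np.sign(rng.standard_normal(128))  # occlusion corrupts half the features

recovered = run_attractor(W, occluded)
print("overlap with clean pattern:", (recovered == stored[0]).mean())
```

With only five stored patterns in 256 dimensions, the dynamics typically pull the corrupted vector back toward the stored one; that kind of pattern completion is the intuition behind adding recurrence on top of a feed-forward stage.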

EDIT: MIT News covered our work, including a video in which we explain the intuition behind it: http://news.mit.edu/2018/mit-martin-schrimpf-advancing-machine-ability-recognize-partially-seen-objects-0920