Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?

The field of Machine Learning is doing well at quantifying its goals and progress, yet Neuroscience is lagging behind in that regard: current claims are often qualitative, and models are rarely compared rigorously against one another across a broad spectrum of tasks.

Brain-Score is our attempt to speed up progress in Neuroscience by providing a platform where models are scored against neural and behavioral data and can compete against one another: https://www.biorxiv.org/content/early/2018/09/05/407007

Deep neural networks trained on ImageNet classification do best on our current set of benchmarks, yet there is a lot of criticism about the misalignment between these networks and the primate ventral stream: the mapping between their many layers and brain regions is unclear, the models are too large, and they are just static feed-forward processors.
We thus created a more brain-like model, “CORnet”, which does well on Brain-Score with only four areas (corresponding to V1, V2, V4, and IT) and recurrent processing: https://www.biorxiv.org/content/early/2018/09/04/408385
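To give a flavor of what "four areas and recurrent processing" can mean, here is a minimal sketch in PyTorch (my own toy, not the published CORnet code; the names `RecurrentArea` and `ToyCORnet` are made up): each area is a convolutional block whose output is fed back as input for a few time steps.

```python
import torch
from torch import nn

class RecurrentArea(nn.Module):
    """Toy recurrent 'area': one conv block, unrolled for a few
    time steps with shared weights, output fed back as input."""
    def __init__(self, channels, steps=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.steps = steps

    def forward(self, x):
        state = x
        for _ in range(self.steps):  # unrolled recurrence, shared weights
            state = torch.relu(self.norm(self.conv(x + state)))
        return state

class ToyCORnet(nn.Module):
    """Four areas, loosely V1 -> V2 -> V4 -> IT, plus a linear decoder."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.v1 = nn.Sequential(nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU())
        self.v2 = RecurrentArea(64)
        self.v4 = RecurrentArea(64)
        self.it = RecurrentArea(64)
        self.decoder = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.it(self.v4(self.v2(self.v1(x))))
        x = x.mean(dim=(2, 3))  # global average pooling over space
        return self.decoder(x)
```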

EDIT: Science Magazine wrote a news piece about the use of deep neural networks as models of the brain with the final paragraphs devoted to Brain-Score: http://sciencemag.org/news/2018/09/smarter-ais-could-help-us-understand-how-our-brains-interpret-world

Recurrent computations for the recognition of occluded objects (humans + deep nets)

Finally out (in PNAS)! Our paper on recurrent computations for the recognition of occluded objects, in humans as well as models. Feed-forward processing alone doesn’t seem to cut it, but attractor dynamics help; likewise, the brain requires recurrent processing to untangle heavily occluded images.

http://www.pnas.org/content/early/2018/08/07/1719397115/

We put some pretty visualization GIFs on GitHub, along with the code: https://github.com/kreimanlab/occlusion-classification
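For a rough intuition of why attractor dynamics help, here is a toy Hopfield-style sketch (my own simplification, not the model from the paper): an occluded pattern is pulled back into a stored attractor by iterating the recurrent dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store three binary (+/-1) patterns with the Hebbian rule.
patterns = rng.choice([-1.0, 1.0], size=(3, 100))
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0.0)

# "Occlude" the first pattern by zeroing out half of its units.
probe = patterns[0].copy()
probe[:50] = 0.0

# Recurrent updates complete the pattern towards the stored attractor.
state = probe
for _ in range(10):
    state = np.sign(W @ state)

print("overlap with original:", (state * patterns[0]).mean())  # approaches 1.0
```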


EDIT: MIT News covered our work, along with a video of us giving the intuition behind it: http://news.mit.edu/2018/mit-martin-schrimpf-advancing-machine-ability-recognize-partially-seen-objects-0920

Searching for non-intuitive architectures

Summer internship work is out at ICLR! Automatic architecture search finds non-intuitive (at least to me) architectures, including cells that use sine curves and division.

I’m really glad to have worked with a fantastic team at Salesforce Research, most closely with Stephen Merity and Richard Socher.

Blog: https://einstein.ai/research/domain-specific-language-for-automated-rnn-architecture-search

Paper + Reviews: https://openreview.net/forum?id=SkOb1Fl0Z
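For a flavor of how non-intuitive these cells can look, here is a hypothetical toy cell (my own invention, not an architecture from the paper) that combines a sine activation with element-wise division:

```python
import numpy as np

def toy_searched_cell(x, h, W_x, W_h, eps=1e-6):
    """Hypothetical RNN cell using sine and division, in the spirit
    of the operators the search can combine. Not from the paper."""
    z = W_x @ x + W_h @ h
    gate = np.sin(z)                       # sine instead of tanh/sigmoid
    return gate / (np.abs(h) + 1.0 + eps)  # divide by a function of the state

# Roll the cell over a random input sequence.
rng = np.random.default_rng(0)
d = 8
W_x, W_h = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = np.zeros(d)
for x in rng.normal(size=(20, d)):
    h = toy_searched_cell(x, h, W_x, W_h)
print(h)
```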

Master’s Thesis: Brain-inspired Recurrent Neural Algorithms for Advanced Object Recognition

It’s done! I finished my Master’s Thesis, which focuses on the idea and implementation of recurrent neural networks in computer vision, inspired by findings in neuroscience. The two main applications of this technique shown here are the recognition of partially occluded objects and the integration of contextual cues.

Here’s the link: Brain-inspired Recurrent Neural Algorithms for Advanced Object Recognition – Martin Schrimpf

On the robustness of neural networks

We are beginning to look into a new project that analyzes today’s neural networks in terms of stability and plasticity.
More explicitly, we evaluate how well these networks can cope with changes to their weights (stability) and how well they can adapt to new information (plasticity). Preliminary results suggest that perturbing weights in lower layers has a more severe effect on performance than perturbing weights in higher layers. This correlates nicely with neuroscience, where hierarchically lower areas of the visual cortex are assumed to remain rather fixed over the years.
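A minimal sketch of such a perturbation experiment (assuming a pretrained torchvision model; the layer names hold for torchvision’s AlexNet, and `evaluate_accuracy` is a hypothetical helper standing in for a pass over a validation set):

```python
import copy
import torch
from torchvision import models

def perturb_layer(model, layer_name, noise_scale=0.1):
    """Copy the model and add Gaussian noise to one layer's weights,
    scaled relative to that layer's own weight magnitude."""
    perturbed = copy.deepcopy(model)
    with torch.no_grad():
        param = dict(perturbed.named_parameters())[layer_name]
        param.add_(torch.randn_like(param) * noise_scale * param.std())
    return perturbed

model = models.alexnet(weights="DEFAULT").eval()
# Compare an early convolutional layer against a late fully-connected one.
for name in ["features.0.weight", "classifier.6.weight"]:
    noisy = perturb_layer(model, name)
    # acc = evaluate_accuracy(noisy, val_loader)  # hypothetical evaluation
```

The expectation, per the preliminary results above, is that the accuracy drop is larger when `features.0.weight` is perturbed than when `classifier.6.weight` is.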

Update: we just uploaded a version to arXiv (https://arxiv.org/abs/1703.08245) which is currently under review at ICML.

NIPS Brains & Bits Poster

Just presented our work on Recurrent Computations for Pattern Completion at the NIPS 2016 Brains & Bits Workshop!

Here’s the poster that I presented.

It was an awesome conference with lots of new work and amazing individuals.
Here’s a really short summary, but I highly recommend going through the papers and talks:

  • unsupervised learning and GANs are hot
  • learning to learn is becoming hot
  • new threshold for deep: 1202 layers

TensorFlow seminar paper on arXiv

After some requests, I have uploaded my (really short) analysis of Google’s TensorFlow to arXiv: https://arxiv.org/abs/1611.08903.

It is really just a small seminar paper; the main finding is that while using any Machine Learning framework is generally a good idea, TensorFlow has a really good chance of sticking around due to its already widespread usage within Google and in research, coupled with a growing community.

Scalable Database Concurrency Control using Transactional Memory

Although it’s been a while, I thought I’d upload my Bachelor’s Thesis for others to read: Scalable Database Concurrency Control using Transactional Memory.pdf.

The work consists of two parts:
Part 1 analyzes the constraints of Hardware Transactional Memory (HTM) and identifies the data structures that profit most from this technique.
Part 2 implements HTM in several ways in MySQL’s InnoDB storage engine and evaluates the results.
