We are beginning a new project that analyzes today’s neural networks in terms of stability and plasticity.
More concretely, we evaluate how well these networks cope with changes to their weights and how well they adapt to new information. Preliminary results suggest that perturbing weights in lower layers hurts performance more severely than perturbing weights in higher layers. This parallels findings in neuroscience, where the hierarchically lower areas of the visual cortex are assumed to remain rather fixed over the years.
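The layer-wise perturbation idea can be illustrated with a minimal NumPy sketch. This is a toy, untrained random network with arbitrary sizes and noise levels, not the actual experimental setup from the paper; it only shows the mechanics of perturbing one layer at a time and measuring how much the output changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer network with fixed random weights (a stand-in for a trained model).
W = [rng.normal(0, 0.5, (16, 16)) for _ in range(3)]

def forward(x, weights):
    for Wi in weights:
        x = np.tanh(Wi @ x)
    return x

def perturb(weights, layer, sigma):
    """Return a copy of the weights with Gaussian noise added to one layer."""
    noisy = [Wi.copy() for Wi in weights]
    noisy[layer] += rng.normal(0, sigma, noisy[layer].shape)
    return noisy

x = rng.normal(0, 1, 16)
clean = forward(x, W)

# Compare the output change when the lowest vs. the highest layer is perturbed.
low = np.linalg.norm(forward(x, perturb(W, 0, 0.3)) - clean)
high = np.linalg.norm(forward(x, perturb(W, 2, 0.3)) - clean)
print(f"output change, layer 0 perturbed: {low:.3f}")
print(f"output change, layer 2 perturbed: {high:.3f}")
```

In a real evaluation one would of course perturb a trained network and measure task accuracy rather than raw output distance.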
Update: we just uploaded a version to arXiv (https://arxiv.org/abs/1703.08245) which is currently under review at ICML.
I am investigating what role recurrence plays in vision, as opposed to purely feed-forward connections. The brain has connections all over the place, yet most of today’s machine learning algorithms for object recognition operate in a purely feed-forward way.
So why is recurrence important?
One application we have found is the recognition of occluded objects (see Publications). Here, recurrence enables the integration of spatial information, and it allows for fewer weights because this integration appears to be similar across timesteps.
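The weight-sharing argument can be made concrete with a minimal NumPy sketch. This is not the model from the publications; the layer size, timestep count, and weight scales are arbitrary assumptions. It shows that a recurrent layer reuses the same matrices at every timestep, so integrating information over T steps adds no parameters compared to an unrolled feed-forward stack of depth T.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8  # hidden/input size of this toy layer (arbitrary)

# A recurrent layer reuses the SAME two weight matrices at every timestep.
W_in = rng.normal(0, 0.3, (H, H))   # input -> hidden
W_rec = rng.normal(0, 0.3, (H, H))  # hidden -> hidden, shared across time

def run(x, T=4):
    h = np.zeros(H)
    for _ in range(T):  # identical weights applied at each step
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

x = rng.normal(0, 1, H)
h = run(x)

# Parameter count: an unrolled feed-forward stack of depth T would need
# fresh weight matrices at every step, while the recurrent layer reuses one set.
T = 4
ff_params = T * 2 * H * H   # T distinct (W_in, W_rec)-sized pairs
rnn_params = 2 * H * H      # one shared pair
print(f"feed-forward unrolled: {ff_params} params, recurrent: {rnn_params} params")
```

The repeated application of `W_rec` is what lets the network accumulate evidence over timesteps, which is the property exploited for occluded objects.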
I am also investigating whether recurrence could account for the effect of visual context, where knowing an object’s surroundings makes it easier to recognize (e.g. a bank vault door in isolation is likely difficult to recognize, but in the context of a bank with cashiers and money it might be much simpler).
Social information platform that provides refugees with local information via an app.
The content is provided by local authorities and helper organizations, of which there are already over 100.
Document and workflow management system that digitizes all of a company’s file types and stores them securely on a legally recognized archive server, making paper obsolete.
Optical character recognition (OCR) is used to convert scans and faxes.