• since 05/2017

    Technische Universität Berlin

    PhD candidate in the Machine Learning Group
  • 01/2017 - 11/2017

    University of Tübingen, Master's thesis project

    Convolutional filter learning in deep saliency networks.
  • 04/2016 - 04/2018

    Compuccino Ltd., Machine Learning Developer

    Natural language processing on large-scale real-time news data.

    Deep Learning Research Intern

    Deep Learning to achieve invariance in object recognition.
  • Spring 2016

    University of Birmingham, Visiting Research Intern

    Experimental and theoretical investigation of reinforcement learning in motor control.
  • Fall 2015

    Research Intern, Haynes neuroimaging lab

    Brain reading from fMRI data using Machine Learning.
  • Fall 2014 - Fall 2017

    Joint M.Sc. Humboldt Universität zu Berlin and Technische Universität Berlin

    Computational Neuroscience graduate programme.
  • 2011-2014

    B.Sc. Technische Universität Berlin

    Physical Engineering Science with a focus on dynamical systems.


Projects

Semantic Topic Vectors
The goal of this project was to extract a thematic code from a large text corpus. In contrast to probabilistic topic models, the Semantic Topic Vector (STV) model minimizes the distance to document embeddings while producing maximally dissimilar concept representations. Experiments suggest that STV outperforms LDA-like models while producing more coherent topics.
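
A minimal sketch of the STV objective, purely for illustration - the names (topics, weights, lambda_div) and the exact form of the loss terms are my simplifications, not the project's implementation:

    import torch

    n_docs, embed_dim, n_topics = 1000, 300, 20
    doc_emb = torch.randn(n_docs, embed_dim)  # pretrained document embeddings (placeholder)
    topics = torch.randn(n_topics, embed_dim, requires_grad=True)
    weights = torch.randn(n_docs, n_topics, requires_grad=True)
    opt = torch.optim.Adam([topics, weights], lr=1e-2)
    lambda_div = 0.1  # strength of the diversity penalty

    for step in range(500):
        opt.zero_grad()
        recon = torch.softmax(weights, dim=1) @ topics  # mix topic vectors per document
        fit = ((recon - doc_emb) ** 2).mean()           # stay close to document embeddings
        t = torch.nn.functional.normalize(topics, dim=1)
        off_diag = t @ t.T - torch.eye(n_topics)        # pairwise cosine similarity, diagonal removed
        loss = fit + lambda_div * (off_diag ** 2).mean()  # and keep topics maximally dissimilar
        loss.backward()
        opt.step()
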
Deep Saliency Prediction
In my master's thesis I investigated which features drive image-based attention: low-level representations such as intensity or local orientation versus high-level features such as objects or text. I found that a surprisingly large fraction of eye fixations is explainable by a simple low-level model.
Deep Invariance
The brain's amazing capability to recognize objects invariant to their size or orientation inspired this project. A central question is whether training a deep autoencoder to map a rotated image back to its unrotated template introduces rotation invariance, and whether this is beneficial in a subsequent classification task. Results show that this indeed decreases error rates and introduces rotation-specific network activity.
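
A toy sketch of the training idea, assuming 28x28 images and a tiny fully connected autoencoder - the project used a deep model, so this only illustrates the objective:

    import torch
    import torch.nn as nn
    import torchvision.transforms.functional as TF

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
    decoder = nn.Sequential(nn.Linear(128, 28 * 28), nn.Sigmoid())
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

    def train_step(upright):                    # upright: (B, 1, 28, 28) images in [0, 1]
        angle = float(torch.empty(1).uniform_(0, 360))
        rotated = TF.rotate(upright, angle)     # input: a randomly rotated image
        recon = decoder(encoder(rotated))       # target: its unrotated template
        loss = nn.functional.mse_loss(recon, upright.flatten(1))
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

After training, the encoder can be reused as a feature extractor for the classification task.
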
Brain Decoding
I wanted to find out how we can read brains. In the Haynes lab I used fMRI activity and machine learning to predict which intensity was presented in each of 48 checkerboard segments. What I learned: it is really hard, but some segments could be predicted with >90% accuracy.
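
For illustration, a generic per-segment decoding setup in scikit-learn - the data here are random placeholders and the classifier choice is my assumption, not the lab's actual pipeline:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    n_trials, n_voxels, n_segments = 200, 5000, 48
    X = np.random.randn(n_trials, n_voxels)              # one voxel pattern per trial (placeholder)
    y = np.random.randint(0, 2, (n_trials, n_segments))  # intensity label per checkerboard segment

    # One decoder per segment, evaluated with 5-fold cross-validation.
    accuracies = [cross_val_score(LinearSVC(), X, y[:, seg], cv=5).mean()
                  for seg in range(n_segments)]
    print(f"best segment accuracy: {max(accuracies):.2f}")
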
Motor Learning
What is behind the brain's amazing capability to learn new movements? With Prof. Miall I investigated learning from reinforcement and error information. My experiment showed that participants made use of additional reinforcement feedback, which sped up adaptation. The noisy individual data were modelled with a custom mechanistic model.
JokesNet
In this pet project I am trying to teach a network to tell funny jokes. I am currently building a joke database and setting up an LSTM network to generate new one-liners. Funniest so far: "I never born?"
Still some way to go, but I just started experimenting with a humor-sensitive objective function - let's see.
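
A minimal character-level LSTM sketch of the kind of generator I am setting up - vocabulary handling, the training loop, and all sizes are placeholders:

    import torch.nn as nn

    class JokeLSTM(nn.Module):
        def __init__(self, vocab_size, hidden=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, 64)
            self.lstm = nn.LSTM(64, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, x, state=None):       # x: (batch, seq) of character indices
            h, state = self.lstm(self.embed(x), state)
            return self.head(h), state          # next-character logits per position

Training would be cross-entropy on next-character prediction over the joke corpus; sampling feeds a start token and draws characters until an end-of-joke symbol.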

The Weather Project
With Tiziano Zito, my fellow students and I worked in sub-groups to analyse historical and current weather data for Germany, scraping predictions from several forecast providers and comparing their predictive power. We found significant effects of global warming over the last 100 years and, in some cases, a bias towards overly positive forecasts.
Evolution of Grid Cells
Inspired by this paper by Kropff and Treves, I became excited about biological learning and the emergence of cost-minimizing systems. I juggled a virtual rat, weight updates, and partial differential equations. Just look at how beautifully the grid cells, which are crucial for spatial orientation, evolve into hexagonal firing patterns!
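
For a flavour of the mechanism, a very rough single-unit sketch in the spirit of the Kropff and Treves adaptation model - the input here is random rather than a real place-cell map, and all parameters are illustrative:

    import numpy as np

    n_inputs, eta, tau = 400, 1e-3, 0.1
    w = np.random.rand(n_inputs) * 0.01    # feedforward weights from place-cell inputs
    r_bar = 0.0                            # slow adaptation (fatigue) variable

    for t in range(100_000):
        x = np.random.rand(n_inputs)       # input at the virtual rat's position (placeholder)
        h = w @ x                          # feedforward activation
        r = max(h - r_bar, 0.0)            # adapted, thresholded firing rate
        r_bar += tau * (h - r_bar)         # adaptation tracks recent activity
        w += eta * r * (x - r * w)         # Hebbian update with an Oja-style decay
        w = np.clip(w, 0.0, None)          # keep weights non-negative

In the full model, many such units with competition between them develop the hexagonal firing maps.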