NeuroAI / computational cognitive neuroscience / ML systems
Johannes Roth
I'm finishing my PhD at the intersection of NeuroAI and Computational Cognitive Neuroscience at the Max Planck Institute in Leipzig, where my research focuses on datasets used for comparing neural network models with the human brain.
Before academia, I spent several years on applied data and ML problems: building recommendation systems, image processing pipelines, and production ML infrastructure. What drives me is solving hard technical problems that actually matter, whether that's making neuroimaging experiments more efficient or shipping a model that changes how a product works.
Experience
PhD in Computational Cognitive Neuroscience
Max Planck Institute for Human Cognition and Brain Sciences & University of Gießen
Researching NeuroAI approaches for efficient experimental design in visual neuroscience: using modern computer vision models and large-scale naturalistic image spaces to choose better stimuli, compare model and brain representations, and improve fMRI dataset coverage. Built the ML/data infrastructure behind these workflows, including public datasets and re:vision challenge infrastructure.
Python · PyTorch · CLIP · scikit-learn · SLURM · Docker
Research Assistant - ML in Medicine
ScaDS.AI Dresden/Leipzig
Engineered a multi-plane UNet++ ensemble for glioblastoma segmentation (BraTS 2021), reaching a 0.90 whole-tumor Dice score. Developed attention-based mortality-prediction models with epistemic uncertainty estimation.
PyTorch · FT-Transformer · SAINT · UNet++
Data Scientist / ML Engineer
CHECK24 (Travel Vertical)
Designed and deployed a Flask + Redis image-processing microservice for fast ML inference over 20M+ hotel images. Built deduplication, retrieval, classification, quality scoring, recommender tuning, and Grafana monitoring workflows.
Python · PyTorch · Flask · Redis · Hyperopt · BigQuery · Grafana
Other experience: education, freelance, earlier ML roles
B.Sc. Business Information Systems & M.Sc. Computer Science
Leipzig University
M.Sc. grade 1.2 (Distinction). Focused on ML, data analysis, and medical image processing. Thesis on using GANs to synthesize images that maximally activate specific brain regions.
Full-stack Developer
Kimetric UG
Implemented two academic websites using Django, Nginx, and Gunicorn. Configured Linux hosting environments and automated deployment scripts.
Django · Nginx · Gunicorn · Linux · CI/CD
Data Scientist
Webdata Solutions GmbH (now Vistex)
Revamped a product-matching pipeline with a neural-network approach trained on self-collected web-scraped datasets, increasing matching accuracy from under 50% to 92%. Built data collection, training, evaluation, and interpretability workflows.
Python · TensorFlow · PostgreSQL · AWS
Publications
- 2025 How to sample the world for understanding the visual system
Vision neuroscience runs on large fMRI datasets, but nobody had checked whether the stimulus images in these datasets actually cover what humans see in the real world. We built LAION-natural, a reference distribution of ~120M naturalistic photographs filtered from 2 billion LAION images using a CLIP-based classifier trained on 25k actively sampled labels. Then we measured coverage: ~50% of the visual-semantic space is missing from the two most widely used datasets (NSD and THINGS).
The good news: you don't need millions of images to fix this. In both simulations and real fMRI data, out-of-distribution generalization saturates at 5-10k samples, as long as you draw them from a diverse enough pool. We compared seven sampling strategies (including random, stratified, k-Means, Core-Set, effective-dimensionality optimization, and active learning) and found that pool diversity matters far more than which algorithm you use to sample from it.
The pipeline processes billions of images using CLIP embeddings, Annoy indices for nearest-neighbor search, mini-batch k-Means clustering, and Ridge regression encoding models - all at a scale that runs on a university HPC cluster, not a cloud budget.
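The three pipeline stages can be sketched in miniature. This is a toy version with random stand-in embeddings, using scikit-learn's exact nearest-neighbour search in place of the Annoy approximate index; all sizes and variable names here are illustrative, not the paper's actual configuration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import MiniBatchKMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for CLIP image embeddings (the real pipeline uses ViT-H/14 vectors).
n_images, dim = 5000, 64
emb = rng.standard_normal((n_images, dim)).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit norm for cosine search

# 1) Nearest-neighbour search (exact here; the real pipeline uses Annoy's
#    approximate angular index to scale to billions of images).
nn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(emb)
_, idx = nn.kneighbors(emb[:1])
neighbours = idx[0]  # the 5 images most similar to image 0

# 2) Mini-batch k-means clusters the embedding space, e.g. for stratified sampling.
km = MiniBatchKMeans(n_clusters=20, batch_size=1024, random_state=0)
cluster_ids = km.fit_predict(emb)

# 3) Ridge encoding model: predict (synthetic) voxel responses from embeddings.
#    The real pipeline fits one such model per voxel on measured fMRI data.
true_w = rng.standard_normal(dim)
voxel = emb @ true_w + 0.1 * rng.standard_normal(n_images)
enc = Ridge(alpha=1.0).fit(emb[:4000], voxel[:4000])
r2 = enc.score(emb[4000:], voxel[4000:])
```

The mini-batch variants of these algorithms are what make HPC-scale feasible: none of the three stages ever needs the full image set in memory at once.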
- 2025 Ten principles for reliable, efficient, and adaptable coding
Most scientists learn to code informally - picking things up as they go, optimizing for "does it run?" over "will anyone else understand this?" This paper introduces a structured framework for writing better research code, built around the idea that researchers naturally switch between quick prototyping and careful development - and that being deliberate about which mode you're in makes all the difference.
The ten principles span three tiers: organizing code (standardized project structures, version control, automation), writing reusable code (testing, documentation, clean interfaces), and collaborating (code review systems, shared knowledge bases, lab-wide standards). With 22k+ accesses already, it has clearly hit a nerve: these are problems every computational lab deals with but rarely talks about explicitly.
- 2025 Fine-grained image and category information in ventral visual pathway
Older publications: fMRI methods, GAN-based neuroscience, brain tumor segmentation
- 2023 High stimulus presentation rates for fMRI
- 2021 Preferred stimuli for individual voxels in the human visual system
You can't show the brain every possible image, so how do you figure out what a specific patch of visual cortex actually responds to? We trained a convolutional neural network end-to-end on fMRI data from a subject watching naturalistic movies - no ImageNet pretraining, just raw stimulus-response pairs. Then we used BigGAN to synthesize images that maximally activate individual voxels via gradient ascent through the model.
Early visual areas (V1-V3) preferred gratings in small receptive fields, as expected. More interesting: FFA showed preference for faces but also oval shapes and vertical symmetry, while PPA preferred places plus horizontal lines and high spatial frequencies. An SVM classifier could distinguish FFA vs. PPA preferred stimuli from their GAN latent vectors at 87% accuracy, confirming the approach produces meaningfully different outputs per region. This was one of the first demonstrations of GAN-based preferred stimulus synthesis for the human visual system.
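The core optimization loop is just gradient ascent through two frozen networks. A minimal sketch, with tiny random linear networks standing in for BigGAN and the fMRI-trained CNN (everything here is a toy assumption, not the paper's architecture):

```python
import torch

torch.manual_seed(0)

# Toy stand-ins: the paper uses BigGAN as the generator and a CNN trained
# end-to-end on fMRI movie data as the voxel response model.
generator = torch.nn.Sequential(torch.nn.Linear(32, 256), torch.nn.Tanh())
response_model = torch.nn.Linear(256, 1)  # predicts one voxel's activation
for p in list(generator.parameters()) + list(response_model.parameters()):
    p.requires_grad_(False)  # both networks stay frozen

# Optimize only the latent vector so the synthesized "image" maximally
# activates the target voxel (ascent = minimizing the negative response).
z = torch.zeros(1, 32, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)
start = response_model(generator(z)).item()
for _ in range(200):
    opt.zero_grad()
    loss = -response_model(generator(z))
    loss.backward()
    opt.step()
end = response_model(generator(z)).item()  # higher than start after ascent
```

Because the generator is frozen, the optimized latent stays on (an approximation of) the natural image manifold, which is what makes the synthesized preferred stimuli interpretable.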
- 2021 Multi-plane UNet++ Ensemble for Glioblastoma Segmentation
Datasets & Tools
Vision research needs naturalistic photographs, but web-scraped datasets like LAION are full of screenshots, memes, ads, and generated images. We scored all 2.1 billion images in ReLAION-2B for "naturalness" using a CLIP-based classifier, then extracted and published ViT-H/14 embeddings for the ~500M most photographic ones. The result is a 167 GB dataset on Hugging Face that lets researchers query half a billion images by visual similarity without downloading a single pixel.
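Querying by similarity against precomputed embeddings amounts to a normalized dot product and a top-k selection. A small sketch with random stand-in vectors (the array sizes and the `top_k_similar` helper are illustrative, not the published dataset's actual dimensions or API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a shard of the published embeddings; each row is one image's
# unit-normalized CLIP vector, so no pixels are ever needed for the query.
emb = rng.standard_normal((20_000, 256)).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def top_k_similar(query: np.ndarray, k: int = 10) -> np.ndarray:
    """Indices of the k images most cosine-similar to `query` (hypothetical helper)."""
    q = query / np.linalg.norm(query)
    sims = emb @ q                       # cosine similarity via dot product
    top = np.argpartition(-sims, k)[:k]  # unordered top-k in O(n)
    return top[np.argsort(-sims[top])]   # sort only those k

idx = top_k_similar(emb[42], k=5)  # querying with image 42's own embedding
```

Using `argpartition` before the sort keeps the query linear in the number of images, which matters when the real array has half a billion rows.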
An open-source Python toolbox for extracting and comparing image representations from deep neural networks. Supports 100+ models across torchvision, timm, CLIP, self-supervised models (DINO, MAE, SimCLR), and more. Also includes tools for aligning DNN representations with human similarity judgments via RSA and CKA. I'm the third-largest contributor to the project, which has 460k+ PyPI downloads and is used across vision and cognitive neuroscience labs.
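Of the alignment metrics mentioned, linear CKA has a particularly compact form: with column-centered representation matrices X and Y (stimuli × features), it is ‖YᵀX‖²_F / (‖XᵀX‖_F · ‖YᵀY‖_F). A self-contained sketch, independent of the toolbox's actual API:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices (stimuli x features)."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
# CKA is invariant to orthogonal rotations and isotropic scaling of features,
# so a rotated and rescaled copy of A should score ~1.0 against A itself.
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
cka_same = linear_cka(A, 2.0 * A @ Q)
```

That invariance is what makes CKA suitable for comparing layers with different bases and dimensionalities, e.g. a DNN layer against a voxel response matrix.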
Recognition / service
Get in Touch
Happy to chat about research, potential collaborations, or opportunities. Email is best. Also on LinkedIn, GitHub, Hugging Face, and Google Scholar.