About me
I am a PhD student at the University of Tübingen and the International Max Planck Research School (IMPRS) for Intelligent Systems (IS), co-supervised by Oliver Bringmann and Wieland Brendel. Before my PhD, I obtained a B.Sc. and an M.Sc. in Physics from the Karlsruhe Institute of Technology. During my Master's, I participated in several research projects centered on numerical optics, and my Master's thesis was published in Nature Communications.
During my PhD, I have been working on improving the generalization capabilities of deep neural networks beyond their training distribution, exploring how vision models can be made more robust to distribution shifts. Beyond investigating different robustification methods, I have also analyzed the benefits of continual learning when a model is allowed to adapt to the distribution shifts it encounters.
In my recent work, I have been studying how the out-of-distribution (OOD) generalization capabilities of popular foundation models trained on large-scale datasets can be benchmarked. I am also intrigued by how multi-modality affects the learned representations and their generalizability.
I completed a research internship at FAIR under the guidance of Ari Morcos and Kamalika Chaudhuri, where I worked on pruning large-scale datasets for CLIP training; this work was published at ICLR 2024. I also completed an internship as a student researcher at Google DeepMind under the guidance of Olivier Hénaff, as a member of the Active Learning team.
Latest publications
InfoNCE: Identifying the Gap Between Theory and Practice, AISTATS 2025
Evgenia Rusak*, Patrik Reizinger*, Attila Juhos*, Oliver Bringmann, Roland S. Zimmermann, Wieland Brendel
We generalize previous identifiability results for contrastive learning toward anisotropic latents that better capture the effect of augmentations used in practical applications, thereby reducing the gap between theory and practice.
In Search of Forgotten Domain Generalization, Spotlight at ICLR 2025
Prasanna Mayilvahanan*, Roland S. Zimmermann*, Thaddäus Wiedemer, Evgenia Rusak, Attila Juhos, Matthias Bethge, Wieland Brendel
CLIP’s high performance on style-centric domain shifts is significantly influenced by the presence of such images in its training set.