Workshop Organization

I have co-organized two workshops at different conferences:

1st Workshop on Test-Time Adaptation: Model, Adapt Thyself! (MAT), CVPR (2024)

In the MAT Workshop, we aim to bring together researchers working on adaptation and robustness to challenge the boundary between training and testing. Our focus is on updating models during deployment to maintain or improve accuracy, calibration, and fairness on changing data in diverse settings. The scope encompasses data, evaluation, algorithms, and unresolved challenges for test-time updates, with an emphasis on unsupervised adaptation under minimal computational overhead. Special attention is given to inventive approaches for adapting foundation models to new data, tasks, and deployments.

Shift happens: Crowdsourcing metrics and test datasets beyond ImageNet, ICML (2022)

In the ShiftHappens Workshop, we aimed to create a community-built benchmark suite for ImageNet models, comprising new datasets for out-of-distribution (OOD) robustness and detection as well as new tasks for existing OOD datasets. Although the popularity of robustness benchmarks and new test datasets has grown in recent years, computer vision models are still largely evaluated on ImageNet directly, or on simulated or isolated distribution shifts such as ImageNet-C. We therefore invited researchers to submit custom test sets covering novel and previously unseen distribution shifts.

Reviewing

Over the last few years, I have served as a reviewer for NeurIPS, ICML, ICLR, ICCV, ECCV, CVPR, and TMLR.

Awards

  • Expert Reviewer at TMLR (2023)
  • Outstanding Reviewer at CVPR (2023)
  • Highlighted Reviewer at ICLR (2022)
  • Top 10% of high-scoring reviewers at NeurIPS (2020)