Code underlying the publication: Zero-Shot Day-Night Domain Adaptation with a Physics Prior

doi: 10.4121/47c143d4-0692-42b8-bcea-0ca2c1ee5056.v1
The DOI above refers to this specific version of the dataset, which is currently the latest; newer versions may be published in the future. For a link that will always point to the latest version, please use
doi: 10.4121/47c143d4-0692-42b8-bcea-0ca2c1ee5056
DataCite citation style:
Lengyel, Attila; Garg, Sourav; van Gemert, Jan; Milford, Michael (2023): Code underlying the publication: Zero-Shot Day-Night Domain Adaptation with a Physics Prior. Version 1. 4TU.ResearchData. Software.
Other citation styles (APA, Harvard, MLA, Vancouver, Chicago, IEEE) are available via DataCite.

Code corresponding to the ICCV 2021 paper "Zero-Shot Day-Night Domain Adaptation with a Physics Prior".


We explore the zero-shot setting for day-night domain adaptation. In the traditional domain adaptation setting, a model is trained on one domain and adapted to the target domain by exploiting unlabeled samples from the test set. As gathering relevant test data is expensive and sometimes even impossible, we remove any reliance on test-set imagery and instead exploit a visual inductive prior derived from physics-based reflection models. We cast a number of color invariant edge detectors as trainable layers in a convolutional neural network and evaluate their robustness to illumination changes. We show that the color invariant layer reduces the day-night distribution shift in feature map activations throughout the network, and we demonstrate improved zero-shot day-to-night domain adaptation on both synthetic and natural datasets across various tasks, including classification, segmentation and place recognition.
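To give an intuition for the approach, the sketch below illustrates one classic color/illumination invariant of the kind the paper casts as a network layer: a W-type edge detector, i.e. the gradient of the log of image intensity, which is insensitive to a global multiplicative illumination change (day vs. night under a simple reflection model). This is a minimal fixed-parameter NumPy illustration, not the authors' trainable PyTorch implementation; all function names here are ours.

```python
import numpy as np

def gaussian_kernels(sigma=1.0, radius=3):
    """1D Gaussian smoothing kernel and its first derivative (edge filter)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    dg = -x / sigma ** 2 * g  # derivative of Gaussian
    return g, dg

def conv_sep(img, k_row, k_col):
    """Separable 2D convolution with edge padding (same output size)."""
    pad = len(k_row) // 2
    padded = np.pad(img, pad, mode="edge")
    tmp = np.array([np.convolve(r, k_row, mode="valid") for r in padded])
    return np.array([np.convolve(c, k_col, mode="valid") for c in tmp.T]).T

def w_edges(intensity, sigma=1.0, eps=1e-6):
    """W-type invariant: gradient magnitude of log intensity.

    Wx = Ex / E = d/dx log(E), so a global multiplicative illumination
    scaling E -> c*E cancels out of the ratio.
    """
    g, dg = gaussian_kernels(sigma)
    E = conv_sep(intensity, g, g)    # smoothed intensity
    Ex = conv_sep(intensity, dg, g)  # horizontal derivative
    Ey = conv_sep(intensity, g, dg)  # vertical derivative
    return np.hypot(Ex / (E + eps), Ey / (E + eps))

# Synthetic demo: the same scene under "day" and darker "night" illumination
# (modeled as a multiplicative intensity drop) yields near-identical W maps.
rng = np.random.default_rng(0)
day = rng.random((32, 32)) + 0.2      # keep intensities away from zero
night = 0.25 * day                    # global illumination drop
w_day, w_night = w_edges(day), w_edges(night)
```

A plain gradient-magnitude edge map, by contrast, scales with the illumination and therefore shifts between the two conditions, which is exactly the distribution shift the invariant layer is meant to remove.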

  • 2023-11-29: first online, published, posted
  • Associated peer-reviewed publication: Zero-Shot Day-Night Domain Adaptation with a Physics Prior
  • Funding: Tabula Inscripta: Prior knowledge for deep learning (grant code VI.Vidi.192.100), Dutch Research Council
  • Affiliations: TU Delft, Faculty of Electrical Engineering, Mathematics and Computer Science, Department of Intelligent Systems, Computer Vision Lab; QUT Centre for Robotics, Queensland University of Technology (QUT), Brisbane, Australia


To access the source code, use the following command:

git clone