UDC-VIT

Face recognition

UDC-VIT stands out from other datasets by featuring videos tailored for face recognition. Some datasets, such as T-OLED/P-OLED, SYNTH, and VidUDC33K, include only limited human representations, often too small or captured from angles unsuitable for face recognition. Zhifeng et al. introduce still-image datasets for face recognition; however, these datasets are generated by a GAN-based model trained on the P-OLED dataset, which does not adequately simulate realistic UDC degradation, most notably flare. Moreover, these datasets are not publicly available. In contrast, humans appear prominently in 64.6% of UDC-VIT videos (with Institutional Review Board (IRB) approval), showing 22 carefully selected subjects performing various motions (e.g., hand waving, thumbs-up, body-swaying, and walking) captured from different angles. Users of the UDC-VIT dataset must secure IRB approval in accordance with their country's laws and use the dataset solely for research.

[Figure: Paired UDC-VIT frames, ground truth (GT) and degraded input, for four motions: hand waving, thumbs-up, body-swaying, and walking.]