UDC-VIT: A Real-World Video Dataset for Under-Display Cameras
Authors: Kyusu Ahn, JiSoo Kim, Sangik Lee, HyunGyu Lee, Byeonghyun Ko, Chanwoo Park, and Jaejin Lee.

The UDC-VIT dataset captures the movements of various subjects, such as people and objects, in
both indoor and outdoor environments, exhibiting real UDC degradations.

Where can I find the paper related to this project?

Click here to download our paper, or here to view the HTML version.

What are Under-Display Cameras?

Under-Display Camera (UDC) is an advanced imaging system that places a digital camera lens underneath a display panel, effectively concealing the camera. However, the display panel significantly degrades captured images or videos, introducing low transmittance, blur, noise, and flare issues. Tackling such issues is challenging because of the complex degradation of UDCs, including diverse flare patterns.

UDC Image

Why make this?

Despite extensive research on UDC images and their restoration models, UDC videos remain largely unexplored. While two UDC video datasets have been proposed, they primarily focus on unrealistic or synthetic UDC degradation rather than real-world UDC videos. In this paper, we propose a real-world UDC video dataset called UDC-VIT. Unlike previous datasets, UDC-VIT is the only one that includes human motions targeting face recognition.


What is UDC-VIT?

The UDC-VIT dataset is a collection of well-aligned paired videos designed to address the challenges of obtaining paired video datasets in UDC settings, which often suffer from multiple degradations. To overcome these challenges, we have meticulously designed both hardware and software to ensure synchronized videos with precise alignment accuracy.
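Because the dataset provides well-aligned pairs of degraded (UDC) and ground-truth frames, alignment and restoration quality can be measured frame by frame with standard metrics such as PSNR. The sketch below is illustrative only: the frame shapes and the noisy toy data are assumptions, not the actual UDC-VIT format.

```python
import numpy as np

def psnr(gt: np.ndarray, degraded: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth and a degraded frame."""
    mse = np.mean((gt.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a flat gray frame vs. a slightly noisy copy of it
# (stands in for a ground-truth / UDC frame pair).
rng = np.random.default_rng(0)
gt = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy = np.clip(gt.astype(np.int16) + rng.integers(-5, 6, gt.shape),
                0, 255).astype(np.uint8)
print(round(psnr(gt, noisy), 1))
```

In practice one would iterate this over every frame of a paired clip and average, which is how per-video restoration scores are typically reported.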

The UDC-VIT dataset exclusively captures real-world degradations such as noise, blur, decreased transmittance, and flare. Each frame in the UDC video dataset is carefully curated to depict UDC's unique flare characteristics, including spatially variant flares, light source variant flares, and temporally variant flares. This precise depiction is a distinguishing feature of UDC-VIT.

Moreover, UDC-VIT stands out from other datasets by featuring videos tailored to face recognition. The dataset is meticulously captured with Institutional Review Board (IRB) approval, involving 22 carefully selected subjects performing various motions such as hand waving, thumbs-up, body-swaying, and walking. These videos are recorded from different angles to enhance the dataset's robustness and applicability in face recognition tasks.


How can I download the dataset?

UDC-VIT users can download the dataset through our research group's website, which will soon be publicly available. Before downloading the UDC-VIT dataset, please carefully read the user guidelines and the license.

How can I download the code, including the benchmark models?

Download the code and pretrained benchmark models from our GitHub repository.

The dataset's annotation and distribution

The dataset's annotations and their distribution are as follows. The number in parentheses beside each label indicates its encoding. Note that a video pair can have multiple annotation labels.
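Since a video pair can carry several encoded labels at once, annotations are naturally handled as a mapping from codes to label names. The sketch below illustrates the idea with a hypothetical code table; the actual label names and encodings are defined in the UDC-VIT annotation table, not here.

```python
# Hypothetical label-code mapping for illustration only; the real
# codes and names come from the UDC-VIT annotation table.
LABELS = {0: "day", 1: "night", 2: "indoor", 3: "outdoor"}

def decode(codes):
    """Map a video pair's list of annotation codes to readable labels."""
    return [LABELS[c] for c in codes]

# A single video pair may hold multiple labels, e.g. a night outdoor clip.
print(decode([1, 3]))
```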

UDC-VIT Statistics