The first Workshop on Eye Tracking for Quality of Experience in Multimedia (ET-MM), organized by Dietmar Saupe (University of Konstanz), Lewis Chuang (LMU Munich), and Hantao Liu (University of Cardiff), took place on June 2nd, 2020. Originally, the event was intended to be co-located with the ACM Symposium on Eye Tracking Research & Applications (ETRA) in Stuttgart, but the COVID-19 pandemic forced organizers and participants to resort to a virtual venue.

Thomas Wallis, senior research scientist at Amazon Tübingen, gave the opening keynote on the prediction of eye movements in images and videos. This problem, also known as saliency prediction, has been contested in the research community for years within the scope of the MIT/Tübingen saliency benchmark. Wallis not only presented recent deep learning approaches for images, but also discussed architectures for gaze path prediction on videos, a task made more complex by the temporal component. He showcased the challenges and difficulties posed by disagreements between ground-truth gaze paths and model predictions. Ultimately, the discussion drifted toward the question of benchmark generation.

The contributed talks were presented in two sessions:
The first session opened with a presentation by Sharatah C. Koorathota (Dept. of Biomedical Engineering, Columbia University) on “Sequence Models in Eye Tracking: Predicting Pupil Diameter During Learning”, followed by a video contribution by Jung-Hwa Kim (Kumoh National Institute of Technology, Korea) on “Gaze Estimation in the Dark with Generative Adversarial Networks” and a talk by David Bethge (Porsche / LMU Munich) on “Analyzing Transferability of Happiness Detection via Gaze Tracking in Multimedia Applications”.

The second session was more application-focused: starting with “Gaze Data for Quality Assessment of Foveated Video” by Oliver Wiedemann (University of Konstanz), the topic then shifted “Toward a Gaze-Enabled Assistance System” with Kenan Bektas (University of St. Gallen). The closing presentation was given by Sylvia Rothe (LMU Munich) on “Implications of Eye Tracking Research to Cinematic Virtual Reality”.

Between sessions, participants were split into smaller breakout rooms for discussions and networking. Overall, the event was sold out with 130 registered participants, of whom around 80 were present in the Zoom session most of the time. The workshop likely reached far more people than a face-to-face event in Stuttgart would have.

All in all, it was a surprisingly pleasant virtual workshop experience in these trying times that render real-life meetings impossible. The many current, often work-in-progress contributions and the ample opportunities for networking and socializing may well have sparked new ideas or even new collaborations, of which we will hopefully hear more at future SFB-TRR 161 events.

ET-MM Workshop Online via Zoom

Oliver Wiedemann is a PhD Student in the Multimedia Signal Processing Group at the University of Konstanz. His work within project A05 (Visual Quality Assessment) of the SFB-TRR161 focuses mostly on local, machine-learning based quality assessment and adaptive video coding.
