Inferring the 3D shape of objects shown in images is usually an easy task for a human. To solve it, our visual system simultaneously exploits a variety of monocular depth cues, such as lighting, shading, the relative size of objects, or perspective effects. Perceiving the real world with two eyes even allows us to take advantage of another valuable depth cue, the so-called binocular parallax. Because of the slightly different viewing positions, the images projected onto the retinas of the two eyes differ slightly. While objects close to the observer undergo a large displacement between the images, objects that are far away exhibit only a small displacement. Because nearly all of this happens unconsciously, we usually do not realize how hard this problem really is.
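The relation between distance and displacement described above is the basis of computational stereo vision: for an idealized rectified stereo pair, the horizontal displacement (disparity) of a point is inversely proportional to its depth. A minimal sketch of that relation (the baseline and focal-length values are illustrative assumptions, not taken from the text):

```python
def disparity(depth_m, baseline_m=0.064, focal_px=1000.0):
    """Horizontal displacement in pixels of a scene point between the
    left and right images of an idealized rectified stereo pair.

    disparity = baseline * focal_length / depth
    Near objects shift a lot between the two images; distant objects
    barely move, mirroring the binocular parallax cue described above.
    """
    return baseline_m * focal_px / depth_m

# An object 0.5 m away shifts far more than one 10 m away:
near = disparity(0.5)   # large displacement
far = disparity(10.0)   # small displacement
```

With a 6.4 cm baseline (roughly the human interocular distance) and a 1000 px focal length, the nearby object yields a disparity of 128 px versus 6.4 px for the distant one, which is exactly the depth-dependent displacement the paragraph describes.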
Spatial memory is an essential part of our everyday life: we need no map to find the way to our best friend's home, we know where to find milk cartons in our preferred supermarket, and most of the time we remember where we placed the remote control of our TV. Spatial memory and technology can also be combined in a similar way: the desktop of our laptop mirrors a physical desktop, and just as in a physical environment, documents and tools can be placed at different positions. Navigating to them is easy when done regularly.
With the increasing realism of computer graphics, virtual worlds and digital characters look more and more natural. However, the Uncanny Valley effect, first described in 1970, prevents overly realistic human characters from being accepted. In my Ph.D. thesis, I investigate how the Uncanny Valley affects the user experience in virtual environments and virtual reality, and how the effect can be avoided.
Visualizations are a means to communicate data and analysis results. Our research at the Chair for Data Analysis and Visualization is driven by real-world problems and aims to bring human capabilities and perception together with computer algorithms through visualization. In doing so, we face the key challenge of how to visually communicate data to humans. A common assumption among visualization researchers is: the more abstract a representation is, the harder it is for a human to interpret, especially for someone not trained in reading visualizations.
In cooperation with the "GI-Fachgruppe Be-Greifbare Interaktion", the HCI group at the University of Stuttgart organized the annual inventors' workshop on the topic "Using Physiological Sensing for Embodied Interaction". In the workshop, we introduced the basic concepts of sensing human muscle activity, accompanied by a refreshing keynote from Leonardo Gizzi. We provided a basic explanation of how physiological sensing works, showed how it can be technically realized, and presented different applications and usage scenarios.
Together with the Human-Computer Interaction Group of the University of Stuttgart, the SFB-TRR 161 organized a Winter School in February at Söllerhaus (Kleinwalsertal, Austria). During this three-day seminar, visual computing scientists from the University of Stuttgart and the University of Konstanz intensified their scientific cooperation, exchanged knowledge, and discussed new findings from their project work. All of the PhD students gave talks and presented demonstrations.
This winter, I spent three months at the Data Analysis and Visualization Group led by Prof. Dr. Daniel Keim at the University of Konstanz. During this stay, I had the opportunity to meet many researchers working on visualization and visual analytics in multiple domains, and to pursue my own research.
Since October 2015, I have been in contact with Prof. Oliver Deussen, who has been developing the e-David, a robotic Drawing Apparatus for Vivid Interactive Display, at the University of Konstanz since 2010. Following our first encounter at the Massachusetts Institute of Technology (MIT), I visited Prof. Deussen and his team at their lab in Konstanz to continue discussing and re-evaluating the potential use of the robot from an artistic and creative perspective.
Kuno Kurzhals is a visualization scientist at the Visualization Research Center of the University of Stuttgart (VISUS) with a special focus on video visualization and evaluation methods in combination with eye tracking. His research is associated with the SFB-TRR 161, where scientists want to establish quantification as a key ingredient of visual computing research. In this video interview, he talks about the challenges and aims of his activities and explains some of his visualization results.
At the end of last month, Michael Klein from 7reasons, Vienna, visited the Visualization Research Center of the University of Stuttgart (VISUS). Within the Lecture Series "Visual Computing", carried out by the Universities of Stuttgart and Konstanz as part of the research project SFB-TRR 161, he gave an enlightening talk about the application of computer graphics to cultural heritage preservation.