Inferring the 3D shape of objects shown in images is usually an easy task for a human. To solve it, our visual system simultaneously exploits a variety of monocular depth cues, such as lighting, shading, the relative size of objects, or perspective effects. Perceiving the real world with two eyes even lets us take advantage of another valuable depth cue, the so-called binocular parallax. Because of the slightly different viewing positions, the images projected onto the retinas of the two eyes differ slightly. While objects close to the observer undergo a large displacement between the images, objects that are far away exhibit only a small displacement. Because nearly all of this happens unconsciously, we usually do not realize how hard this problem really is.
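The inverse relationship between displacement and distance can be sketched with the standard pinhole stereo model, where depth Z = f · B / d for focal length f, baseline B, and disparity d. The numeric values below (focal length in pixels, an eye-like baseline of 6.5 cm) are illustrative assumptions, not measurements from the text:

```python
# Depth from binocular disparity in a simple rectified pinhole stereo model.
# Illustrative assumption: Z = f * B / d, with f in pixels, B in meters,
# and d the pixel shift of a point between the left and right views.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Return the depth (in meters) of a point with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A nearby object shows a large disparity, a distant one a small disparity:
near = depth_from_disparity(800, 0.065, 40)  # large shift -> small depth
far = depth_from_disparity(800, 0.065, 2)    # small shift -> large depth
print(near, far)
```

Running this shows the effect described above: the point with the large 40-pixel displacement lies much closer (about 1.3 m) than the one with the 2-pixel displacement (about 26 m).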
Humans rely on eyesight and the processing of the resulting information in more everyday tasks than we realize. We effectively solve moderately difficult quadratic equations in our heads when we track a flying ball and aim to hit it with a baseball bat at incredible speed. We use visual information about dozens of cars to navigate when driving through unfamiliar streets. Our eyes adapt to the lighting conditions, allowing us to navigate broad daylight just as well as dimly lit rooms. Beyond that, we can use information about depth of field, color, tint, and sharpness. In fact, it is often said that over 50% of the cortex, the surface of the brain, is involved in visual processing tasks. This makes vision one of our most relied-upon senses. Consequently, understanding what drives our eye movements may be a key to understanding how the brain as a whole works.
As I have been interested in computers since childhood and spend some of my free time programming, I decided to do an internship in the field of computer science. I chose the Visualization Research Center (VISUS), as I hoped to gain as much experience as possible in the three areas of work, research, and student life.
Spatial memory is an essential part of our everyday life: we need no map to find the way to our best friend’s home, we know where to find the milk cartons in our preferred supermarket, and most of the time we remember where we placed the remote control of our TV. Spatial memory and technology can be combined in a similar way: the desktop of our laptop mirrors a physical desktop, and as in a physical environment, documents and tools can be placed at different positions. Navigating to them is easy when done regularly.
With the increasing realism of computer graphics and virtual worlds, digital characters look more and more natural. However, the Uncanny Valley effect, first described in 1970, prevents overly realistic human characters from being accepted. In my Ph.D. thesis, I investigate how the Uncanny Valley affects the user experience in virtual environments and virtual reality, and how the effect can be avoided.
Visualizations are a means of communicating data and analysis results. Our research at the Chair for Data Analysis and Visualization is driven by real-world problems and aims to bring human capabilities and perception together with computer algorithms through visualization. In doing so, we face the key challenge of how to visually communicate data to humans. A common assumption among visualization researchers is: the more abstract a representation, the harder it is for a human to interpret, especially one not trained in reading visualizations.
In cooperation with the “GI-Fachgruppe Be-Greifbare Interaktion”, the HCI group at the University of Stuttgart organized the annual inventors’ workshop on the topic of Using Physiological Sensing for Embodied Interaction. In the workshop, we introduced the basic concepts of sensing human muscle activity, accompanied by a refreshing keynote from Leonardo Gizzi. We provided a basic explanation of how physiological sensing works, showed how it can be realized technically, and presented different applications and usage scenarios.
Together with the Human-Computer Interaction Group of the University of Stuttgart, the SFB-TRR 161 organized a Winter School in February at the Söllerhaus (Kleinwalsertal, Austria). During this three-day seminar, visual computing scientists from the University of Stuttgart and the University of Konstanz intensified their scientific cooperation, exchanged knowledge, and discussed new findings from their project work. All of the Ph.D. students gave talks and demonstrations.
I spent the past three winter months at the Data Analysis and Visualization Group led by Prof. Dr. Daniel Keim at the University of Konstanz. During this stay, I had the opportunity to meet many researchers working on visualization and visual analytics in multiple domains, and to pursue my own research.
I have been in contact with Prof. Oliver Deussen since October 2015; since 2010, he has been developing e-David, a robotic Drawing Apparatus for Vivid Interactive Display, at the University of Konstanz. Following our first encounter at the Massachusetts Institute of Technology (MIT), I visited Prof. Deussen and his team at their lab in Konstanz to continue discussing and re-evaluating the potential use of the robot from an artistic and creative perspective.