For SIGGRAPH Asia 2016, more than six thousand people from all over the world came to Macao. For four days, the attendees of the largest annual conference on computer graphics and interactive techniques in Asia exchanged their latest results in research, projects, and developments across various related areas. Besides the technical papers program, the broad program included an emerging technologies exhibition, workshops, poster sessions, a computer animation festival, a VR showcase, an art gallery, and symposia on education, mobile graphics, and visualization.
Keynote by Paul Debevec
Paul Debevec, a senior staff engineer at Google VR, gave a remarkable talk as the keynote of SIGGRAPH Asia 2016. His topic was recent developments in rendering photo-realistic animated human faces. Fifteen years ago, many considered computer-generated human faces, such as those in the movie “Final Fantasy: The Spirits Within”, to look strangely synthetic. Today, however, we have crossed the “Uncanny Valley” and are able to completely synthesize faces in almost arbitrary movie scenes without the audience noticing.
With the help of the Light Stage scanning system, which Debevec co-developed at UC Berkeley and USC ICT, his team was able to help create digital actors for recent films. As an example, Debevec talked about the Digital Emily Project, in which actress Emily O’Brien’s face was photographed under different lighting conditions and completely digitized. The results were so convincing that the actress herself could not tell the difference between the animated face and the real one. The speaker also talked about more recent projects, such as the rendering of Paul Walker in the movie “Fast & Furious 7”; the actor was killed in a car accident before shooting had finished. With the help of the Light Stage and Walker’s brothers, it was possible to complete the movie. Other interesting projects Paul Debevec mentioned include real-time digital actors in 3D games, a digital version of President Obama for archival purposes, and light field video recordings of interviews with survivors of the Holocaust. The last project opens up the possibility of future interactive conversations with life-size projections of the survivors. In the context of the recent debate about fake news sites, Debevec concluded his talk with a reminder of the responsibility that comes with using this kind of technology and the new possibilities it creates.
Symposium on Visualization
The Symposium on Visualization, organized by Wei Chen and Daniel Weiskopf, gave researchers from around the world the opportunity to present their work on cutting-edge visualization techniques. In five sessions, all important areas of visualization research were covered.
As part of the volume rendering session, I presented our research on “Real-time performance prediction and tuning for interactive volume raycasting”. In this work, we combine machine learning and analytical modeling to predict the execution time of upcoming frames in a volume rendering application, and we use these predictions to achieve constant frame rates at high rendering quality. This can be crucial for smooth user interaction and for possible future applications such as VR and load balancing in distributed environments.
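The idea of predicting the next frame's execution time and tuning a quality parameter to hit a frame budget can be illustrated with a minimal sketch. This is not the method from the paper; it is a simplified illustration under the assumption that frame time grows roughly linearly with the number of samples taken along each ray, so an online linear fit over recent frames can predict the next frame's time. The class and function names (`FrameTimePredictor`, `tune_samples`) are hypothetical.

```python
# Hypothetical sketch: predict frame time from a render parameter and
# tune that parameter to meet a target frame budget. Assumes frame time
# is approximately linear in the per-ray sample count.

class FrameTimePredictor:
    """Online linear model: time ~= a * samples + b, refit each frame."""

    def __init__(self):
        self.a, self.b = 0.0, 0.0
        self.history = []  # (samples, measured_time) pairs

    def observe(self, samples, measured_time):
        """Record a measured frame and refit over the recent history."""
        self.history.append((samples, measured_time))
        recent = self.history[-30:]  # short window adapts to scene changes
        n = len(recent)
        sx = sum(s for s, _ in recent)
        sy = sum(t for _, t in recent)
        sxx = sum(s * s for s, _ in recent)
        sxy = sum(s * t for s, t in recent)
        denom = n * sxx - sx * sx
        if denom != 0:  # need at least two distinct sample counts
            self.a = (n * sxy - sx * sy) / denom
            self.b = (sy - self.a * sx) / n

    def predict(self, samples):
        """Predicted execution time for a frame with this sample count."""
        return self.a * samples + self.b


def tune_samples(predictor, current_samples, target_time):
    """Choose a sample count whose predicted time matches the budget."""
    if predictor.a <= 0:
        return current_samples  # model not trained yet; keep settings
    ideal = (target_time - predictor.b) / predictor.a
    return max(1, int(ideal))
```

In a render loop, one would call `observe` after each frame with the measured time, then `tune_samples` with the frame budget (e.g. 1/60 s) to pick the quality setting for the next frame, trading sampling density for a stable frame rate.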