The faculty of the computer science institutes of Technische Universität Braunschweig invite you, as part of the Informatik-Kolloquium, to the following talk:
Prof. Dr. Michael Guthe, Universität Marburg: Perception in Real-Time Rendering
Date: 24.08.2011, 14:00 (2 p.m.)
Location: TU Braunschweig, Informatikzentrum, Mühlenpfordtstraße 23, gallery floor, room G30
Website: http://www.ibr.cs.tu-bs.de/cal/kolloq/2011-08-24-guthe.html
Contact: Prof. Dr.-Ing. Marcus A. Magnor
Current graphics hardware can render scenes of striking realism in real time: ever-growing processing power, memory size, and bandwidth allow for global illumination, realistic materials, and smooth animations that were reserved for offline rendering only a few years ago.
Nevertheless, consumers' expectations are increasing at almost the same rate. While traditional approaches become more efficient as processing power grows, the ultimate goal of realistic-looking renderings is not a purely mathematical one. Due to the limitations of the human visual system, images that are far from realistic in a physical sense can still look real. On the other hand, seemingly minor inaccuracies can cause highly visible differences. It is therefore necessary to take human vision into account when generating images, in both offline and real-time rendering.
Unfortunately, estimating the visual difference itself can often be more time-consuming than generating the image. Special visual models and pre-computed visual differences therefore need to be used for interactive real-time rendering. The talk introduces two such models that have been successfully applied in this context. The first is tailored to the perception of complex materials, where compression to a manageable size is especially important. The second provides an efficient pre-computed difference measure for the reduction of complex polygon models: based on the material with which a model is rendered, a visually optimized reduction is performed. Finally, an outlook on other fields that benefit from perception models is given.
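To illustrate the general idea of perceptually driven model reduction (a minimal generic sketch, not the specific method presented in the talk), the following Python snippet collapses mesh edges in order of a geometric error scaled by a hypothetical per-vertex perceptual weight; all names and the weighting scheme are assumptions made purely for illustration.

    import heapq
    import numpy as np

    def simplify(vertices, edges, perceptual_weight, target_vertex_count):
        # vertices: (n, 3) float array, edges: list of (i, j) index pairs,
        # perceptual_weight: (n,) array where higher values mean that geometric
        # changes near this vertex are assumed to be more visible.
        alive = [True] * len(vertices)
        heap = []
        for i, j in edges:
            geometric_error = np.linalg.norm(vertices[i] - vertices[j])
            # Scale the purely geometric edge length by the assumed visibility.
            cost = geometric_error * max(perceptual_weight[i], perceptual_weight[j])
            heapq.heappush(heap, (cost, i, j))
        remaining = len(vertices)
        while heap and remaining > target_vertex_count:
            cost, i, j = heapq.heappop(heap)
            if not (alive[i] and alive[j]):
                continue  # edge touches an already removed vertex; skip it
            # Collapse j into the midpoint stored at i (edges incident to j are
            # not re-linked in this deliberately simplified sketch).
            vertices[i] = 0.5 * (vertices[i] + vertices[j])
            alive[j] = False
            remaining -= 1
        return vertices[np.array(alive)]

In such a scheme, a low-contrast, diffusely textured region would receive a small perceptual weight and thus be simplified more aggressively than a glossy, high-contrast one, which reflects the material-dependent reduction sketched in the abstract.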