Multi-Modal Perception for Selective Rendering
Authors: Carlo Harvey, Kurt Debattista, Thomas Bashford-Rogers, Alan Chalmers
Affiliation: WMG, University of Warwick, Coventry, UK
Abstract: A major challenge in generating high-fidelity virtual environments (VEs) is providing realism at interactive rates. High-fidelity simulation of light and sound remains unachievable in real time because such physical accuracy is computationally demanding. Only recently has visual perception been exploited in high-fidelity rendering to improve performance: parts of the scene that are not currently being attended to by the viewer are rendered at a much lower quality without the difference being perceived. This paper investigates the effect that spatialized directional sound has on a user's visual attention towards rendered images. These perceptual effects are exploited in selective rendering pipelines through the use of multi-modal maps. The multi-modal maps are evaluated through psychophysical experiments, using a series of fixed-cost rendering functions, to examine their applicability to selective rendering algorithms, and are found to perform significantly better than image saliency maps naively applied to multi-modal VEs.
Keywords: multi-modal; cross-modal; saliency; sound; graphics; selective rendering; I.3.3 [Computer Graphics]: Picture/Image Generation — Viewing Algorithms; I.4.8 [Computer Graphics]: Image Processing and Computer Vision — Scene Analysis, Object Recognition; I.4.8 [Computer Graphics]: Image Processing and Computer Vision — Scene Analysis, Tracking