Similar Literature
1.
Distributed 3D virtual environments can help researchers conduct experiments globally with remotely located participants. We discuss challenges and opportunities for the shared work environment. Our world is entering an age where our current understanding of telecommunications and graphics computing is constantly challenged. With the availability of global information highways, 3D graphical intercontinental collaboration will become a part of our daily work routine. Already, world-class auto makers are attempting to reduce car development time to two-year cycles, enlisting global engineering teams. However, this process requires new tools such as shared 3D CAD and distributed product data management systems. The Fraunhofer Center for Research in Computer Graphics (CRCG) in the United States and the Fraunhofer Institute for Computer Graphics (IGD) in Germany are looking ahead into this new age by establishing a transcontinental computer graphics research effort and a proposed G-7 testbed. We are studying how 3D computer graphics and virtual environments can aid global collaborative work. We have focused our research efforts on determining how computer networks can transform the distributed workplace into a shared environment, allowing real-time interaction among people and processes without regard to their location.

2.
This paper gives an end-to-end overview of 3D video and free viewpoint video, which can be regarded as advanced functionalities that expand the capabilities of 2D video. Free viewpoint video can be understood as the functionality to freely navigate within real-world visual scenes, as known for instance from virtual worlds in computer graphics. 3D video shall be understood as the functionality that provides the user with a 3D depth impression of the observed scene, which is also known as stereo video. As functionalities, 3D video and free viewpoint video are not mutually exclusive but can very well be combined in a single system. Research in this area combines computer graphics, computer vision and visual communications. It spans the whole media processing chain from capture to display, and system design has to take all parts into account, as outlined in the different sections of this paper, which give an end-to-end view and mapping of this broad area. The conclusion is that the necessary technology, including standard media formats for 3D video and free viewpoint video, is available or will be available in the future, and that there is a clear demand from industry and users for such advanced types of visual media. As a consequence, we are witnessing these days how such technology enters our everyday life.

3.
Although real guardian angels aren't easy to get hold of, some of the computer technology needed for such a personal assistant is already available. Other parts exist in the form of research prototypes, but some technological breakthroughs are necessary before we can realize their potential, let alone integrate them into our daily routines. Future VR and AR interfaces won't necessarily try to provide a perfect imitation of reality but instead will adapt their display mechanisms to their users' individual requirements. The emergence of these interfaces won't rely on a single technology but will depend on advances in many areas, including computer graphics, display technology, tracking and recognition devices, natural and intuitive interactions, 3D interaction techniques, mobile and ubiquitous computing, intelligent agents, and conversational user interfaces, to name a few. The guardian angel scenario exemplifies how future developments in AR and VR user interfaces might change the way we interact with computers. Although this example is just one of several plausible scenarios, it demonstrates that AR and VR, in combination with user-centered design of their post-WIMP interfaces, can provide increased access, convenience, usability, and efficiency.

4.
Real reality     
In its most basic form, computer graphics technology renders an image of the world from a model. Having refined techniques from vector graphics, computer graphics now includes improved methods to render realistic and informative visual images of models representing microcosms of interest. Computational technology includes mechanisms to compress, communicate and combine text, audio, graphics and video to provide a unified multimedia document. Users can now decide if they want to read a story, watch a video or combine information from multiple sources to create a personalized digital experience. So what future faces computer graphics and multimedia? Can we take the technology to still another level of reality? Let's assume we can expand the scope of computer graphics to produce and render a world model for information or entertainment that surpasses the visual, also representing sound, touch, smell and taste. Although seemingly far out now, I believe computer graphics and multimedia will combine and expand to make such a technology, real reality, possible. Virtual reality (VR) foreshadows this development. I believe real reality, or what may more accurately be called remote reality, lies just around the corner. Real reality will revolutionize our society in many ways. Unlike VR, real reality systems will let users experience and interact digitally with real environments using all the human senses. However, real reality experiences will remain free of time and space constraints. You will be able to experience a remote environment digitally at your convenience wherever you are.

5.
Desktop virtualization, which makes the desktop virtual so that users can access any application through the network with any device at any time and place, is now widely used as an emerging trend. However, it is important to further improve this approach while maintaining a good user experience. Although the Simple Protocol for Independent Computing Environments (SPICE), as a virtual desktop solution, can achieve a user experience similar to interaction with a local machine, it still has many deficiencies. For instance, it does not suit environments that demand a high degree of user control, and the quality of the graphical interactive experience needs improvement. In this paper, to meet users' QoE requirements, we build a feasible file transfer and sharing mechanism and propose a graphics subsystem optimization strategy based on SPICE, namely Transparent Desktop. Through experiments, we also verify that Transparent Desktop can provide users with ubiquitous desktop services of higher efficiency, stronger user controllability and better QoE. Copyright © 2016 John Wiley & Sons, Ltd.

6.
McDonald, D.W. Computer, 2003, 36(10): 111-112.
In many popular visions of ubiquitous computing, the environment proactively responds to individuals who inhabit the space. For example, a display magically presents a personalized advertisement, the most relevant video feed, or the desired page from a secret government document. Such capability requires more than an abundance of networked displays, devices, and sensors; it relies implicitly on recommendation systems that either directly serve the end user or provide critical services to some other application. As recommendation systems evolve to exploit new advances in ubiquitous computing technology, researchers and practitioners from technical and social science disciplines must collaborate to address the challenges to their effective implementation. Although it may be impossible to perfectly anticipate each individual's needs at any place or time, ubiquitous computing will enable such systems to help people cope with an expanding array of choices.

7.
We assume that in the future any user's display platform can render fantastically complex scenes. Having finally shed the concerns related to the computer graphics medium, developers will concentrate on the message. Content will be key: no longer will users accept nonsensical, artistically vacant environments simply because they're presented in a head-mounted display. This also means that static worlds, no matter how aesthetically pleasing, will come second to environments offering interactive content. The development and provision of dynamic content lie at the heart of the problem we face. For an environment to attract significant and regular participation, it must react in an intelligent and unpredictable fashion. Today, that intelligence can come from only two sources: live human collaboration and computer-generated autonomy. Collaborative VE research combines graphics, networking, human perception, and distributed computing issues. However, these facets betray a disappointing lack of coordination. Computer-generated autonomy (CGA) will certainly become inextricably melded with computer graphics. While this article focuses on other aspects of CVEs, the National Research Council's report on Modeling and Simulation provides excellent recommendations for future avenues of research in CGA, such as behavior adaptability and human representation. Many of the infrastructure requirements for CGA-enhanced systems with a large number of synthetic actors are the same as those needed for large-scale CVEs.

8.
From visual simulation to virtual reality to games
Zyda, M. Computer, 2005, 38(9): 25-32.
During the past decades, the virtual reality community has based its development on a synthesis of earlier work in interactive 3D graphics, user interfaces, and visual simulation. Currently, the VR field is transitioning into work influenced by video games. Because much of the research and development being conducted in the games community parallels the VR community's efforts, it has the potential to affect a greater audience. Given these trends, VR researchers who want their work to remain relevant must realign to focus on game research and development. Leveraging technology from the visual simulation and virtual reality communities, serious games provide a delivery system for organizational video game instruction and training.

9.
Distributed multimedia systems typically involve sophisticated user interaction. Further, objects are allocated on physically distributed computing systems, and multimedia data must be transferred across heterogeneous networks in a timely manner. These systems often have complex requirements on user interaction, quality of service and temporal order among media streams. The design and implementation of these requirements are inherently complex and present an extraordinary design and programming challenge. Generally, these complex requirements cannot be adequately captured using a single model or design notation. The challenge amounts to (i) identification of multiple, often orthogonal models, each capturing a specific aspect of the requirements, and (ii) provision of an authorware that supports the composition of these models. In this paper, we propose to capture the multimedia requirements in three different models: configuration, user control and presentation, and demonstrate how the composition of these models can be supported by an authorware using Java and CORBA technologies. The concepts are illustrated using a real-life example based on a virtual city tour application that features distributed controls, collaborative work and multimedia presentations. Various distributed multimedia applications such as video phone, video conferencing and distributed presentation have been successfully constructed using the proposed multiple models and authorware. The results are encouraging, and the approach can shorten the development of multimedia applications considerably.

10.
11.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a free viewpoint video system prototype, based on multiple sparse cameras, that allows users to control the position and orientation of a virtual camera, enabling observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.
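The billboard technique described in this abstract can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal, hypothetical example of computing an upright, camera-facing quad for a tracked object, assuming a y-up world coordinate system:

```python
import math

def billboard_corners(obj_pos, cam_pos, width, height):
    """Return the four corners of an upright, camera-facing quad
    (billboard) centred at obj_pos, in a y-up coordinate system."""
    # Direction from the object to the camera, projected onto the
    # ground plane so the billboard stays vertical.
    dx = cam_pos[0] - obj_pos[0]
    dz = cam_pos[2] - obj_pos[2]
    norm = math.hypot(dx, dz)
    if norm == 0.0:
        dx, dz = 0.0, 1.0  # camera directly above: pick a default facing
    else:
        dx, dz = dx / norm, dz / norm
    # "Right" vector, perpendicular to the facing direction on the ground plane.
    rx, rz = dz, -dx
    hw = width / 2.0
    x, y, z = obj_pos
    return [
        (x - rx * hw, y,          z - rz * hw),  # bottom-left
        (x + rx * hw, y,          z + rz * hw),  # bottom-right
        (x + rx * hw, y + height, z + rz * hw),  # top-right
        (x - rx * hw, y + height, z - rz * hw),  # top-left
    ]
```

A renderer would then texture this quad with the view-dependent image of the object extracted from the nearest camera stream.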

12.
If ubiquitous computing (ubicomp) is to enhance physical environments, then early and accurate assessment of alternative solutions will be necessary to avoid costly deployment of systems that fail to meet requirements. This paper presents APEX, a prototyping framework that combines a 3D application server with a behaviour modelling tool. The contribution of this framework is that it allows exhaustive analysis of the behaviour models that drive the prototype while at the same time enabling immersive exploration of a virtual environment simulating the proposed system. The development of prototypes is supported through three layers: a simulation layer (using OpenSimulator), a modelling layer (using CPN Tools) and a physical layer (using external devices and real users). APEX allows movement between these layers to analyse different features, from user experience to user behaviour. The multi-layer approach makes it possible to express user behaviour in the modelling layer, provides a way to reduce the number of real users needed by adding simulated avatars, and supports user testing of hybrids of virtual and real components as well as exhaustive analysis. This paper demonstrates the approach by means of an example, placing particular emphasis on the simulation of virtual environments, low-cost prototyping and the formal analysis capabilities.

13.
Recently, ubiquitous technology has invaded almost every aspect of modern life. Several application domains have integrated ubiquitous technology to make the management of resources a dynamic task. However, the need for adequate and enforced authentication and access control models to provide safe access to sensitive information remains a critical matter in such environments. Many security models have been proposed in the literature, yet few are able to provide adaptive access decisions based on environmental changes. In this paper, we propose an approach based on our previous work [B.A. Bouna, R. Chbeir, S. Marrara, A multimedia access control language for virtual and ambient intelligence environments, In Secure Web Services (2007) 111–120] to enforce current role-based access control models [M.J. Moyer, M. Ahama, Generalized role-based access control, in: Proceedings of International Conference on Distributed Computing Systems (ICDCS), Phoenix, Arizona, USA, 2001, pp. 391–398] using multimedia objects in a dynamic environment. In essence, multimedia objects tend to be complex, memory- and time-consuming; nevertheless, they provide interesting information about users and their context (the user's surroundings, movements and gestures, people nearby, etc.). The idea behind our approach is to attribute multimedia signatures to roles and permissions, in which we integrate conditions based on users' context information described using multimedia objects, in order to limit role activation and the abuse of permissions in a given environment. We also describe our architecture, which extends the known XACML [XACML, XACML Profile for Role Based Access Control (RBAC), <http://docs.oasis-open.org/xacml/cd-xacml-rbac-profile-01.pdf>, 2008] terminology to incorporate multimedia signatures. We provide an overview of a possible implementation of the model to illustrate how it could be valuable once integrated in an intelligent environment.
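The context-limited role-activation idea can be sketched as follows. This is a simplified, hypothetical illustration: the `Role` class, `authorize` function and "nurse" example are invented for exposition, and the symbolic context attributes stand in for the paper's multimedia-signature conditions:

```python
class Role:
    """A role whose permissions can only be exercised while a context
    condition holds (a stand-in for the paper's multimedia signatures)."""
    def __init__(self, name, permissions, activation_condition):
        self.name = name
        self.permissions = set(permissions)
        self.activation_condition = activation_condition  # predicate over a context dict

def authorize(user_roles, context, permission):
    """Adaptive access decision: grant the permission only through a
    role whose activation condition holds in the current context."""
    return any(
        role.activation_condition(context) and permission in role.permissions
        for role in user_roles
    )

# Hypothetical example: a "nurse" role active only on the ward while on shift.
nurse = Role(
    "nurse",
    {"read_patient_record"},
    lambda ctx: ctx.get("location") == "ward" and ctx.get("on_shift", False),
)
```

When the context changes (the user leaves the ward), the same role no longer grants access, which is the adaptive behaviour the model aims for.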

14.
Ubiquitous recommender systems combine characteristics from ubiquitous systems and recommender systems in order to provide personalized recommendations to users in ubiquitous environments. Although not a new research area, ubiquitous recommender systems research has not yet been reviewed and classified in terms of ubiquitous research and recommender systems research, in order to deeply comprehend its nature, characteristics, relevant issues and challenges. It is our belief that ubiquitous recommenders can nowadays take advantage of the progress mobile phone technology has made in identifying items around, as well as utilize the faster wireless connections and the endless capabilities of modern mobile devices in order to provide users with more personalized and context-aware recommendations on location to aid them with their task at hand. This work focuses on ubiquitous recommender systems, while a brief analysis of the two fundamental areas from which they emerged, ubiquitous computing and recommender systems research is also conducted. Related work is provided, followed by a classification schema and a discussion about the correlation of ubiquitous recommenders with classic ubiquitous systems and recommender systems: similarities inevitably exist, however their fundamental differences are crucial. The paper concludes by proposing UbiCARS: a new class of ubiquitous recommender systems that will combine characteristics from ubiquitous systems and context-aware recommender systems in order to utilize multidimensional context modeling techniques not previously met in ubiquitous recommender systems.

15.
Wearable computers provide constant access to computing and communications resources; however, there are many unanswered questions as to how this computing power can be used to enhance communication. We describe a wearable augmented reality communication space that uses spatialised 3D graphics and audio cues to aid communication. The user is surrounded by virtual avatars of the remote collaborators, which they can interact with using natural head and body motions. The use of spatial cues means that the conferencing space can potentially support dozens of simultaneous users. We report on two experiments that show users can understand speakers better with spatial rather than non-spatial audio, and that minimal visual cues may be sufficient to distinguish between speakers. Additional informal user studies with real conference participants suggest that wearable communication spaces may offer significant advantages over traditional communication devices.
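Spatialised audio cues of the kind this abstract describes can be approximated with a simple constant-power panner. The sketch below is hypothetical and not the authors' system; it maps a collaborator's position, relative to the listener's facing direction, onto left/right channel gains:

```python
import math

def stereo_gains(listener_yaw, listener_pos, source_pos):
    """Constant-power pan: map a source's azimuth, relative to the
    listener's facing direction, onto left/right channel gains.
    Positions are (x, z) on the ground plane; yaw 0 faces +z."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    azimuth = math.atan2(dx, dz) - listener_yaw  # 0 = straight ahead
    # Clamp to the frontal half-plane, then map [-pi/2, pi/2] -> [0, 1].
    pan = max(-math.pi / 2, min(math.pi / 2, azimuth)) / math.pi + 0.5
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right
```

Because the gains are cosine/sine of the same angle, the perceived loudness stays roughly constant as the listener turns their head, which is why avatars can be distinguished by direction alone.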

16.
Distributed virtual environments

17.
In the future, video-streaming systems will have to support adaptation over an extremely large range of display requirements (e.g., 90×60 to 1920×1080). This paper presents the architectural trade-offs of bandwidth efficiency, computational cost, and storage cost to support fine-grained multiresolution video over a large set of resolutions. While several techniques have been proposed, they have focused mainly on limited spatial resolution adaptation. In this paper, we examine the ability of current techniques to support wide-range spatial resolution adaptation. Based upon experiments with real video, we propose an architecture that can support wide-range adaptation efficiently. Our results indicate that multiple encodings with limited spatial adaptation from each encoding provide good trade-offs between efficient coding and the ability to adapt the stream to various resolutions.
Jie Huang received her BS in computer and communications and MS in computer science from Beijing University of Posts and Telecommunications, Beijing, China, in 1992 and 1995 respectively, where she was an assistant professor from 1995 to 1999. Since 1999, she has been pursuing her PhD at the OGI School of Science and Engineering at Oregon Health & Science University (from 1999 to 2004) and Portland State University (since 2004). Her research interests include multimedia networking and software engineering.
Wu-chi Feng received his Ph.D. in Computer Science and Engineering from the University of Michigan in 1996. His research interests include multimedia systems, video-based sensor networking technologies, and networking. He currently serves as an Editor for the Springer-ACM Multimedia Systems Journal. He also serves on the national Orion Cyberinfrastructure Advisory committee.
Jonathan Walpole received his Ph.D. degree in Computer Science from Lancaster University, UK. He is a Professor in the Computer Science Department at Portland State University. Prior to joining PSU he was a Professor and Director of the Systems Software Laboratory at the OGI School of Science and Engineering at Oregon Health & Science University. His research interests are in operating systems, networking, distributed systems and multimedia computing. He has pioneered research in adaptive resource management and the integration of application- and system-level quality of service management. He has also done leading-edge research on dynamic specialization for enhanced performance, survivability and evolvability of large software systems. His research on distributed multimedia systems began in 1988, and in the early 1990s he led the development of one of the first QoS-adaptive Internet streaming video players.
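The "multiple encodings, each with limited adaptation" architecture implies a selection step on the client or server. Below is a minimal, hypothetical sketch of that step; the encoding table and the `pick_encoding` function are invented for illustration, not taken from the paper:

```python
def pick_encoding(encodings, target_width):
    """Choose the stored encoding whose native width is the smallest one
    at or above the target; each encoding then only needs to adapt over
    a limited spatial range (downscaling to the exact display size)."""
    candidates = [e for e in encodings if e["width"] >= target_width]
    if candidates:
        return min(candidates, key=lambda e: e["width"])
    # Target exceeds every stored resolution: serve the largest and upscale.
    return max(encodings, key=lambda e: e["width"])

# Hypothetical set of stored encodings covering the 90x60-1920x1080 range.
ENCODINGS = [
    {"name": "low", "width": 320},
    {"name": "mid", "width": 720},
    {"name": "high", "width": 1920},
]
```

Storing a few encodings and adapting within each trades extra storage for better coding efficiency than a single wide-range scalable stream, which is the trade-off the paper quantifies.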

18.
Zhang Ya-Qin (张亚勤). Chinese Journal of Computers (计算机学报), 2000, 23(9): 897.
We have witnessed an increasing convergence of digital video, computer graphics and networking technologies in the last few years. The future multimedia becomes as vivid and realistic as digital video, as structured and interactive as computer graphics, and most importantly becomes ubiquitous and fully networked, enabling compelling content access anywhere, for anyone, at any time, on any device, and in whatever form. There are, however, multiple challenges: 1. Multimedia information is huge. 2. Internet currentl…

19.
Desktop virtual reality is an emerging educational technology that offers many potential benefits for learners in online learning contexts; however, a limited body of research is available that connects current multimedia learning techniques with these new forms of media. Because most formal online learning is delivered using learning management systems, it is important to consider how best to integrate the visually complex and highly concrete desktop virtual reality into more text-driven and abstract environments such as those found in learning management systems. This review of literature examines recent signaling literature within the context of multimedia learning and hypermedia learning. Signaling is a technique that involves using cues to emphasize important information in materials (Mayer, 2009, pp. 108–117). The analysis concluded that the depth and breadth of signaling literature is severely lacking. While certain related bodies of literature can be used to inform signaling research in desktop virtual reality and online learning management systems, no studies were found that directly address these topics. This article makes several important contributions to the body of signaling literature. First, based on what is known through literature, this article is a first attempt at examining signaling as a technique for integrating desktop virtual reality with online learning management systems. Second, this analysis resolves an important gap in the literature by differentiating between signaling and cueing. Third, this article provides a survey of recent signaling-related literature and identifies specific areas that inform future work with desktop virtual reality delivered using online learning management systems. Finally, a taxonomy for classifying multimedia and hypermedia is presented as a tool for more effectively describing interventions used in signaling research.

20.
Huber, J.F. Computer, 2002, 35(10): 100-102.
The promise of ubiquitous computing is a future in which highly specialized, embedded computing devices operate seamlessly within the everyday environment and are transparent to users. Realizing this vision will require next-generation networks to support mobile multimedia devices with capabilities well beyond those of today's handsets. These networks will exploit wideband radio access technologies and IP-based protocols to provide IP transparency (all network elements support IP); mobility management for a globally networked environment; unique addressing for every user; personalization of information; positioning to enable location-dependent services; and end-to-end security. Such functionality requires more than providing wireless Internet access and e-mail.
