Similar Documents
20 similar documents found; search time: 31 ms
1.
2.
Future mobile nodes must support applications over multiple network interfaces. The Proxy Mobile IPv6 (PMIPv6) protocol provides network-based mobility management for mobile nodes, requiring no participation from the mobile node itself. This paper analyzes the application of multi-interface technology under PMIPv6 and details a method for multi-interface access to PMIPv6 based on virtual interfaces. Tests carried out in a laboratory integrated development environment show that virtual-interface-based multi-interface access to PMIPv6 essentially realizes multihoming and heterogeneous handover functionality.

3.
A virtual human is an effective interface for interacting with users and plays an important role in carrying out certain tasks. As social networking sites become more and more popular, we propose a Facebook-aware virtual human. In this paper, social networking sites are used to empower virtual humans for interpersonal conversational interaction. We combine the Internet world, the physical world, and a 3D virtual world to create a new interface through which users interact with an autonomous virtual human that behaves like a real modern human. To take advantage of social networking sites, the virtual human gathers information about a user from the user's profile, likes, and dislikes, and gauges mood from the most recent status update. In two user studies, we investigated whether and how this new interface can enhance human–virtual human interaction. The positive results from these studies offer guidelines for the research and development of future virtual-human interfaces.

4.
The growing interest in multimodal interface design is inspired in large part by the goals of supporting more transparent, flexible, efficient, and powerfully expressive means of human-computer interaction than in the past. Multimodal interfaces are expected to support a wider range of diverse applications, be usable by a broader spectrum of the average population, and function more reliably under realistic and challenging usage conditions. In this article, we summarize the emerging architectural approaches for interpreting speech and pen-based gestural input in a robust manner, including early and late fusion approaches and the new hybrid symbolic-statistical approach. We also describe a diverse collection of state-of-the-art multimodal systems that process users' spoken and gestural input. These applications range from map-based and virtual reality systems for engaging in simulations and training, to field medic systems for mobile use in noisy environments, to web-based transactions and standard text-editing applications that will reshape daily computing and have a significant commercial impact. To realize successful multimodal systems of the future, many key research challenges remain to be addressed. Among these challenges are the development of cognitive theories to guide multimodal system design, and the development of effective natural language processing, dialogue processing, and error-handling techniques. In addition, new multimodal systems will be needed that can function more robustly and adaptively, and with support for collaborative multiperson use. Before this new class of systems can proliferate, toolkits also will be needed to promote software development for both simulated and functioning systems.
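As a hedged illustration of the late-fusion idea mentioned above (each modality decoded separately, with hypotheses merged afterward by a weighted score), consider the toy C sketch below. The structures, labels, and weight are assumptions invented for illustration, not the architecture of any system surveyed in the article.

    /* Toy sketch of late fusion: speech and gesture are recognized
       independently, then their hypothesis scores are combined. */
    #include <stdio.h>

    typedef struct { const char *label; double score; } Hypothesis;

    /* A joint interpretation scores well only when both unimodal
       recognizers support compatible hypotheses. */
    double fuse(const Hypothesis *speech, const Hypothesis *gesture,
                double w_speech) {
        return w_speech * speech->score
             + (1.0 - w_speech) * gesture->score;
    }

    int main(void) {
        Hypothesis s = { "move unit", 0.82 };    /* speech recognizer */
        Hypothesis g = { "point:(4,7)", 0.67 };  /* gesture recognizer */
        printf("fused score for \"%s\" + \"%s\": %.2f\n",
               s.label, g.label, fuse(&s, &g, 0.6));
        return 0;
    }

Early fusion, by contrast, would combine feature streams before recognition; the hybrid symbolic-statistical approach mixes rule-based and learned components.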

5.
Interaction techniques for interactive television (iTV) are currently complex and difficult to use for a wide range of viewers. Few previous studies have examined the potential benefits of multimodal dialogue interaction in the context of iTV with respect to flexibility, usability, efficiency, and accessibility. This paper investigates the benefits of introducing speech and connected dialogue for iTV interaction, and presents a case study in which a prototype system was built that allows users to navigate the information space and control the operation of the TV through a speech-based natural language interface. The system was evaluated by analysing the user experience in five categories capturing essential aspects of iTV interaction: interaction style, information load, data access, effectiveness, and initiative. Design considerations relevant for speech and dialogue information systems for TV interfaces also emerged from the analysis.

6.
7.
Ambient Assisted Living (AAL) systems must provide adapted services that are easily accessible by a wide variety of users. This is only possible if communication between the user and the system is carried out through an interface that is simple, rapid, effective, and robust. Natural language interfaces such as dialog systems fulfill these requirements, as they are based on spoken conversation that resembles human communication. In this paper, we enhance systems interacting in AAL domains by incorporating context-aware conversational agents that consider the external context of the interaction and predict the user's state. The user's state is built from their emotional state and intention, and is recognized by a module conceived as an intermediate phase between natural language understanding and dialog management in the architecture of the conversational agent. This prediction, carried out for each user turn in the dialog, makes it possible to adapt the system dynamically to the user's needs. We have evaluated our proposal by developing a context-aware system adapted to patients suffering from chronic pulmonary diseases, and provide a detailed discussion of its positive influence on the success of the interaction, the information and services provided, and the perceived quality.
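Purely as an illustration of where such a user-state module could sit in a dialog pipeline, here is a minimal C sketch; every name, type, and threshold below is hypothetical rather than taken from the paper.

    /* Hypothetical sketch: a user-state predictor runs between natural
       language understanding and dialog management on every turn, and
       the dialog manager adapts to the predicted state. */
    #include <stdio.h>
    #include <string.h>

    typedef enum { EMO_NEUTRAL, EMO_DISTRESSED } Emotion;
    typedef enum { INT_ASK_INFO, INT_REQUEST_HELP } Intention;

    typedef struct {
        Emotion   emotion;    /* e.g., from acoustic/lexical cues */
        Intention intention;  /* e.g., from the NLU interpretation */
    } UserState;

    /* Intermediate phase between NLU and dialog management. */
    UserState predict_user_state(const char *nlu_frame, double stress) {
        UserState s;
        s.emotion = stress > 0.7 ? EMO_DISTRESSED : EMO_NEUTRAL;
        s.intention = strstr(nlu_frame, "help") ? INT_REQUEST_HELP
                                                : INT_ASK_INFO;
        return s;
    }

    /* The dialog manager chooses its next action per user state. */
    const char *next_system_action(UserState s) {
        if (s.emotion == EMO_DISTRESSED) return "offer_assistance_calmly";
        return s.intention == INT_REQUEST_HELP ? "provide_help"
                                               : "answer_question";
    }

    int main(void) {
        UserState s = predict_user_state("request: help breathing", 0.9);
        printf("system action: %s\n", next_system_action(s));
        return 0;
    }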

8.
张茜  张继荣 《微型机与应用》2012,31(5):11-13,16
This paper shows how, in BREW, the C language can emulate the object-oriented features of C++, achieving separation of interface declaration from implementation, support for multiple interfaces, and easy interface extensibility. Through examples, it explains how BREW separates interface from implementation by means of virtual function tables, and how the ISHELL interface is used to support and extend multiple interfaces. Compared with an ordinary C-language interface, this approach allows an interface to be modified without affecting the application, and the resulting interfaces are more extensible and easier to manage.
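To make the vtable technique concrete, here is a minimal, self-contained C sketch of an interface separated from its implementation through a table of function pointers. The names (IShape, Circle, Circle_New) are hypothetical stand-ins invented for illustration; BREW's own interfaces such as ISHELL follow the same struct-of-function-pointers pattern but are not reproduced here.

    /* Hypothetical example: an "interface" in plain C as a struct of
       function pointers (a virtual function table). */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct IShape IShape;

    /* The vtable: this is the interface declaration. Callers see only
       this, never the concrete type behind it. */
    typedef struct {
        double (*Area)(IShape *self);
        void   (*Release)(IShape *self);
    } IShapeVtbl;

    struct IShape {
        const IShapeVtbl *pvt;   /* every object begins with its vtable */
    };

    /* One concrete implementation, hidden from callers. */
    typedef struct {
        IShape base;             /* must come first so casts are valid */
        double radius;
    } Circle;

    static double Circle_Area(IShape *self) {
        Circle *c = (Circle *)self;
        return 3.14159265358979 * c->radius * c->radius;
    }

    static void Circle_Release(IShape *self) {
        free(self);
    }

    static const IShapeVtbl circle_vtbl = { Circle_Area, Circle_Release };

    IShape *Circle_New(double radius) {
        Circle *c = malloc(sizeof *c);
        if (!c) return NULL;
        c->base.pvt = &circle_vtbl;
        c->radius = radius;
        return &c->base;
    }

    int main(void) {
        IShape *s = Circle_New(2.0);
        if (!s) return 1;
        /* Dispatch goes through the vtable, so Circle's internals can
           change without modifying this caller. */
        printf("area = %f\n", s->pvt->Area(s));
        s->pvt->Release(s);
        return 0;
    }

Because the application holds only IShape pointers and calls through the table, the implementation can be replaced or extended (new tables, new concrete types) without touching caller code, which is the extensibility property the abstract highlights.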

9.
《Artificial Intelligence》2007,171(8-9):568-585
Head pose and gesture offer several conversational grounding cues and are used extensively in face-to-face interaction among people. To accurately recognize visual feedback, humans often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. In this paper we describe how contextual information can be used to predict visual feedback and improve recognition of head gestures in human–computer interfaces. Lexical, prosodic, timing, and gesture features can be used to predict a user's visual feedback during conversational dialog with a robotic or virtual agent. In non-conversational interfaces, context features based on user–interface system events can improve detection of head gestures for dialog box confirmation or document browsing. Our user study with prototype gesture-based components indicates quantitative and qualitative benefits of gesture-based confirmation over conventional alternatives. Using a discriminative approach to contextual prediction and multi-modal integration, performance of head gesture detection was improved with context features even when the topic of the test set was significantly different from that of the training set.

10.
The context of mobility raises many issues for geospatial applications providing location-based services. Mobile device limitations, such as small user interface footprint and pen input whilst in motion, result in information overload on such devices and interfaces which are difficult to navigate and interact with. This has become a major issue as mobile GIS applications are now being used by a wide group of users, including novice users such as tourists, for whom it is essential to provide easy-to-use applications. Despite this, comparatively little research has been conducted to address the mobility problem. We are particularly concerned with the limited interaction techniques available to users of mobile GIS which play a primary role in contributing to the complexity of using such an application whilst mobile. As such, our research focuses on multimodal interfaces as a means to present users with a wider choice of modalities for interacting with mobile GIS applications. Multimodal interaction is particularly advantageous in a mobile context, enabling users of location-based applications to choose the mode of input that best suits their current task and location. The focus of this article concerns a comprehensive user study which demonstrates the benefits of multimodal interfaces for mobile geospatial applications.

11.
Ship bridge systems are increasingly consolidated into Integrated Bridge Systems in modern offshore vessels. By integrating previously separate equipment, it becomes possible to create more user-friendly interfaces, leading to safer and more efficient operations. A consequence of Integrated Bridge Systems is that the make-up of ship bridge interfaces can now be rethought in its entirety. This article reports on a new interface concept for integrated ship bridges developed in the research and innovation project Ulstein Bridge Concept. The interface concept connects discrete and generic interaction methods on ship bridges by introducing touch-sensitive physical interaction devices. We discuss the concept in light of calm technology and show how the new system offers peripheral interaction techniques that limit the need for generic interaction. Although more research is needed, we suggest the new system offers a promising pathway toward better integrated ship interfaces by allowing a better balance between discrete and generic interaction methods.

12.
Traditional approaches to natural language dialogue interface design have adopted ordinary human-human conversation as the model for online human-computer interaction. The attempt to deal with all the subtleties of natural dialogues, such as topic focus, coherence, ellipsis, pronominal reference, etc. has resulted in prototype systems that are enormously complex and computationally expensive.

In a series of experiments, we explored ways of minimizing the processing burden of a dialogue system by channeling user input towards a more tractable, though still natural, form of English-language questions. By linking a pair of terminals, we presented subjects with two different dialogue styles as a framework for online help in the domain of word processing. The first dialogue style involved an ordinary conversational format. The second involved a simulation of an automated dialogue system, including apparent processing restrictions and ‘system process messages’ to inform the subject of the steps taken by the system during query analysis. In both cases human tutors played the role of the help system. After each dialogue session, subjects were interviewed to determine their assessments of the naturalness and usability of the dialogue interface.

We found that user input became more tractable to parsing and query analysis as the dialogue style became more formalized, yet the subjective assessment of naturalness and usability remained fairly constant. This suggests that techniques for channeling user input in a dialogue system may be effectively employed to reduce processing demands without compromising the benefits of a natural language interface. Theoretically, this data lends support to the hypothesis that unrestricted human-human conversation is not the most appropriate model for the design of human-computer dialogue interfaces.


13.
The usage patterns of speech and visual input modes are investigated as a function of relative input mode efficiency for both desktop and personal digital assistant (PDA) working environments. For this purpose the form-filling part of a multimodal dialogue system is implemented and evaluated; three multimodal modes of interaction are implemented: "Click-to-Talk," "Open-Mike," and "Modality-Selection." "Modality-Selection" implements an adaptive interface where the system selects the most efficient input mode at each turn, effectively alternating between a "Click-to-Talk" and "Open-Mike" interaction style, as proposed in "Modality tracking in the multimodal Bell Labs Communicator" (A. Potamianos et al., Proceedings of the Automatic Speech Recognition and Understanding Workshop, 2003). The multimodal systems are evaluated and compared with the unimodal systems. Objective and subjective measures used include task completion, task duration, turn duration, and overall user satisfaction. Turn duration is broken down into interaction time and inactivity time to better measure the efficiency of each input mode. Duration statistics and empirical probability density functions are computed as a function of interaction context and user. Results show that the multimodal systems outperform the unimodal systems in terms of objective and subjective criteria. Also, users tend to use the most efficient input mode at each turn; however, a bias towards the default input modality and a general bias towards the speech modality also exist. Results demonstrate that although users exploit some of the available synergies in multimodal dialogue interaction, further efficiency gains can be achieved by designing adaptive interfaces that fully exploit these synergies.
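As a rough sketch of how a per-turn "Modality-Selection" policy might look, the following C fragment keeps running duration statistics per input mode and interaction context and selects the mode with the lowest observed mean turn duration. All names, contexts, and numbers are illustrative assumptions, not the evaluated system.

    /* Illustrative sketch: adaptive per-turn input-mode selection
       based on running mean turn duration per context. */
    #include <stdio.h>

    enum { CLICK_TO_TALK, OPEN_MIKE, NUM_MODES };
    enum { CTX_FORM_FIELD, CTX_CONFIRM, NUM_CONTEXTS };

    typedef struct {
        double total_secs[NUM_CONTEXTS][NUM_MODES];
        int    turns[NUM_CONTEXTS][NUM_MODES];
    } ModeStats;

    /* Record how long a completed turn took in a mode/context. */
    void record_turn(ModeStats *s, int ctx, int mode, double secs) {
        s->total_secs[ctx][mode] += secs;
        s->turns[ctx][mode]++;
    }

    /* Pick the mode with the lowest mean duration for this context;
       fall back to a default mode when there is no data yet. */
    int select_mode(const ModeStats *s, int ctx, int default_mode) {
        int best = default_mode;
        double best_mean = -1.0;
        for (int m = 0; m < NUM_MODES; m++) {
            if (s->turns[ctx][m] == 0) continue;
            double mean = s->total_secs[ctx][m] / s->turns[ctx][m];
            if (best_mean < 0 || mean < best_mean) {
                best_mean = mean;
                best = m;
            }
        }
        return best;
    }

    int main(void) {
        ModeStats s = {0};
        record_turn(&s, CTX_FORM_FIELD, CLICK_TO_TALK, 6.2);
        record_turn(&s, CTX_FORM_FIELD, OPEN_MIKE, 4.1);
        printf("selected mode: %d\n",
               select_mode(&s, CTX_FORM_FIELD, CLICK_TO_TALK));
        return 0;
    }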

14.
Embodied conversational agents (ECA) are a type of intelligent, multimodal computer interface that allow computers to interact with humans in a face-to-face manner. It is quite feasible that ECAs will someday replace the common keyboard as a human–computer interface. However, we have much to understand about how people interact with such embodied virtual agents. In this study, we performed a laboratory experiment, in an airport screening context, to assess how people’s linguistic behavior changes with their perceptions of the ECA’s power and likeability. The results show that people tend to manifest more verbal immediacy and expressivity, as well as offer more information about themselves, with ECAs they perceive as more likeable and less powerful.

15.
Interaction with coming Smart Environments requires research on methods for designing a new generation of human-environment interfaces. The paper outlines an original approach to the design of multimodal applications that, while valid for integration on today’s devices, also aims to be flexible enough to remain consistent through the transition to future Smart Environments. Such environments will likely be structured in a more complex manner, requiring that interaction with the services they offer be made available through multimodal/unimodal interfaces integrated into objects of everyday use. In line with the most recent research tendencies, the approach is centred not only on the user interface part of a system, but on the design of a comprehensive solution, including a dialogue model meant to provide a robust support layer on which multimodal interaction builds. Specific characteristics of the approach, and of a sample application being developed to validate it, are discussed in the paper, along with some implementation details.

16.
Web-based communication technologies such as YouTube can provide opportunities for social contact, especially between older and younger people, and help address issues of social isolation. Currently our understanding of the dynamics of social interaction within this context (particularly for older people) is limited. Elaborating upon this understanding will make it possible to proactively facilitate and support this form of intergenerational social contact. This study focuses on the experiences of an 80-year-old video blogger (vlogger), Geriatric1927, and a video dialogue that develops between himself and three of his younger viewers on a particular topic. Through a multimodal interactional analysis, we show how vloggers create a conversational context between one another through the YouTube website. In particular we describe how vloggers use different communicative modes to establish eye contact, take turns in conversation, share embodied gestures, share their understandings and negotiate simultaneous audiences. Despite a disconnected and ambiguous sense of the other, YouTube is able to facilitate a conversational context in which common ground is shared and social contact and intergenerational communication can occur.

17.
In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only interface condition, and users felt that the MMI was more satisfying to use than the speech-only interface condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example, in AR navigation tasks, mobile AR interfaces, or AR game applications.

18.
The main task of a service robot with a voice-enabled communication interface is to engage a user in dialogue, providing access to the services it is designed for. In managing such interaction, inferring the user goal (intention) from the request for a service at each dialogue turn is the key issue. Under service robot deployment conditions, speech recognition limitations with noisy speech input and inexperienced users may jeopardize user goal identification. In this paper, we introduce a grounding-state-based model motivated by reducing the risk of communication failure due to incorrect user goal identification. The model exploits the multiple modalities available in the service robot system to provide evidence for reaching grounding states. For the speech input to be treated as sufficiently grounded (correctly understood) by the robot, four proposed states have to be reached. Bayesian networks combining speech and non-speech modalities during user goal identification are used to estimate the probability that each grounding state has been reached. These probabilities serve as a basis for detecting whether the user is attending to the conversation, as well as for deciding on an alternative input modality (e.g., buttons) when the speech modality is unreliable. The Bayesian networks used in the grounding model are specially designed for modularity and computationally efficient inference. The potential of the proposed model is demonstrated by comparing a conversational system for the mobile service robot RoboX employing only speech recognition for user goal identification with a system equipped with multimodal grounding. The evaluation experiments use component- and system-level metrics for technical (objective) and user-based (subjective) evaluation, with multimodal data collected during the conversations of the robot RoboX with users.
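A drastically simplified illustration of combining speech and non-speech evidence probabilistically is sketched below as a naive-Bayes-style update; the evidence types and numbers are invented for illustration, and the paper's actual Bayesian networks are richer and specially structured for modular, efficient inference.

    /* Toy sketch: estimate the probability that one grounding state is
       reached by combining independent evidence sources with Bayes'
       rule. All likelihood values are made up for illustration. */
    #include <stdio.h>

    /* Likelihoods P(observation | grounded) and P(observation | not). */
    typedef struct { double p_if_grounded, p_if_not; } Evidence;

    double grounding_posterior(double prior, const Evidence *obs, int n) {
        double pg = prior, pn = 1.0 - prior;
        for (int i = 0; i < n; i++) {      /* naive-Bayes-style update */
            pg *= obs[i].p_if_grounded;
            pn *= obs[i].p_if_not;
        }
        return pg / (pg + pn);             /* normalize */
    }

    int main(void) {
        /* High ASR confidence, user facing the robot, button modality
           carrying no information this turn. */
        Evidence obs[] = {
            { 0.90, 0.30 },   /* speech recognizer confidence high */
            { 0.80, 0.40 },   /* user's face oriented toward robot */
            { 0.50, 0.50 },   /* button input uninformative here   */
        };
        printf("P(grounded) = %.3f\n", grounding_posterior(0.5, obs, 3));
        return 0;
    }

A low posterior would then trigger the fallback behavior the abstract describes, such as switching to the button modality.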

19.
The concept of physical web interfaces attempts to change our relationship to conventional Internet interfaces by providing a framework for bringing virtual processes into the real world. By attaching new physical inputs to things we perceive as truly virtual, we add a socio-critical dimension to the interaction of people and machines. This infiltration helps to augment our understanding of data pathways, explore virtual handicaps, regain control of how we access information, manifest metaphors as real entities, and integrate the Internet into already familiar interfaces. Combining form with function, these projects illustrate how simple interaction can lead to complex outcomes.

20.
In the early adoption phase of business-to-consumer (B2C) ecommerce, the tasks that proved most conducive to online consumer interaction typically were goal-directed, being clear in sequence and structure. A key challenge in ecommerce is the ability to design interfaces that support experiential tasks in addition to goal-directed tasks. Most of the ecommerce research on interface design, however, has focused on goal-directed tasks and has not addressed experiential tasks. Based on the literature from interface metaphors and mental models, this paper explores the use of tangible attributes derived from the physical business domain as a technique for designing an interface that effectively supports experiential tasks. A laboratory experiment was designed and conducted to test the impact of two types of interfaces and business domain familiarity when completing an experiential task. Because consumers need to retain and recall information to evaluate products/services or to make brand associations, retention/recall of information was measured on both the day of the treatment and after a 2-day lag. Results revealed that the interface based upon the business domain metaphor stimulated higher levels of retention and recall of information and thus provided the desired support for experiential tasks. Further, users with weaker domain familiarity showed the greatest improvement in retention and recall, particularly after a 2-day lag, when using the interface with the business domain metaphor design.
