Similar Literature
20 similar documents found (search time: 15 ms)
1.
This paper reports the utility of eye-gaze, voice and manual response in the design of a multimodal user interface. A device- and application-independent user interface model (VisualMan) for 3D object selection and manipulation was developed and validated in a prototype interface based on a 3D cube manipulation task. The multimodal inputs are integrated in the prototype interface based on the priority of modalities and the interaction context. The implications of the model for virtual reality interfaces are discussed, and a virtual environment using the multimodal user interface model is proposed.
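
The abstract describes integrating the three input channels according to modality priority and interaction context. Below is a minimal sketch of how such priority-based fusion might look; the event structure, the priorities and the `resolve` helper are illustrative assumptions, not the actual VisualMan implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class ModalityEvent:
    priority: int                                        # lower value = higher priority (assumed)
    modality: str = field(compare=False)                 # "gaze", "voice" or "manual"
    target: Optional[str] = field(compare=False, default=None)
    command: Optional[str] = field(compare=False, default=None)

def resolve(events, context):
    """Pick an object and an action from concurrent modality events.

    Events are considered in priority order; the first event that names a
    target (or a command) wins, falling back to the current selection held
    in the interaction context.
    """
    target, command = context.get("selected"), None
    for ev in sorted(events):                            # highest priority first
        if ev.target is not None and target is None:
            target = ev.target
        if ev.command is not None and command is None:
            command = ev.command
    return target, command

# Example: the user looks at cube "A" while saying "rotate".
events = [ModalityEvent(2, "gaze", target="cube_A"),
          ModalityEvent(1, "voice", command="rotate")]
print(resolve(events, {}))                               # ('cube_A', 'rotate')
```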

2.
3.
Subjects practiced drawing a figure on a computer screen by means of two interaction styles: command codes and direct manipulation. These two interaction styles demanded different cognitive resources of the user and different times to perform the task. After practice, subjects performed multiple trials in three experimental conditions: in a time-limit condition, with office noise, and in a neutral working condition. The drawing task was performed concurrently with one of three secondary tasks, each tapping different resources. In Experiment 1, subjects were divided into two equal-sized groups. Each group performed the task with only one of the two learned interaction styles. Secondary- and primary-task performance indicated no difference in workload between the two interaction styles; only with the most demanding secondary task did the use of command codes result in higher workload. In Experiment 2, subjects performed the drawing task in each trial using their preferred interaction style. Consistent individual preferences for the interaction styles and a flexible use of the styles according to working conditions emerged, together with improved performance.

4.
The Internet revolution was largely due to a killer interface based on HTML. Now intranets are poised to trigger the next paradigm in Web interfaces. Industrial applications in particular require a shift in orientation from pushing data to users to presenting them with sophisticated process-based interfaces. The specific theme of this paper is process-oriented user interfaces as opposed to data-oriented interfaces.

5.
Fowler, M. Software, IEEE, 2001, 18(2): 96-97
The first program I wrote on a salary was scientific calculation software in Fortran. As I was writing, I noticed that the code running the primitive menu system differed in style from the code carrying out the calculations. So I separated the routines for these tasks, which paid off when I was asked to create higher-level tasks that combined several of the individual menu steps: I could just write a routine that called the calculation routines directly without involving the menus. Thus I learned for myself a design principle that's served me well in software development: keep your user interface code separate from everything else. It's a simple rule, embodied in more than one application framework, but it's often not followed, which causes quite a bit of trouble.
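
As a toy illustration of the separation Fowler describes (the column itself shows no code), here is a sketch in Python with invented routine names: the calculation layer knows nothing about the menu, so a higher-level batch task can call it directly.

```python
# Calculation layer: knows nothing about menus, prompts or formatting.
def stress_ratio(load: float, area: float) -> float:
    return load / area

def safety_margin(load: float, area: float, limit: float) -> float:
    return limit - stress_ratio(load, area)

# UI layer: owns all prompting and output, and only calls the layer above.
def run_menu() -> None:
    load = float(input("Load (N): "))
    area = float(input("Area (mm^2): "))
    print(f"Stress ratio: {stress_ratio(load, area):.2f} N/mm^2")

# A higher-level task reuses the calculation routines without the menu,
# which is exactly the payoff described in the column.
def batch_report(cases):
    return [safety_margin(load, area, limit) for load, area, limit in cases]

print(batch_report([(1000.0, 50.0, 30.0), (800.0, 40.0, 25.0)]))   # [10.0, 5.0]
```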

6.
Building a multimodal human-robot interface
When we begin to build and interact with machines or robots that either look like humans or have human functionalities and capabilities, people may well interact with their human-like machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it similarly to how humans interact with other creatures with faces. Specifically, a human might talk to it, gesture to it, smile at it, and so on. If a human interacts with a computer or a machine that understands spoken commands, the human might converse with the machine, expecting it to have competence in spoken language. In our research on a multimodal interface to mobile robots, we have assumed a model of communication and interaction that, in a sense, mimics how people communicate. Our interface therefore incorporates both natural language understanding and gesture recognition as communication modes. We limited the interface to these two modes to simplify integrating them in the interface and to make our research more tractable. We believe that with an integrated system, the user is less concerned with how to communicate (which interactive mode to employ for a task), and is therefore free to concentrate on the tasks and goals at hand. Because we integrate all our system's components, users can choose any combination of our interface's modalities. The onus is on our interface to integrate the input, process it, and produce the desired results.
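
A conceptual sketch of the kind of speech-and-gesture integration the abstract alludes to is shown below: a spoken deictic command ("go there") stays incomplete until a pointing gesture supplies the location. The data structures and the `fuse` function are assumptions made for illustration, not the authors' implementation.

```python
from typing import Optional, Tuple

def fuse(utterance: str, pointed_at: Optional[Tuple[float, float]]):
    """Combine natural language with gesture: a deictic reference in the
    utterance is resolved against the location supplied by a pointing gesture."""
    tokens = utterance.lower().split()
    if "there" in tokens or "that" in tokens:
        if pointed_at is None:
            return ("clarify", "Where do you mean?")     # ask the user to point
        return ("navigate", pointed_at)
    if tokens[:1] == ["stop"]:
        return ("stop", None)
    return ("unknown", utterance)

print(fuse("go there", pointed_at=(3.5, 1.2)))   # ('navigate', (3.5, 1.2))
print(fuse("go there", pointed_at=None))         # ('clarify', 'Where do you mean?')
```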

7.
This article defines a quantitative goal that is cheap to measure for the usability of a business application system for casual users. The article also describes a cost-effective method for attaining the goal. The goal is to eliminate all user interface disasters (UIDs) in a given system. UIDs are usability problems that seriously annoy users or prevent them from accomplishing their work without help from another human being. The method consists of a series of simple user tests without audio or video recording, and with little analysis after each user test. The article concludes by describing Baltica's results from applying the method to a medium-size business application for casual users.

8.
User interface design and coding can be complex and messy. We describe a system that uses code search to simplify and automate the exploration of such code. We start with a simple sketch of the desired interface along with a set of keywords describing the application context. If necessary, we convert the sketch into a scalable vector graphics diagram. We then use existing code search engines to find results based on the keywords. We look for potential Java-based graphical user interface solutions within those results and apply a series of code transformations to the solutions to generate derivative solutions, aiming to obtain solutions that constitute only the user interface and that will compile and run. We run the resultant solutions and compare the generated interfaces to the user's sketches. Finally, we let programmers interact with the matched solutions and return the running code for the solutions they choose. The system is useful for exploring alternatives to the initial interface and for browsing graphical user interfaces in a code repository.
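
The search-transform-run-compare loop described above can be summarized in a few lines. The sketch below is schematic: every stub function stands in for a real component of the pipeline (the search engines, the Java compiler, the renderer and the sketch matcher) and is invented for illustration.

```python
# Stubs standing in for the real components of the described pipeline.
def search(keywords):            return [{"code": "class LoginPanel { /* Swing UI */ }"}]
def extract_gui_code(result):    return [result["code"]]          # keep only UI-related Java
def transforms(solution):        return [solution]                # derivative solutions
def compiles_and_runs(variant):  return True
def sketch_similarity(variant, sketch_svg): return 0.8            # rendered UI vs. sketch

def find_ui_candidates(sketch_svg, keywords):
    """Keyword search -> extract GUI code -> transform -> compile and run -> rank by sketch match."""
    candidates = []
    for result in search(keywords):
        for solution in extract_gui_code(result):
            for variant in transforms(solution):
                if compiles_and_runs(variant):
                    candidates.append((sketch_similarity(variant, sketch_svg), variant))
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return [variant for _, variant in candidates]

print(find_ui_candidates("<svg>...</svg>", ["login", "form"]))
```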

9.
Towards automatic evaluation of multimodal user interfaces
J. Coutaz, D. Salber, S. Balbo. Knowledge, 1993, 6(4): 267-274
The evaluation of the usability and learnability of a computer system may be performed with predictive models during the design phase. It may also be done on the executable code, as well as by observing the user in action. In the latter case, data collected in vivo must be processed. The goal is to provide software support for performing this difficult and time-consuming task.

The paper presents an early analysis of, and experience relating to, the automatic evaluation of multimodal user interfaces. With this end in view, a generic Wizard of Oz platform has been designed to allow the observation and automatic recording of subjects' behavior while they interact with a multimodal interface. It is then shown how recorded data can be analyzed to detect behavioral patterns, and how deviations of such patterns from a data-flow-oriented task model can be exploited by a software usability critic.
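
As a very reduced illustration of the last step, the sketch below compares a recorded event stream with an expected task sequence and reports the deviations; the flat event lists and the `deviations` function are stand-ins invented here, not the paper's data-flow task model or usability critic.

```python
def deviations(observed: list, task_model: list) -> list:
    """Report where the recorded event stream departs from the expected task sequence."""
    problems, expected = [], iter(task_model)
    next_step = next(expected, None)
    for event in observed:
        if event == next_step:
            next_step = next(expected, None)
        else:
            problems.append(f"unexpected '{event}' while waiting for '{next_step}'")
    if next_step is not None:
        problems.append(f"task left unfinished at step '{next_step}'")
    return problems

# Wizard-of-Oz style log of one subject vs. the modelled task.
log   = ["select_flight", "say_date", "point_on_map", "confirm"]
model = ["select_flight", "point_on_map", "say_date", "confirm"]
print(deviations(log, model))
```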


10.
This paper describes the user interface facilities of the ECLIPSE integrated project support environment. This interface is based on a consistent metaphor called the ‘control panel’ metaphor and includes standard help and message-handling systems. The paper describes these as well as some of the interface standards which have been developed. The interface has been implemented on top of the ‘applications interface’, which provides a portable, hardware-independent interface for software tools.

11.
Tools which provide graphical editing techniques for the design of user interface presentations are increasingly commonplace. Such tools vary widely in the mechanisms used to define user interfaces, and while some are general purpose, others are targeted at particular application domains. Designers faced with varying requirements must choose one tool and live with its shortcomings, purchase a number of different tools, or implement their own. The paper describes an approach to facilitating the latter by providing a library of augmented user interface components, called MOG objects, which embody both end-user and editing semantics. User interface design tools based on this approach need only provide mechanisms for composing MOG objects into user interfaces and the addition of any other, higher-level functionality. MOG-based user interfaces retain an in-built editing capability and are inherently tailorable.
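
The sketch below illustrates, in Python, what a component carrying both end-user and editing semantics might look like; the class and its methods are invented for illustration and are not the actual MOG library.

```python
class EditableButton:
    """A widget with two kinds of semantics: what it does for the end user
    (activate) and how it can be reshaped by a designer (edit_*)."""
    def __init__(self, label, on_press):
        self.label, self.on_press = label, on_press
        self.edit_mode = False

    # End-user semantics.
    def activate(self):
        if not self.edit_mode:
            self.on_press()

    # Editing semantics, built into the component itself.
    def edit_label(self, new_label):
        self.label = new_label

button = EditableButton("Save", on_press=lambda: print("saved"))
button.activate()                 # end-user behaviour
button.edit_mode = True
button.edit_label("Save draft")   # in-built tailoring, no external tool needed
```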

12.
Interface templates are a new interface design pattern. This paper proposes the concept of interface templates based on an interface model, realizing the transformation from abstract interfaces to concrete interfaces within interface development methods that support automatic interface generation. It discusses the composition and representation of interface templates, their classification, and the architecture of an interface template library, and explains how interface templates are used.
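
Below is a minimal sketch of the abstract-to-concrete transformation idea: an abstract interface description is instantiated into concrete widgets by a template. The element types, template entries and HTML output are invented for illustration and are not the paper's actual template representation.

```python
# Abstract interface: what information is needed, not how it is rendered.
abstract_ui = [
    {"field": "username", "type": "text",   "required": True},
    {"field": "role",     "type": "choice", "options": ["admin", "guest"]},
]

# An interface template maps each abstract element type to a concrete widget.
desktop_template = {
    "text":   lambda e: f'<input name="{e["field"]}" type="text">',
    "choice": lambda e: '<select name="{}">{}</select>'.format(
        e["field"], "".join(f"<option>{o}</option>" for o in e["options"])),
}

def instantiate(abstract, template):
    """Abstract-to-concrete transformation driven by a template."""
    return "\n".join(template[e["type"]](e) for e in abstract)

print(instantiate(abstract_ui, desktop_template))
```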

13.

Historically, research in the Multimedia community has focused on output modalities, through studies on timing and multimedia processing. The Multimodal Interaction community, on the other hand, has focused on user-generated modalities, through studies on Multimodal User Interfaces (MUI). In this paper, aiming to assist the development of multimedia applications with MUIs, we propose the integration of concepts from those two communities in a single high-level programming framework. The framework integrates user modalities, both user-generated (e.g., speech, gestures) and user-consumed (e.g., audiovisual, haptic), in declarative programming languages for the specification of interactive multimedia applications. To illustrate our approach, we instantiate the framework in the NCL (Nested Context Language) multimedia language. NCL is the declarative language for developing interactive applications for Brazilian Digital TV and an ITU-T Recommendation for IPTV services. To help evaluate our approach, we discuss a usage scenario and implement it as an NCL application extended with the proposed multimodal features. We also compare the expressiveness of the multimodal NCL against existing multimedia and multimodal languages, for both input and output modalities.
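
NCL documents are XML-based, so the fragment below is only a conceptual Python sketch of the core idea: a declarative document that binds a user-generated modality (a speech recognizer) to an action on a user-consumed one (a video). The element and event names are invented for illustration, not the actual NCL extension proposed in the paper.

```python
# Conceptual model of a declarative multimedia document extended with an input modality.
document = {
    "media": [
        {"id": "movie",      "type": "video", "src": "movie.mp4"},
        {"id": "voice_stop", "type": "speech_recognition", "vocabulary": ["stop"]},
    ],
    "links": [
        # When the recognizer matches "stop", pause the video.
        {"on": ("voice_stop", "onRecognize"), "do": ("movie", "pause")},
    ],
}

def dispatch(document, source_id, event):
    """Fire the actions that the declarative links attach to an input event."""
    return [link["do"] for link in document["links"]
            if link["on"] == (source_id, event)]

print(dispatch(document, "voice_stop", "onRecognize"))   # [('movie', 'pause')]
```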


14.
In contrast to a traditional setting where users express queries against the database schema, we assert that the semantics of data can often be understood by viewing the data in the context of the user interface (UI) of the software tool used to enter the data. That is, we believe that users will understand the data in a database by seeing the labels, drop-down menus, tool tips, or other help text that are built into the user interface. Our goal is to allow domain experts with little technical skill to understand and query data. In this paper, we present our GUi As View (Guava) framework and describe how we use forms-based UIs to generate a conceptual model that represents the information in the user interface. We then describe how we generate a query interface from the conceptual model. We characterize the resulting query language using a subset of the relational algebra. Since most application developers want to craft a physical database to meet desired performance needs, we present a transformation channel that can be configured by instantiating one or more of our transformation operators. The channel, once configured, automatically transforms queries from our query interface into queries that address the underlying physical database and delivers query results that conform to our query interface. In this paper, we define and formalize our database transformation operators. The contributions of this paper are, first, that we demonstrate the feasibility of creating a query interface based directly on the user interface and, second, that we introduce a general-purpose database transformation channel that will likely shorten the application development process and increase the quality of the software by automatically generating software artifacts that are often made manually and are prone to errors.
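
The sketch below illustrates the idea of a configured transformation channel in miniature: a query phrased against the UI-level conceptual model is rewritten by one operator to address a physical schema. The query representation, the vertical-partition operator and all table and field names are assumptions made for this example, not Guava's formal operators.

```python
# A UI-level query expressed against the conceptual model derived from the form.
ui_query = {"select": ["patient_name", "visit_date"], "from": "PatientForm"}

# One transformation operator in the channel: the conceptual entity was
# physically split across two tables, so rewrite the query accordingly.
def apply_vertical_partition(query, entity, tables, key):
    if query["from"] != entity:
        return query
    rewritten = dict(query)
    rewritten["from"] = f"{tables[0]} JOIN {tables[1]} USING ({key})"
    return rewritten

# A channel is just a configured sequence of such operators.
channel = [lambda q: apply_vertical_partition(
    q, "PatientForm", ("patient", "visit"), "patient_id")]

physical_query = ui_query
for op in channel:
    physical_query = op(physical_query)
print(physical_query)
```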

15.
16.

17.
This paper proposes an evaluation method based on perceptual control, combining psychology, traditional evaluation methods and perceptual control theory, and puts forward new evaluation criteria. The method first evaluates the statefulness and shared operability of the interface, and only then its effectiveness and efficiency, with the aim of making the evaluated user interface meet usability requirements in pervasive environments.

18.
Motivation: The ability to directly trace how requirements are implemented in a software system is crucial in domains that require a high level of trust (e.g. medicine, law, crisis management). This paper describes an approach that allows a high level of traceability to be achieved with model-driven engineering supported by automated reasoning. The paper introduces a novel, automated user interface synthesis in which a set of requirements is automatically translated into a working application. It is presented as a generalization of current state-of-the-art model-driven approaches from the conceptual perspective, and the concrete implementation is discussed together with its advantages, such as the alignment of business logic with the application and ease of adaptability. The paper also presents how a high level of traceability can be obtained if runtime support for automated reasoning over models is applied. Results: We have defined the Automated Reasoning-Based User Interface (ARBUI) approach and implemented a framework for application programming that follows our definition. The framework, called Semantic MVC, is based on model-driven engineering principles enhanced with W3C standards for the semantic web. We present the general architecture and main ideas underlying our approach and framework. Finally, we present a practical application of the Semantic MVC that we created in the medical domain as a Clinical Decision Support System for GIST cancer, in cooperation with the Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology in Warsaw. The discussed expert system allows the expert to directly modify the executable knowledge on the fly, making the overall system cost-effective.

19.
Formal approaches to software development require that we correctly describe (or specify) systems in order to prove properties about our proposed solution prior to building it. We must then follow a rigorous process to transform our specification into an implementation to ensure that the properties we have proved are retained. Different transformation, or refinement, methods exist for different formal methods, but they all seek to ensure that we can guide the transformation in a way which preserves the desired properties of the system. Refinement methods also allow us to subsequently compare two systems to see if a refinement relation exists between the two. When we design and build the user interfaces of our systems we are similarly keen to ensure that they have certain properties before we build them. For example, do they satisfy the requirements of the user? Are they designed with known good design principles and usability considerations in mind? Are they correct in terms of the overall system specification? However, when we come to implement our interface designs we do not have a defined process to follow which ensures that we maintain these properties as we transform the design into code. Instead, we rely on our judgement, our belief that we are doing the right thing, and subsequent user testing to ensure that our final solution remains usable and satisfactory. We suggest an alternative approach, which is to define a refinement process for user interfaces that allows us to maintain the same rigorous standards we apply to the rest of the system when we implement our user interface designs.
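
One way to make the notion of a refinement relation concrete is trace inclusion between an abstract UI specification and a concrete design, sketched very simply below; the transition-system encoding and the example states are invented for illustration and are not the authors' refinement method.

```python
def traces(transitions, start, depth):
    """All event sequences of at most `depth` steps from `start`."""
    result, frontier = {()}, [((), start)]
    for _ in range(depth):
        nxt = []
        for trace, state in frontier:
            for event, target in transitions.get(state, []):
                t = trace + (event,)
                result.add(t)
                nxt.append((t, target))
        frontier = nxt
    return result

def refines(concrete, abstract, start="s0", depth=4):
    """Concrete refines abstract if every concrete trace is also an abstract trace."""
    return traces(concrete, start, depth) <= traces(abstract, start, depth)

# Abstract spec: after editing, the user may keep editing or save.
abstract = {"s0": [("edit", "s1")], "s1": [("edit", "s1"), ("save", "s0")]}
# Concrete design: the user saves after each edit; every behaviour is allowed by the spec.
concrete = {"s0": [("edit", "s1")], "s1": [("save", "s0")]}
print(refines(concrete, abstract))   # True
```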

20.