20 similar documents found; search time: 22 ms
2.
When we begin to build and interact with machines or robots that either look like humans or have human functionalities and capabilities, people may well interact with these human-like machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it similarly to how humans interact with other creatures with faces. Specifically, a human might talk to it, gesture to it, smile at it, and so on. If a human interacts with a computer or a machine that understands spoken commands, the human might converse with the machine, expecting it to have competence in spoken language. In our research on a multimodal interface to mobile robots, we have assumed a model of communication and interaction that, in a sense, mimics how people communicate. Our interface therefore incorporates both natural language understanding and gesture recognition as communication modes. We limited the interface to these two modes to simplify integrating them and to make our research more tractable. We believe that with an integrated system, the user is less concerned with how to communicate (which interactive mode to employ for a task) and is therefore free to concentrate on the tasks and goals at hand. Because we integrate all our system's components, users can choose any combination of our interface's modalities. The onus is on our interface to integrate the input, process it, and produce the desired results.
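The integration the abstract describes can be illustrated with a minimal fusion sketch: a deictic spoken command ("go there") is resolved against a pointing gesture. The command grammar, the `Gesture` type, and the action tuples are assumptions for illustration, not the authors' actual interface.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Gesture:
    kind: str                    # e.g. "point"
    target: Tuple[float, float]  # coordinates the user pointed at

def fuse(utterance: str, gesture: Optional[Gesture]):
    """Combine speech and gesture into one action.

    A deictic utterance ('go there') needs a pointing gesture to be
    resolved; commands like 'stop' need no gesture at all.
    """
    words = utterance.lower().split()
    if "there" in words:
        if gesture is None or gesture.kind != "point":
            return ("clarify", None)   # ask the user to point
        return ("go", gesture.target)
    if "stop" in words:
        return ("stop", None)
    return ("unknown", None)
```

The point of the sketch is the one the abstract makes: the burden of combining modalities sits in the interface (`fuse`), not with the user.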
3.
The Internet revolution was largely due to a killer interface based on HTML. Now intranets are poised to trigger the next paradigm in Web interfaces. Industrial applications in particular require a shift in orientation from pushing data to users to presenting them with sophisticated process-based interfaces. The specific theme of this paper is process-oriented user interfaces as opposed to data-oriented interfaces.
4.
The first program I wrote on a salary was scientific calculation software in Fortran. As I was writing, I noticed that the code running the primitive menu system differed in style from the code carrying out the calculations. So I separated the routines for these tasks, which paid off when I was asked to create higher-level tasks that did several of the individual menu steps. I could just write a routine that called the calculation routines directly without involving the menus. Thus, I learned for myself a design principle that has served me well in software development: keep your user interface code separate from everything else. It's a simple rule, embodied in more than one application framework, but it's often not followed, which causes quite a bit of trouble.
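The principle generalizes beyond Fortran; a minimal sketch (function names are illustrative) shows the payoff the author describes: a higher-level task can call the calculation routines directly because no menu code is tangled into them.

```python
def scaled_sum(values, factor):
    """Pure calculation routine: no I/O, no menus."""
    return sum(v * factor for v in values)

def mean(values):
    """Another pure calculation routine."""
    return sum(values) / len(values)

def run_batch(values, factor):
    """Higher-level task: calls the calculation routines directly,
    bypassing the menu layer entirely."""
    return {"scaled_sum": scaled_sum(values, factor),
            "mean": mean(values)}

def menu_loop():
    """UI layer: the only place that prints or reads input."""
    choice = input("1) scaled sum  2) mean  3) batch > ")
    data = [1.0, 2.0, 3.0]
    if choice == "1":
        print(scaled_sum(data, 2.0))
    elif choice == "2":
        print(mean(data))
    else:
        print(run_batch(data, 2.0))
```

Because `menu_loop` is the only function that touches input/output, the calculation routines can be reused, tested, or driven by a new interface without change.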
5.
This paper describes the user interface facilities of the ECLIPSE integrated project support environment. This interface is based on a consistent metaphor called the ‘control panel’ metaphor and includes standard help and message-handling systems. The paper describes these as well as some of the interface standards which have been developed. The interface has been implemented on top of the ‘applications interface’, which provides a portable, hardware-independent interface for software tools.
6.
The evaluation of the usability and the learnability of a computer system may be performed with predictive models during the design phase. It may be done on the executable code as well as by observing the user in action. In the latter case, data collected in vivo must be processed. The goal is to provide software support for performing this difficult and time-consuming task. The paper presents an early analysis of, and experience relating to, the automatic evaluation of multimodal user interfaces. To this end, a generic Wizard of Oz platform has been designed to allow the observation and automatic recording of subjects' behavior while they interact with a multimodal interface. It is then shown how recorded data can be analyzed to detect behavioral patterns, and how deviations of such patterns from a data-flow-oriented task model can be exploited by a software usability critic.
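One analysis step of the kind the abstract mentions can be sketched as comparing a recorded action sequence against the sequence the task model expects, reporting deviations for a usability critic. The event names and the positional comparison are assumptions for illustration, not the paper's actual algorithm.

```python
def deviations(recorded, expected):
    """Return (index, expected_action, actual_action) for each mismatch.

    Actions the user never performed are reported with None as the
    actual action, so a critic can flag both wrong and missing steps.
    """
    issues = []
    for i, want in enumerate(expected):
        got = recorded[i] if i < len(recorded) else None
        if got != want:
            issues.append((i, want, got))
    return issues
```

A real critic would also handle reordered and repeated actions; this positional diff only shows where such an analysis plugs in.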
7.
User interface design and coding can be complex and messy. We describe a system that uses code search to simplify and automate the exploration of such code. We start with a simple sketch of the desired interface along with a set of keywords describing the application context. If necessary, we convert the sketch into a scalable vector graphics diagram. We then use existing code search engines to find results based on the keywords. We look for potential Java-based graphical user interface solutions within those results and apply a series of code transformations to the solutions to generate derivative solutions, aiming to get solutions that constitute only the user interface and that will compile and run. We run the resultant solutions and compare the generated interfaces to the user’s sketches. Finally, we let programmers interact with the matched solutions and return the running code for the solutions they choose. The system is useful for exploring alternatives to the initial interface and for examining graphical user interfaces in a code repository.
8.
This article defines a quantitative usability goal that is cheap to measure for a business application system used by casual users. The article also describes a cost-effective method for attaining the goal. The goal is to eliminate all user interface disasters (UIDs) in a given system. UIDs are usability problems that seriously annoy users, or prevent them from accomplishing their work without help from a human being. The method consists of a series of simple user tests without audio or video recording, and with little analysis after each user test. The article concludes by describing the results of applying the method at Baltica to a medium-size business application for casual users.
9.
Historically, research in the Multimedia community has focused on output modalities, through studies on timing and multimedia processing. The Multimodal Interaction community, on the other hand, has focused on user-generated modalities, through studies on Multimodal User Interfaces (MUI). In this paper, aiming to assist the development of multimedia applications with MUIs, we propose the integration of concepts from those two communities in a unique high-level programming framework. The framework integrates user modalities, both user-generated (e.g., speech, gestures) and user-consumed (e.g., audiovisual, haptic), in declarative programming languages for the specification of interactive multimedia applications. To illustrate our approach, we instantiate the framework in the NCL (Nested Context Language) multimedia language. NCL is the declarative language for developing interactive applications for Brazilian Digital TV and an ITU-T Recommendation for IPTV services. To help evaluate our approach, we discuss a usage scenario and implement it as an NCL application extended with the proposed multimodal features. Also, we compare the expressiveness of the multimodal NCL against existing multimedia and multimodal languages, for both input and output modalities.
10.
The author discusses enhanced robustness for three multimodal interface types: speech and pen, speech and lip movements, and multibiometric (physiological and behavioral) input.
12.
In contrast to a traditional setting where users express queries against the database schema, we assert that the semantics of data can often be understood by viewing the data in the context of the user interface (UI) of the software tool used to enter the data. That is, we believe that users will understand the data in a database by seeing the labels, drop-down menus, tool tips, or other help text that are built into the user interface. Our goal is to allow domain experts with little technical skill to understand and query data. In this paper, we present our GUi As View (Guava) framework and describe how we use forms-based UIs to generate a conceptual model that represents the information in the user interface. We then describe how we generate a query interface from the conceptual model. We characterize the resulting query language using a subset of the relational algebra. Since most application developers want to craft a physical database to meet desired performance needs, we present here a transformation channel that can be configured by instantiating one or more of our transformation operators. The channel, once configured, automatically transforms queries from our query interface into queries that address the underlying physical database and delivers query results that conform to our query interface. In this paper, we define and formalize our database transformation operators. The contributions of this paper are, first, a demonstration of the feasibility of creating a query interface based directly on the user interface and, second, a general-purpose database transformation channel that will likely shorten the application development process and increase software quality by automatically generating artifacts that are often written manually and are prone to errors.
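The core idea, querying through UI labels rather than schema names, can be sketched as follows. The label-to-column mapping, table name, and criteria are hypothetical; in Guava this mapping would come from the conceptual model and the configured transformation channel, not a hand-written dictionary.

```python
# Assumed mapping from UI labels to physical column names; it stands in
# for the transformation channel described in the abstract.
LABEL_TO_COLUMN = {
    "Patient name": "p_name",
    "Admission date": "adm_dt",
}

def build_query(table, criteria):
    """Translate {ui_label: value} criteria into parameterized SQL.

    The user only ever sees the labels from the data-entry form; the
    physical column names never surface in the query interface.
    """
    clauses, params = [], []
    for label, value in criteria.items():
        clauses.append(f"{LABEL_TO_COLUMN[label]} = ?")
        params.append(value)
    where = " AND ".join(clauses) if clauses else "1=1"
    return f"SELECT * FROM {table} WHERE {where}", params
```

The selection built here corresponds to the relational-algebra subset the paper mentions: each criterion contributes one conjunct of a selection predicate.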
13.
Interface templates are a new pattern for interface design. This paper proposes the concept of interface templates based on interface models, realizing the transformation from abstract interfaces to concrete interfaces within interface development methods that support automatic interface generation. It discusses the composition and representation of interface templates, their classification, and the architecture of an interface template library, and illustrates how interface templates are used.
14.
Motivation: The ability to directly trace how requirements are implemented in a software system is crucial in domains that require a high level of trust (e.g. medicine, law, crisis management). This paper describes an approach that allows a high level of traceability to be achieved with model-driven engineering supported by automated reasoning. The paper introduces a novel, automated user interface synthesis in which a set of requirements is automatically translated into a working application. The approach is presented as a generalization of current state-of-the-art model-driven approaches, both from the conceptual perspective and in its concrete implementation, and its advantages, such as the alignment of business logic with the application and ease of adaptability, are discussed. It also presents how a high level of traceability can be obtained if runtime support for automated reasoning over models is applied. Results: We have defined the Automated Reasoning-Based User Interface (ARBUI) approach and implemented a framework for application programming that follows our definition. The framework, called Semantic MVC, is based on model-driven engineering principles enhanced with W3C standards for the semantic web. We present the general architecture and main ideas underlying our approach and framework. Finally, we present a practical application of the Semantic MVC that we created in the medical domain as a Clinical Decision Support System for GIST cancer in cooperation with the Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology in Warsaw. The discussed expert system allows the expert to directly modify the executable knowledge on the fly, making the overall system cost-effective.
15.
Formal approaches to software development require that we correctly describe (or specify) systems in order to prove properties about our proposed solution prior to building it. We must then follow a rigorous process to transform our specification into an implementation to ensure that the properties we have proved are retained. Different transformation, or refinement, methods exist for different formal methods, but they all seek to ensure that we can guide the transformation in a way which preserves the desired properties of the system. Refinement methods also allow us to subsequently compare two systems to see if a refinement relation exists between the two. When we design and build the user interfaces of our systems we are similarly keen to ensure that they have certain properties before we build them. For example, do they satisfy the requirements of the user? Are they designed with known good design principles and usability considerations in mind? Are they correct in terms of the overall system specification? However, when we come to implement our interface designs we do not have a defined process to follow which ensures that we maintain these properties as we transform the design into code. Instead, we rely on our judgement, our belief that we are doing the right thing, and subsequent user testing to ensure that our final solution remains usable and satisfactory. We suggest an alternative approach, which is to define a refinement process for user interfaces that allows us to maintain the same rigorous standards we apply to the rest of the system when we implement our user interface designs.
18.
Coupling mobile devices and other remote interaction technology with software systems surrounding the user enables building interactive environments under explicit user control. The realization of explicit interaction in ubiquitous or pervasive computing environments introduces a physical distribution of input devices and technology embedded into the environment of the user. To fulfill the requirements of emerging trends in mobile interaction, common approaches for system design need adaptations and extensions. This paper presents the adaptation and extension of the Model-View-Controller approach to design applications with remote, complementary, duplicated and detached user interface elements.
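The extended Model-View-Controller idea can be sketched with an observer-style model that keeps several attached views (local, remote, duplicated) in sync and lets a view detach. Class and method names are illustrative, not the paper's actual design.

```python
class Model:
    """Holds the application state and notifies all attached views."""

    def __init__(self):
        self._value = 0
        self._views = []

    def attach(self, view):
        """A remote or duplicated view registers for updates."""
        self._views.append(view)

    def detach(self, view):
        """A 'detached' UI element stops receiving updates."""
        self._views.remove(view)

    def set_value(self, value):
        self._value = value
        for view in self._views:   # duplicated views all stay in sync
            view.render(value)

class View:
    """Minimal view: remembers the last value it was asked to show."""

    def __init__(self, name):
        self.name = name
        self.shown = None

    def render(self, value):
        self.shown = value
```

The physical distribution the abstract describes would replace the direct `render` call with a network message, but the attach/detach lifecycle stays the same.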
19.
Software developers face many difficult decisions when building new applications, not the least of which is the design of the graphical user interface. The answer to one question (is it better to use a GUI development tool or build the interface manually?) is relatively straightforward. Today's tools offer several benefits that manual coding does not. Because these tools often provide a simple graphical interface for developing displays, nonprogrammers and human factors engineers can contribute their expertise. Also, if the schedule permits, a tool can be used to build prototypes throughout the development cycle; some tools even provide a test/prototype mode for testing displays without compiling and executing the entire application. And finally, end users can evaluate each prototype and provide feedback, increasing their satisfaction with the final product.