Similar Documents
20 similar documents found (search time: 218 ms)
1.
This paper describes an empirical study conducted to gain a deeper understanding of the challenges the visually impaired community faces when accessing the Web. The study, involving 30 blind and partially sighted computer users, identified navigation strategies and perceptions of page layout and graphics when using assistive devices such as screen readers. Analysis of the data revealed that current assistive technologies impose navigational constraints and provide limited information on web page layout. Conveying additional spatial information could enhance the exploration process for visually impaired Internet users, and could also assist collaboration between blind and sighted users performing web-based tasks. The findings from the survey have informed the development of a non-visual interface that uses multimodal technologies to present spatial and navigational cues to the user.

2.
The overall quality of haptic user interfaces designed to support visually impaired students' science learning through sensory feedback was systematically studied to investigate task performance and user behavior. Fourteen 6th- to 11th-grade students with visual impairments, recruited from a state-funded school for the blind, were asked to perform three main tasks (menu selection, structure exploration, and force recognition) using haptic user interfaces and a haptic device. The study used several dependent measures in three categories: (a) task performance, including success rate, workload, and task completion time; (b) user behavior, defined as cursor movements proportionately represented from the user's cursor positional data; and (c) user preference. Results showed that interface type has significant effects on task performance, user behavior, and user preference, with varying degrees of impact on participants with severe visual impairments performing the tasks. The results, together with a set of refined design guidelines and principles, should inform future research on haptic user interfaces for haptically enhanced science learning systems for the visually impaired.
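The cursor-movement measure described above, the proportion of positional samples falling in different interface regions, reduces to a simple tally over logged positions. A minimal sketch, assuming rectangular regions and (x, y) cursor samples; all names and the region layout are illustrative, not the study's instrumentation:

```python
def movement_proportions(samples, regions):
    """samples: list of (x, y) cursor positions; regions: name -> (x0, y0, x1, y1).

    Returns the share of samples falling inside each region. Purely
    illustrative; the study's actual regions and log format are not given."""
    counts = {name: 0 for name in regions}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break  # count each sample in the first matching region only
    total = len(samples) or 1
    return {name: c / total for name, c in counts.items()}

# Hypothetical layout: a menu strip on the left and a main canvas.
regions = {"menu": (0, 0, 100, 600), "canvas": (100, 0, 800, 600)}
print(movement_proportions([(50, 50), (300, 200), (400, 10)], regions))
# {'menu': 0.333..., 'canvas': 0.666...}
```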

3.
This paper investigates the addition of spatial auditory feedback as a tool to assist people with visual impairments in the use of computers, specifically in tasks involving iconic visual search. In this augmented interface, unique sounds were mapped to visual icons on the screen. As the cursor traversed the screen, the user heard the sounds of nearby icons rendered spatially according to each icon's position relative to the cursor. A software prototype of the design was developed to evaluate users' performance when searching for icons within the proposed interface. Experiments were conducted with volunteer participants under simulated visual impairments to evaluate whether the addition of spatial auditory feedback makes the interface more accessible to users with impaired vision. Results demonstrated that spatialization of icon sounds provides additional remote navigational information to users, enabling new strategies for task completion. Directions for future research are discussed and prioritized.
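One way such proximity-based spatialization can be realized is to derive a stereo pan and gain from each icon's offset to the cursor. The sketch below is a minimal illustration under that assumption; the function name, the equal-power panning scheme and the cut-off radius are all ours, not the prototype's documented design:

```python
import math

def spatialize_icon(cursor_xy, icon_xy, max_radius=300.0):
    """Return (left_gain, right_gain) for an icon's sound, based on its
    position relative to the screen cursor. Purely illustrative."""
    dx = icon_xy[0] - cursor_xy[0]
    dy = icon_xy[1] - cursor_xy[1]
    dist = math.hypot(dx, dy)
    if dist > max_radius:            # icon too far away: silent
        return 0.0, 0.0
    gain = 1.0 - dist / max_radius   # louder when closer to the cursor
    pan = max(-1.0, min(1.0, dx / max_radius))  # -1 = far left, +1 = far right
    # Equal-power panning keeps perceived loudness steady across positions.
    left = gain * math.cos((pan + 1) * math.pi / 4)
    right = gain * math.sin((pan + 1) * math.pi / 4)
    return left, right

# An icon 100 px to the right of the cursor sounds louder in the right ear.
print(spatialize_icon((500, 400), (600, 400)))
```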

4.
Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface that exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human-computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (haptic feedback and audio feedback). Our results show that participants perform faster in Fitts' law pointing tasks with the haptic feedback interface than with the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings serve as an initial step toward helping visually impaired people easily access information on large public displays using haptic interfaces.
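For context, Fitts' law models movement time as MT = a + b * log2(D/W + 1), and the regression mentioned above fits the constants a and b from observed trials. A minimal least-squares sketch with made-up sample data (the numbers are illustrative only, not the paper's results):

```python
import numpy as np

# Fitts' law: MT = a + b * ID, where ID = log2(D / W + 1),
# D = distance to the target and W = target width.
D = np.array([100, 200, 400, 800, 800], dtype=float)   # px (hypothetical)
W = np.array([50, 50, 50, 50, 25], dtype=float)        # px (hypothetical)
MT = np.array([0.62, 0.81, 0.97, 1.21, 1.44])          # s  (hypothetical)

ID = np.log2(D / W + 1.0)
A = np.vstack([np.ones_like(ID), ID]).T                # design matrix [1, ID]
(a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)        # least-squares fit
print(f"MT = {a:.3f} + {b:.3f} * ID  (throughput = {1 / b:.2f} bits/s)")
```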

5.
AudioGPS: Spatial Audio Navigation with a Minimal Attention Interface
In this paper we describe a prototype spatial audio user interface for the Global Positioning System (GPS). The interface is designed to allow mobile users to carry out location tasks while their eyes, hands or attention are otherwise engaged. Audio user interfaces for GPS have typically been designed to meet the needs of visually impaired users and generally, though not exclusively, employ speech audio. In contrast, our prototype system uses a simple form of non-speech, spatial audio. This paper analyses various candidate audio mappings for location and distance information; a variety of tasks, design considerations, design trade-offs and opportunities are considered. The findings from pilot empirical testing are reported. Finally, opportunities for improving the system and for future evaluation are explored.
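A common family of mappings for this kind of non-speech GPS audio pans a tone toward the waypoint's bearing and speeds up its repetition as distance shrinks. The sketch below illustrates that idea; the specific encoding and clamping values are our assumptions, not necessarily what AudioGPS used:

```python
import math

def waypoint_cue(user_heading_deg, bearing_deg, distance_m):
    """Map a waypoint's bearing and distance to (pan, pulses per second).
    Illustrative encoding: pan follows the relative bearing, and the tone
    repeats faster as the user gets closer."""
    rel = (bearing_deg - user_heading_deg + 180) % 360 - 180  # -180..180 deg
    pan = math.sin(math.radians(rel))                 # -1 = left, +1 = right
    rate = max(0.5, min(8.0, 400.0 / max(distance_m, 1.0)))  # pulses/s
    return pan, rate

# 50 m away, 45 degrees to the right: panned right, repeating 8 times/s.
print(waypoint_cue(0.0, 45.0, 50.0))
```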

6.
Increasingly, computers are becoming tools of communication, information exploration and study for young people, regardless of their abilities. Scientists have been building knowledge on how blind people can substitute hearing or touch for sight, and how the combination of senses, i.e., multimodality, can provide the user with an effective way of exploiting the power of computers. Evaluation of such multimodal user interfaces in the right context, i.e., with appropriate users, tasks, tools and environment, is essential to give designers accurate feedback on blind users' needs. This paper presents a study on how young blind people use computers for everyday tasks with the aid of assistive technologies, aiming to understand what hinders them when interacting with a computer using individual senses, and what supports them. A common assistive technology is a screen reader, producing output through a speech synthesizer or a Braille display. The two modes are often used together, but this research studied how visually impaired students interact with computers using each form separately, i.e., a speech synthesizer or a Braille display. A usability test was performed to assess blind grade-school students' ability to carry out common tasks with the help of a computer, including solving mathematical problems, navigating the web, communicating by e-mail and using word processing. During the usability tests, students were allowed to use either the auditory mode or the tactile mode. Although blind users most commonly use a speech synthesizer (audio), the results indicate that this was not always the most suitable modality. While the effectiveness of the Braille display (tactile user interface) in accomplishing certain tasks was similar to that of the audio user interface, the users' satisfaction rate was higher. The contribution of this work lies in answering two research questions by analysing two modes of interaction (tactile and speech) across tasks of varying genres, i.e., web searching, collaboration through e-mail, word processing and mathematics. A second contribution is the classification of observations into four categories: usability and accessibility, software fault, cognitive mechanism and learning method. Observations, practical recommendations and open research problems are then presented and discussed, providing a framework for similar studies in the future. A third contribution is the elaboration of practical recommendations for user interface designers and a research agenda for scientists.

7.
Although a large amount of research has been conducted on building interfaces for the visually impaired that allow users to read web pages and to generate and access information on computers, little development addresses two problems faced by blind users. First, sighted users can rapidly browse and select information they find useful; second, sighted users can make much useful information portable through the recent proliferation of personal digital assistants (PDAs). Neither possibility is currently available to blind users. This paper describes an interface built on a standard PDA that allows its user to browse the information stored on it through a combination of screen touches coupled with auditory feedback. The system also supports the storage and management of personal information, so that addresses, music, directions and other supportive information can be readily created and then accessed anytime and anywhere by the PDA user. The paper describes the system along with the related design choices and design rationale. A user study is also reported.

8.
Design paradigms often ignore the diverse goals users bring to the computer interface. Any human-computer interaction can be viewed as a marriage of two systems: the user begins the interaction by formulating an information goal, and the computer software meets that goal with a sometimes complex list of potential topic areas. The user then accesses that topic list through the computer interface. Part of accessing the topic list is selecting a potential topic, an action often supported by a menu interface. Although research is pervasive on how best to organize menu items to facilitate learning, search speed, and reduced selection errors, little has been done to examine the impact of different types of user goals or cues on a menu's effectiveness. In a study using three distinct cues (direct match, synonym, and iconic) and two menu organizations (alphabetical and functional), the data suggest that (a) the functional menu is more effective than the alphabetical menu for the synonym and iconic cues, (b) learning occurs with both menu designs (i.e., selection speed increases rapidly across the five trial blocks), and (c) users make fewer errors with the functionally organized menu. The results, in general, encourage more rigorous investigation of the interaction between the tasks users bring to menu interfaces and the optimal design of those menus.

9.
This paper describes a user study on interaction with a mobile device installed in a driving simulator. Two new auditory interfaces were proposed, and their effectiveness and efficiency were compared to a standard visual interface. Both auditory interfaces consisted of spatialized auditory cues representing individual items in the hierarchical structure of the menu. In the first auditory interface, all items of the current menu level were played simultaneously; in the second, only one item was played at a time. The visual interface was shown on a small in-vehicle LCD screen on the dashboard. In all three cases, a custom-made interaction device (a scrolling wheel and two buttons) attached to the steering wheel was used for controlling the interface. Driving performance, task completion times, perceived workload and overall user satisfaction were evaluated. The experiment showed that both auditory interfaces were effective to use in a mobile environment, but not faster than the visual interface. For shorter tasks, e.g. changing the active profile or deleting an image, task completion times were comparable across all interfaces; however, driving performance was significantly better and perceived workload lower when using the auditory interfaces. The test subjects also reported high overall satisfaction with the auditory interfaces, labelling them easier to use, more satisfying and more adequate for the required tasks than the visual interface. These results are not surprising, since the visual interface competes for visual attention with the primary task (driving the car) far more strongly than the auditory interfaces do. So although both types of interface proved effective, the visual interface was less efficient because it strongly distracted the user from the primary task.
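The simultaneous-playback design above implies assigning each item of the current menu level its own spatial position, so the listener can tell concurrent items apart. A minimal sketch of such an assignment, with the frontal-arc layout and all names assumed for illustration rather than taken from the paper:

```python
def menu_azimuths(items, spread_deg=120.0):
    """Assign each menu item an azimuth in a frontal arc, e.g. for HRTF or
    stereo-pan rendering. Illustrative, not the paper's exact layout."""
    n = len(items)
    if n == 1:
        return {items[0]: 0.0}
    step = spread_deg / (n - 1)   # spread items evenly across the arc
    return {item: -spread_deg / 2 + i * step for i, item in enumerate(items)}

print(menu_azimuths(["Profiles", "Messages", "Gallery", "Settings"]))
# {'Profiles': -60.0, 'Messages': -20.0, 'Gallery': 20.0, 'Settings': 60.0}
```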

10.
As the advantages and disadvantages of graphical user interfaces are still controversial, this study focuses on an empirical comparison of a desktop graphical user interface (GUI) and a conventional user interface with menu selection (CUI). A total of 24 users (six novices and six experts with the GUI; six novices and six experts with the CUI) were given 20 benchmark tasks. Apart from a 1.5 h introduction given by the investigator, the beginners had little or no previous experience with electronic data processing, while the experts had roughly 3,700 h (desktop) or 7,500 h (menu selection) of previous experience, respectively. For both beginners and experts, the results showed a statistically significant superiority of the desktop GUI with mouse over the conventional user interface with menu selection and function keys (CUI). Averaged across all tasks, the GUI experts needed 51% less time to complete the tasks than the CUI experts. Moreover, a significant interaction was found between tasks and user interfaces in the context of the GUI.

11.
To access interactive systems, blind users can leverage their auditory sense through non-speech sounds. The structure of existing non-speech sounds, however, is geared toward conveying atomic operations at the user interface (e.g., opening a file) rather than evoking the broader, theme-based content typical of educational material (e.g., a historical event). To address this problem, we investigate audemes, a new category of non-speech sounds whose semiotic structure and flexibility open new horizons for aural interaction with content-rich applications. Three experiments with blind participants examined the attributes of an audeme that most facilitate accurate recognition of its meaning. A sequential concatenation of different sound types (music, sound effect) yielded the highest meaning recognition, whereas an overlapping arrangement of sounds of the same type (music, music) yielded the lowest. We discuss seven guidelines for designing well-formed audemes.
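To make the two arrangements concrete: treating each sound part as a raw sample array, sequential concatenation plays the parts one after another, while an overlapping arrangement mixes them on top of each other. A toy sketch under that representation (purely illustrative, not the study's audio pipeline):

```python
import numpy as np

def sequential(parts):
    """Concatenate sound parts one after another (the best-recognized form)."""
    return np.concatenate(parts)

def overlapping(parts):
    """Mix parts on top of each other, zero-padded to equal length
    (the worst-recognized form in the study)."""
    n = max(len(p) for p in parts)
    padded = [np.pad(p, (0, n - len(p))) for p in parts]
    return np.sum(padded, axis=0) / len(parts)   # average to avoid clipping

# Stand-in waveforms; real audemes would be recorded music/sound effects.
music = np.random.default_rng(0).normal(size=22050)    # 0.5 s at 44.1 kHz
effect = np.random.default_rng(1).normal(size=11025)   # 0.25 s
print(sequential([music, effect]).shape)    # (33075,)
print(overlapping([music, effect]).shape)   # (22050,)
```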

12.
This paper introduces a novel interface designed to help blind and visually impaired people explore and navigate the Web. In contrast to traditionally used assistive tools, such as screen readers and magnifiers, the new interface employs a combination of audio and haptic features to provide spatial and navigational information to users. The haptic features are presented via a low-cost force-feedback mouse, allowing blind people to interact with the Web in a fashion similar to their sighted counterparts. The audio provides navigational and textual information through non-speech sounds and synthesised speech. Interacting with the multimodal interface offers a novel experience to target users, especially those with total blindness. A series of experiments was conducted to ascertain the usability of the interface and compare its performance to that of a traditional screen reader. Results have shown the advantages the new multimodal interface offers blind and visually impaired people, including enhanced perception of the spatial layout of Web pages and navigation towards elements on a page. Issues regarding the design of the haptic and audio features raised in the evaluation are discussed and presented as recommendations for future work.

13.
This paper presents the design and evaluation of a hypermedia system for blind users, making use of a non-visual interface, non-speech sounds, three input devices, and a 37-node hypermedia module. The important components of an effective auditory interface are discussed, together with the design of the auditory interface to the hypermedia material. The evaluation, which was conducted over several weeks and used a range of complementary objective and subjective measures to assess users' performance and preferences, is described, and the findings from the evaluation with nine visually impaired student participants are presented. The results from this research can be applied to the design and evaluation of other non-visual hypermedia systems, such as auditory World Wide Web (WWW) browsers and digital talking books.

14.
D. Griffith, Human Factors, 1990, 32(4): 467-475
Suitably adapted computers hold considerable potential for integrating people who are blind or visually impaired into the mainstream. The principal problems that preclude the achievement of this potential are human factors issues. These issues are discussed, and the problems presented by icon-based interfaces are reviewed. An argument is offered that these issues, which ostensibly pertain to blind or visually impaired users, are fundamental issues confronting all users. There is reason to hope that the benefits of research into the human factors issues of people with vision impairments will also extend to sighted users.

15.
While progress on assistive technologies has been made, some blind users still face several problems opening and using basic functionalities when interacting with touch interfaces. People with visual impairments may also have problems navigating autonomously, without personal assistance, especially in unknown environments. This paper presents a complete solution for managing the basic functions of a smartphone and for guiding users with a wayfinding application, so that a blind user could, for example, travel from home to work autonomously using an adaptable wayfinding application on a smartphone. The wayfinding application combines text, map, auditory and tactile feedback to convey information. Eighteen visually impaired users tested the application. Preliminary results from this study show that blind users and users with limited vision can effectively use the wayfinding application without help. The evaluation also confirms the usefulness of extending the vibration feedback to convey distance information as well as directional information. The validation was successful on both iOS and Android devices.
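Extending vibration feedback to carry distance as well as direction could be encoded along the following lines. This is a hedged sketch in which the pulse-count and gap scheme are our assumptions, not the tested application's actual design:

```python
def vibration_pattern(direction, distance_m):
    """Return a list of (on_ms, off_ms) vibration pulses. Illustrative scheme:
    the pulse count encodes direction, the gap encodes remaining distance."""
    pulses = {"left": 1, "right": 2, "straight": 3}[direction]
    # Closer waypoint -> shorter gaps -> a more urgent rhythm.
    gap_ms = int(min(1000, max(150, distance_m * 10)))
    return [(80, gap_ms)] * pulses

# 20 m to the next turn, which is to the right: two pulses, 200 ms apart.
print(vibration_pattern("right", 20))   # [(80, 200), (80, 200)]
```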

16.
J.D. Reichbach and R.A. Kemmerer, Computer, 1992, 25(3): 25-37
SoundWorks, an object-oriented distributed system that lets users interactively manipulate sound through a graphical interface, is discussed. The system handles digitally sampled sounds as well as sounds generated by software and by digital signal processing hardware. An overview is presented of the different types of sounds and window interfaces provided by SoundWorks and of the operations that modify these sounds. The paper describes the high-level architecture and design of the SoundWorks system, the protocol defined between the user interface code and the client application, and the specification of the sound kernel, which manages sounds and lines, performs operations on sounds, and interfaces to the digital hardware. The NeWS application programming environment, which provided the primitive graphic items needed for a graphical window-based interface and offered an object-oriented approach to developing SoundWorks, is also discussed.

17.
Auditory interfaces can outperform visual interfaces when a primary task, such as driving, competes for the attention of a user controlling a device, such as a radio. In emerging interfaces enabled by camera tracking, auditory displays may also provide viable alternatives to visual displays. This paper presents a user study of interoperable auditory and visual menus, in which control gestures remain the same in the visual and the auditory domain. The tested control methods included a novel free-hand gesture interaction with camera-based tracking, and touch-screen interaction with a tablet. The participants' task was to select numbers from a visual or an auditory menu with either a circular layout or a numeric keypad layout. Results show that, even with the participants' full attention on the task, the performance and accuracy of the auditory interface are the same as or even slightly better than those of the visual one when controlled with free-hand gestures. The auditory menu was measured to be slower in touch-screen interaction, but the questionnaire revealed that over half of the participants felt the circular auditory menu was faster than the visual menu. Furthermore, visual and auditory feedback in touch-screen interaction with the numeric layout was measured to be fastest, the touch screen with the circular menu second fastest, and the free-hand gesture interface slowest. The results suggest that auditory menus can potentially provide a fast and desirable interface for controlling devices with free-hand gestures.
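A circular menu of the kind tested maps an input angle, taken from a hand or touch position relative to the menu centre, onto an item sector. A minimal hit-test sketch, with all names and the sector layout assumed for illustration rather than drawn from the paper:

```python
import math

def circular_menu_item(items, x, y, cx=0.0, cy=0.0):
    """Select the item whose angular sector contains the point (x, y),
    measured around the menu centre (cx, cy). Illustrative hit test."""
    angle = math.degrees(math.atan2(y - cy, x - cx)) % 360  # 0..360 deg
    sector = 360.0 / len(items)                             # equal sectors
    return items[int(angle // sector)]

digits = [str(d) for d in range(10)]
print(circular_menu_item(digits, 1.0, 0.1))  # just above the +x axis -> '0'
```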

18.
In this paper, we compare four different auditory displays in a mobile audio-augmented-reality environment (a sound garden). The auditory displays varied in their use of non-speech audio (Earcons) as auditory landmarks and of 3D audio spatialization; the goal was to test the user experience of discovery in a purely exploratory environment with multiple simultaneous sound sources. We present quantitative and qualitative results from an initial user study conducted in the Municipal Gardens of Funchal, Madeira. Results show that spatial audio together with Earcons allowed users to explore multiple simultaneous sources, with the added benefit of increasing the level of immersion in the experience. In addition, spatial audio encouraged a more exploratory and playful response to the environment. An analysis of the participants' logged data suggested that the level of immersion can be related to increased instances of stopping and scanning the environment, which can be quantified in terms of walking speed and head movement.
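The quantification mentioned at the end, walking speed and head movement derived from logged data, reduces to simple statistics over consecutive log samples. A sketch under assumed log fields (timestamp, planar position, head yaw), not the study's actual schema:

```python
import math

def immersion_metrics(log):
    """log: list of (t_s, x_m, y_m, head_yaw_deg) samples.
    Returns mean walking speed (m/s) and mean absolute head turn rate
    (deg/s), the two quantities the study relates to stop-and-scan
    behaviour. Illustrative only."""
    speeds, yaw_rates = [], []
    for (t0, x0, y0, h0), (t1, x1, y1, h1) in zip(log, log[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
        dyaw = (h1 - h0 + 180) % 360 - 180   # wrapped yaw difference
        yaw_rates.append(abs(dyaw) / dt)
    return sum(speeds) / len(speeds), sum(yaw_rates) / len(yaw_rates)

# Hypothetical trace: low speed with high yaw rate ~ stopping to scan.
log = [(0, 0, 0, 0), (1, 1.2, 0, 5), (2, 1.3, 0, -40), (3, 1.35, 0, 30)]
print(immersion_metrics(log))
```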

19.
Visually impaired people lack proper user interfaces that would allow them to easily make use of modern technology. This problem may be addressed with multimodal user interfaces designed with the type and degree of disability in mind. The purpose of the study presented in this article was to create usable games for visually impaired children using low-cost vibro-tactile devices in multimodal applications. A tactile memory game using multimodal navigation support, with high-contrast visual feedback and audio cues, was implemented. The game was designed to be played with a tactile gamepad: different vibrations were to be remembered instead of the sounds or embossed pictures common in memory games for blind children. The usability and playability of the game were tested with a group of seven 12-13-year-old visually impaired children. The results showed that the game design was successful and the tactile gamepad was usable. The game received a positive response from the focus group.
