Similar Documents
20 similar documents found (search time: 125 ms)
1.
This paper introduces a novel interface designed to help blind and visually impaired people explore and navigate the Web. In contrast to traditionally used assistive tools, such as screen readers and magnifiers, the new interface employs a combination of audio and haptic features to provide spatial and navigational information to users. The haptic features are presented via a low-cost force feedback mouse, allowing blind people to interact with the Web in a similar fashion to their sighted counterparts. The audio provides navigational and textual information through non-speech sounds and synthesised speech. Interacting with the multimodal interface offers a novel experience to target users, especially those with total blindness. A series of experiments was conducted to ascertain the usability of the interface and compare its performance to that of a traditional screen reader. Results have shown the advantages the new multimodal interface offers blind and visually impaired people, including enhanced perception of the spatial layout of Web pages and navigation towards elements on a page. Issues regarding the design of the haptic and audio features raised in the evaluation are discussed and presented as recommendations for future work.

2.
Usability tests are part of user-centered design, and usability testing with disabled people is necessary if they are among the potential users. Several researchers have investigated usability methods with sighted people, but research with blind users remains insufficient, for example because of their different knowledge of assistive technologies and the difficulty of analyzing usability issues from inspection of the non-visual output of assistive devices. The authors therefore extend theory and practice by investigating four usability methods involving blind, visually impaired and sighted people: local tests, synchronous remote tests, tactile paper prototyping and computer-based prototyping. In terms of the effectiveness of evaluation and the experience of participants and the facilitator, local tests were compared with synchronous remote tests, and tactile paper prototyping with computer-based prototyping. The comparison of local and synchronous remote tests found that the number of usability problems uncovered in different categories was comparable with both approaches. In terms of task completion time, there is a significant difference for blind participants, but not for the visually impaired and sighted. Most of the blind and visually impaired participants preferred the local test. The comparison of tactile paper prototyping and computer-based prototyping revealed that tactile paper prototyping provides a better overview of an application, while interaction with computer-based prototypes is closer to reality. Problems in planning and conducting these methods, as they arise in particular with blind people, are also discussed. Based on the authors' experiences, recommendations are provided for dealing with these problems from both technical and organizational perspectives.

3.
Visually impaired users require web page interfaces that they can interact with. Providing such interfaces is necessary to serve this community, and is required by government regulation. This paper describes the unique web access requirements of visually impaired users. It reviews surveys of the suitability of web sites for the partially sighted and blind. Most web sites do not meet the requirements of the UK and other governments. In addition, serving the visually impaired is a marketing opportunity for organizations, as there is an ever increasing population with potential sight problems.

4.
The Web has evolved into a dominant digital medium for conducting many types of online transactions such as shopping, paying bills, making travel plans, etc. Such transactions typically involve a number of steps spanning several Web pages. For sighted users these steps are relatively straightforward to do with graphical Web browsers. But they pose tremendous challenges for visually impaired individuals. This is because screen readers, the dominant assistive technology used by visually impaired users, function by speaking out the screen’s content serially. Consequently, using them for conducting transactions can cause considerable information overload. But usually one needs to browse only a small fragment of a Web page to do a step of a transaction (e.g., choosing an item from a search results list). Based on this observation this paper presents a model-directed transaction framework to identify, extract and aurally render only the “relevant” page fragments in each step of a transaction. The framework uses a process model to encode the state of the transaction and a concept model to identify the page fragments relevant for the transaction in that state. We also present algorithms to mine such models from click stream data generated by transactions and experimental evidence of the practical effectiveness of our models in improving user experience when conducting online transactions with non-visual modalities.
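The interplay of the two models described in this abstract can be illustrated with a minimal sketch. The state table, keyword table and fragment-matching rule below are hypothetical stand-ins (the paper mines its actual models from click-stream data); the sketch only shows how a process model tracks the transaction state while a concept model filters a page down to the fragments worth reading aloud.

```python
# Hypothetical process model: current state -> {action: next state}.
PROCESS_MODEL = {
    "search": {"select_item": "item_detail"},
    "item_detail": {"add_to_cart": "cart"},
    "cart": {"checkout": "done"},
}

# Hypothetical concept model: keywords marking fragments relevant in each state.
CONCEPT_MODEL = {
    "search": ["result", "price"],
    "item_detail": ["add to cart", "description"],
    "cart": ["checkout", "total"],
}

def relevant_fragments(state, fragments):
    """Keep only the page fragments matching the concept model for the
    current transaction state, so a screen reader narrates far less."""
    keywords = CONCEPT_MODEL.get(state, [])
    return [f for f in fragments if any(k in f.lower() for k in keywords)]

def advance(state, action):
    """Move the process model to the next state; unknown actions keep state."""
    return PROCESS_MODEL.get(state, {}).get(action, state)
```

In this toy version, a cart page reduces to its checkout button and total line, while footer links and other clutter are never spoken.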

5.
Understanding the content of a Web page and navigating within and between pages are crucial tasks for any Web user. To those who are accessing pages through non-visual means, such as screen readers, the challenges offered by these tasks are not easily overcome, even when pages are unchanging documents. The advent of ‘Web 2.0’ and Web applications, however, means that documents often are not static, but update, either automatically or due to user interaction. This development poses a difficult question for screen reader designers: how should users be notified of page changes? In this article we introduce rules for presenting such updates, derived from studies of how sighted users interact with them. An implementation of the rules has been evaluated, showing that users who were blind or visually impaired found updates easier to deal with than the relatively quiet way in which current screen readers often present them.

6.
While progress on assistive technologies has been made, some blind users still face problems opening and using basic functions when interacting with touch interfaces. People with visual impairments may also have problems navigating autonomously, without personal assistance, especially in unknown environments. This paper presents a complete solution for managing the basic functions of a smartphone and guiding users with a wayfinding application, so that a blind user could, for example, travel from home to work autonomously using an adaptable wayfinding application on a smartphone. The wayfinding application combines text, map, auditory and tactile feedback to provide information. Eighteen visually impaired users tested the application. Preliminary results from this study show that blind and low-vision users can effectively use the wayfinding application without help. The evaluation also confirms the usefulness of extending the vibration feedback to convey distance information as well as directional information. The validation was successful on iOS and Android devices.

7.
Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface which exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human-computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (i.e., haptic feedback and audio feedback). Our results show that participants perform faster in Fitts' law pointing tasks using the haptic feedback interface rather than the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings will serve as an initial step to assist visually impaired people to easily access required information on large public displays using haptic interfaces.
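The MT-ID regression mentioned in this abstract can be sketched in a few lines. The Shannon formulation of the index of difficulty is the standard one in Fitts' law studies; the numbers in the test data are illustrative, not the paper's measurements.

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation: ID = log2(D/W + 1), in bits,
    # where D is movement distance and W is target width.
    return math.log2(distance / width + 1)

def fit_fitts(ids, times):
    # Ordinary least-squares fit of the Fitts' law model MT = a + b * ID.
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(times) / n
    b = sum((x - mean_id) * (y - mean_mt) for x, y in zip(ids, times))
    b /= sum((x - mean_id) ** 2 for x in ids)
    a = mean_mt - b * mean_id
    return a, b  # intercept (s) and slope (s/bit)
```

Comparing the fitted slope b between conditions (here, haptic versus audio feedback) is a common way such interfaces are contrasted: a smaller slope means the interface scales better to harder targets.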

8.
Large interactive displays have become ubiquitous in our everyday lives, but these displays are designed for the needs of sighted people. In this paper, we specifically address assisting people with visual impairments to aim at a target on a large wall-mounted display. We introduce a novel haptic device, which explores the use of vibrotactile feedback in blind user search strategies on a large wall-mounted display. Using mid-air gestures aided by vibrotactile feedback, we compared three target-aiming techniques: Random (baseline) and two novel techniques – Cruciform and Radial. The results of our two experiments show that visually impaired participants can find a target significantly faster with the Cruciform and Radial techniques than with the Random technique. In addition, they can retrieve information on a large display about twice as fast by augmenting speech feedback with haptic feedback in using the Radial technique. Although a large number of studies have been done on assistive interfaces for people who have visual impairments, very few studies have been done on large vertical display applications for them. In a broader sense, this work will be a stepping-stone for further research on interactive large public display technologies for users who are visually impaired.

9.
One of the major barriers to the social inclusion of blind persons is limited access to graphics-based learning resources, which are highly vision oriented. This paper presents a cost-effective tool which facilitates comprehension and creation of virtual directed graphs, such as flowcharts, using the alternate modalities of audio and touch. It provides a physically accessible virtual spatial workspace and a multimodal interface to represent directed graphs non-visually and interactively. The concept of spatial query is used to aid exploration and mental visualization through audio and tactile feedback. A unique aspect of the tool, named DiGVis, is that it offers compatible representations of directed graphs for sighted and non-sighted persons. A study with 28 visually challenged subjects indicates that the tool facilitates comprehension of the layout and directional connectivity of elements in a virtual diagram. Further, in a pilot study, blind persons could independently comprehend a virtual flowchart layout and its logical steps, and were also able to create the flowchart data without sighted assistance using DiGVis. A comparison with sighted subjects using DiGVis for a similar task demonstrates the effectiveness of the technique for inclusive education.

10.
Globally, the number of visually impaired people is large and increasing. Many assistive technologies are being developed to help them, but users still have difficulty with assistive technologies developed from a technology-driven perspective. This study applied a user-centered perspective to gain a different, and hopefully deeper, understanding of the interaction experience. More specifically, it focused on identifying the unique interaction experiences of visually impaired people when they use a camera application on a smartphone. Twenty participants conducted usability testing using the retrospective think-aloud technique. The unique interaction experiences of visually impaired people with the camera application, and the relevant implications for designing assistive technologies, were analyzed. Relevance to industry: the considerations for conducting usability testing and the results of this study are expected to contribute to the design and evaluation of new smartphone-based assistive technologies.

11.
12.
This paper provides an overview of a project aimed at using knowledge-based technology to improve accessibility of the Web for visually impaired users. The focus is on the multi-dimensional components of Web pages (tables and frames); our cognitive studies demonstrate that spatial information is essential in comprehending tabular data, an aspect largely overlooked in the existing literature. Our approach addresses these issues by using explicit representations of the navigational semantics of documents and a domain-specific language to query the semantic representation and derive navigation strategies. Navigational knowledge is explicitly generated and associated with the tabular and multi-dimensional HTML structures of documents. This semantic representation provides the blind user with an abstract representation of the layout of the document; the user can then issue commands from the domain-specific language to access and traverse the document according to its abstract layout. Published online: 6 November 2002

13.
D. Griffith, Human Factors, 1990, 32(4): 467–475
Suitably adapted computers hold considerable potential for integrating people who are blind or visually impaired into the mainstream. The principal problems that preclude the achievement of this potential are human factors issues. These issues are discussed, and the problems presented by icon-based interfaces are reviewed. An argument is offered that these issues, which ostensibly pertain to the blind or visually impaired user, are fundamental issues confronting all users. There is reason to hope that the benefits of research into the human factors issues of people with vision impairments will also extend to the sighted user.

14.
Although a large amount of research has been conducted on building interfaces that allow visually impaired users to read web pages and to generate and access information on computers, little development addresses two gaps faced by blind users. First, sighted users can rapidly browse and select information they find useful; second, sighted users can make much useful information portable through the recent proliferation of personal digital assistants (PDAs). Neither possibility is currently available to blind users. This paper describes an interface built on a standard PDA that allows its user to browse the information stored on it through a combination of screen touches coupled with auditory feedback. The system also supports the storage and management of personal information, so that addresses, music, directions and other supportive information can be readily created and then accessed anytime and anywhere by the PDA user. The paper describes the system along with the related design choices and design rationale. A user study is also reported.

15.

Visually impaired individuals often rely on assistive technologies such as white canes for independent navigation. Many electronic enhancements to the traditional white cane have been proposed, but only a few of these proof-of-concept technologies have been tested with authentic users, as most studies rely on blindfolded non-visually impaired participants or no participant testing at all, and experiments involving blind users are usually not contrasted with the traditional white cane. This study set out to compare an ultrasound-based electronic cane with a traditional white cane. We also compared the performance of a group of visually impaired participants (N = 10) with a group of blindfolded participants without visual impairments (N = 31). The results show that walking speed with the electronic cane is significantly slower than with the traditional white cane, and that the participants without visual impairments are significantly slower than the visually impaired participants. No significant differences in obstacle detection rates were observed across participant groups and device types for obstacles on the ground, while 79% of the hanging obstacles were detected by the electronic cane. The results thus suggest that electronic canes present only one advantage over the traditional cane, namely the ability to detect hanging obstacles, at least without prolonged practice, and that blindfolded participants are insufficient substitutes for blind participants who are expert cane users. The implication of this study is that research into digital white cane enhancements should include blind participants, followed over time in longitudinal experiments to document whether practice leads to improvements that surpass the performance achieved with traditional canes.
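Ultrasound-based canes like the one tested here rely on time-of-flight ranging. The sketch below is generic physics rather than the paper's implementation: it converts a round-trip echo time to an obstacle distance, using the standard linear approximation for the speed of sound in air.

```python
def echo_distance_m(echo_time_s, temp_c=20.0):
    # Speed of sound in air, linear approximation: c ≈ 331.3 + 0.606 * T (m/s).
    speed = 331.3 + 0.606 * temp_c
    # The ultrasonic pulse travels to the obstacle and back,
    # so halve the round-trip time.
    return speed * echo_time_s / 2.0
```

At 20 °C a 10 ms echo corresponds to an obstacle roughly 1.7 m away, which is about the range at which a cane-mounted sensor would need to trigger feedback.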


16.
The Google search engine was studied as a Web prototype to be modified and improved for blind users. A Specialized Search Engine for the Blind (SSEB) was developed with an accessible interface and improved functions (search-assistance functions, user-centered functions, and a specialized design for the blind). An experiment was conducted with twelve participants, both blind and sighted, to verify the effects of SSEB. Performance was better with SSEB than with the Google search engine, and participants also reported higher satisfaction with SSEB. Interface considerations for designing an accessible Web site for blind users are important. The users of SSEB could in the future be expanded to include most, if not all, visually impaired people, since the World Wide Web and all Internet resources should ideally be accessible to everyone.

17.
18.
Visually impaired people are a vulnerable group in society and face many obstacles to traveling independently; providing them with safe and reliable assistive devices reflects the progress of social civilization. This paper surveys the key obstacle detection and recognition technologies and the path-planning algorithms relevant to assisted travel for the visually impaired. It focuses on analyzing the path-planning algorithms applied after obstacle detection, compares the application characteristics and scenarios of the various techniques, and discusses the research progress of these methods in assistive devices for the visually impaired. The current state of multi-technology fusion in intelligent assistive devices is summarized. On this basis, and in light of advances in artificial intelligence and embedded devices, future directions for travel-assistance devices for the visually impaired are outlined.

19.
People with visual disabilities face many difficulties and barriers when using computers and the Internet. Such people need the help of IT developers to create adaptive technologies that facilitate their interaction with computers and the Internet. This paper presents the design and implementation of an Arabic Braille environment (ABE) and exposes its functionality and unique features. The ABE is designed to facilitate Arabic-speaking visually impaired people's interaction with computers, as well as to help sighted users communicate with the visually impaired. Copyright © 2003 John Wiley & Sons, Ltd.

20.
Touch-based interaction is becoming increasingly popular and is commonly used as the main interaction paradigm for self-service kiosks in public spaces. Touch-based interaction is known to be visually intensive, and current non-haptic touch-display technologies are often criticized as excluding blind users. This study set out to demonstrate that touch-based kiosks can be designed to include blind users without compromising the user experience for non-blind users. Most touch-based kiosks are based on absolute positioned virtual buttons which are difficult to locate without any tactile, audible or visual cues. However, simple stroke gestures rely on relative movements and the user does not need to hit a target at a specific location on the display. In this study, a touch-based train ticket sales kiosk based on simple stroke gestures was developed and tested on a panel of blind and visually impaired users, a panel of blindfolded non-visually impaired users and a control group of non-visually impaired users. The tests demonstrate that all the participants managed to discover, learn and use the touch-based self-service terminal and complete a ticket purchasing task. The majority of the participants completed the task in less than 4 min on the first attempt.
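The advantage of simple stroke gestures described in this abstract is that they are classified from relative movement, so no target at a fixed screen position needs to be hit. A minimal sketch of such a classifier follows; the 10-pixel tap threshold and the four-direction vocabulary are assumptions for illustration, not the kiosk's actual gesture set.

```python
def classify_stroke(points):
    # points: list of (x, y) touch samples in screen coordinates
    # (y grows downward). Classifies by the dominant axis of the total
    # displacement; very short strokes are treated as taps.
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < 10 and abs(dy) < 10:  # tap threshold in pixels (assumed)
        return "tap"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

Because only the start and end of the stroke matter, a blind user can begin the gesture anywhere on the display, which is exactly what makes this scheme discoverable without visual cues.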
