     


Sign Language to Sentence Formation: A Real Time Solution for Deaf People
Authors: Muhammad Sanaullah, Muhammad Kashif, Babar Ahmad, Tauqeer Safdar, Mehdi Hassan, Mohd Hilmi Hasan, Amir Haider
Affiliations:
1. Department of Computer Science, Bahauddin Zakariya University, Multan 60000, Pakistan
2. Department of Computer Science, Air University, Multan 60000, Pakistan
3. Department of Computer Science, Air University, Islamabad 44000, Pakistan
4. Centre for Research in Data Science, Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
5. Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
Abstract: Communication is a basic need of every human being for exchanging thoughts and interacting with society. Hearing people usually communicate through spoken languages, whereas deaf people cannot; Sign Language (SL) is therefore their medium of conversation and interaction with society. In SL, every word is expressed by a specific gesture, and a gesture consists of a sequence of performed signs. Observers must recognize the difference between single and multiple gestures, which correspond to singular and plural words respectively. The signs for singular words such as "I", "eat", "drink", and "home" are unlike those for plural words such as "schools", "cars", and "players". Special training is required to gain sufficient knowledge and practice to differentiate and understand every gesture/sign appropriately. Numerous studies have produced computer-based solutions that recognize a single gesture performed with a single hand. Complete understanding of such communication, however, is possible only if a computer-based SL solution can differentiate between these kinds of gestures in real-world environments. Hence, there is still a need for a system that automates such communication with this community. This research focuses on facilitating the deaf community by capturing gestures in video format and then mapping and differentiating them as the single or multiple gestures used in words. Finally, these gestures are converted into the corresponding words/sentences within a reasonable time, providing a real-time solution for deaf people to communicate and interact with society.
Keywords: sign language; machine learning; convolutional neural network; image processing; deaf community
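The abstract describes a pipeline of capturing gestures from video, classifying each gesture, and joining the recognized words into a sentence. A minimal sketch of that pipeline structure is shown below; the vocabulary, function names, and the trivial stand-in classifier are all illustrative assumptions, not the paper's actual CNN-based method.

```python
from typing import List

# Hypothetical gesture vocabulary: gesture id -> recognized word.
# A real system would learn this mapping from labeled sign-language video.
GESTURE_VOCAB = {
    0: "I",
    1: "drink",
    2: "water",
}

def classify_frames(frames: List[List[int]]) -> List[int]:
    """Stand-in for the gesture classifier: each 'frame' is reduced to a
    gesture id by a trivial rule (its first value). The paper's approach
    would instead run a convolutional neural network over the video frames."""
    return [frame[0] for frame in frames]

def gestures_to_sentence(gesture_ids: List[int]) -> str:
    """Map recognized gesture ids to words and join them into a sentence."""
    words = [GESTURE_VOCAB[g] for g in gesture_ids if g in GESTURE_VOCAB]
    return " ".join(words) + "." if words else ""

if __name__ == "__main__":
    # Simulated video: three frames, each tagged with a gesture id.
    frames = [[0], [1], [2]]
    print(gestures_to_sentence(classify_frames(frames)))  # I drink water.
```

The two-stage split (per-frame classification, then sentence assembly) mirrors the capture-then-convert structure the abstract outlines; swapping the stub classifier for a trained CNN would not change the surrounding flow.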
