Similar Documents
3 similar documents found (search time: 0 ms)
1.
An Attribute Reduction Method for Decision Tables Based on the Information View of Rough Sets   Cited by: 2 (self-citations: 0, citations by others: 2)
Rough set theory is a mathematical framework, developed in recent years, for handling imprecise, uncertain, and vague information; it is widely applied to attribute reduction and core-attribute computation for both consistent and inconsistent decision tables. Using counterexamples, this paper points out the limitations of existing attribute-reduction and core-attribute-computation methods for decision tables based on the information view of rough sets [2, 6]. A deeper study of the properties of decision tables shows that the shortcoming of the methods in [2, 6] is that they do not account for the consistency of the equivalence classes in U/ind(C). The paper gives definitions of attribute reducts and core attributes based on the consistency of the equivalence classes in U/ind(C), and presents a new information-view-based method for computing attribute reducts and core attributes of decision tables. The differences between this method and those of [2, 6] are discussed, and the effectiveness of the method is verified on the same examples.
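As a minimal illustration of the notions this abstract builds on, the sketch below computes the equivalence classes U/ind(C) of a toy decision table and flags each class as consistent (all objects in the class share one decision value) or inconsistent. The table, attribute names, and helper functions are invented for illustration and are not taken from the paper.

```python
# Sketch: U/ind(C) equivalence classes and per-class consistency of a
# decision table. Data layout and helper names are illustrative only.
from collections import defaultdict

def equivalence_classes(universe, condition_attrs):
    """Partition U by identical values on the condition attributes C."""
    classes = defaultdict(list)
    for obj_id, row in universe.items():
        key = tuple(row[a] for a in condition_attrs)
        classes[key].append(obj_id)
    return list(classes.values())

def is_consistent(eq_class, decisions):
    """A class is consistent if all its objects map to one decision value."""
    return len({decisions[o] for o in eq_class}) == 1

# Toy decision table: objects x1..x4, condition attributes a, b, decision d.
U = {
    "x1": {"a": 0, "b": 1},
    "x2": {"a": 0, "b": 1},
    "x3": {"a": 1, "b": 0},
    "x4": {"a": 1, "b": 1},
}
d = {"x1": "yes", "x2": "no", "x3": "yes", "x4": "yes"}

classes = equivalence_classes(U, ["a", "b"])
flags = {tuple(c): is_consistent(c, d) for c in classes}
# x1 and x2 agree on (a, b) but disagree on d, so their class is inconsistent;
# this is exactly the situation the paper argues must be accounted for.
```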

2.
Evaluation techniques play an important role in picking a suitable segmentation scheme out of a number of alternatives. In this paper, a novel supervised segmentation evaluation scheme is proposed that is designed by combining segment area and boundary information. Using the evaluation metric, a ranking of the popular segmentation algorithms is carried out. A comparative analysis with existing supervised metrics that are commonly used for grading segmentation schemes is performed. Experimental results indicate that the performance of the proposed measure is promising.
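The abstract does not specify how the area and boundary terms are combined. The sketch below shows one generic way such a supervised score could be assembled, as a weighted mix of region overlap (IoU) and boundary overlap against a ground-truth mask; the weighting, the boundary definition, and the function names are assumptions for illustration, not the paper's metric.

```python
# Sketch: a combined area + boundary supervised segmentation score.
# The 50/50 weighting and the 4-neighbour boundary definition are
# illustrative assumptions.
import numpy as np

def iou(pred, gt):
    """Area term: intersection-over-union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def boundary_map(mask):
    """Boundary pixels: foreground pixels with a background 4-neighbour."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    core = (padded[1:-1, :-2] & padded[1:-1, 2:]
            & padded[:-2, 1:-1] & padded[2:, 1:-1])
    return m & ~core

def combined_score(pred, gt, alpha=0.5):
    """Weighted mix of area overlap and boundary overlap, in [0, 1]."""
    b_pred, b_gt = boundary_map(pred), boundary_map(gt)
    b_union = np.logical_or(b_pred, b_gt).sum()
    b_term = (np.logical_and(b_pred, b_gt).sum() / b_union) if b_union else 1.0
    return alpha * iou(pred, gt) + (1 - alpha) * b_term
```

A perfect segmentation scores 1.0, and a shifted one is penalized on both terms; real evaluation metrics of this family typically also add a distance tolerance on the boundary match.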

3.
In this paper, we propose a stratified gesture recognition method that integrates rough set theory with the longest common subsequence method to classify free-air gestures, for natural human–computer interaction. Gesture vocabularies are often composed of gestures that are highly correlated or comprise gestures that are a proper part of others. This reduces the accuracy of most classifiers if no further actions are taken. In this paper, gestures are encoded in orientation segments, which facilitates their analysis and reduces the processing time. To improve the accuracy of gesture recognition on ambiguous gestures, we generate rough set decision tables conditioned on the longest common subsequences; the decision tables store discriminative information on ambiguous gestures. We efficiently perform stratified gesture recognition in two steps: first a gesture is classified into its equivalence class, under a predefined rough set indiscernibility relation, and then it is recognized using the normalized longest common subsequence paired with rough set decision tables. Experimental results show an improvement of the recognition rate over the longest common subsequence alone: on pre-isolated gestures, we achieve improvements of 6.06% and 15.09%, and on stream gestures 19.79% and 28.4%, on digit and alphabet gesture vocabularies, respectively.
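The normalized longest common subsequence at the core of the second recognition step can be sketched as follows. The orientation-code alphabet, the normalization by the longer string's length, and the nearest-template classifier are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: normalized LCS (NLCS) similarity between two gesture strings of
# orientation codes, plus a simple nearest-template classifier on top of it.
def lcs_length(a, b):
    """Classic O(len(a) * len(b)) dynamic program for LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def nlcs(a, b):
    """Similarity in [0, 1]: LCS length divided by the longer string's length."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))

def classify(templates, gesture):
    """Assign the label whose template string maximizes NLCS with the gesture."""
    return max(templates, key=lambda label: nlcs(templates[label], gesture))
```

For highly correlated gestures (one being a proper subsequence of another) this similarity alone becomes ambiguous, which is what motivates the paper's additional rough set decision tables.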
