Similar Literature
20 similar documents found (search time: 15 ms)
1.
In this paper, the concepts of knowledge granulation, knowledge entropy and knowledge uncertainty measure are introduced in ordered information systems, and some of their important properties are investigated. These properties show that the measures provide important approaches to evaluating the discernibility ability of different knowledge in ordered information systems. The relationships among knowledge granulation, knowledge entropy and knowledge uncertainty measure are also considered. As an application of knowledge granulation, we introduce the definition of the rough entropy of rough sets in ordered information systems. An example shows that rough entropy measures the roughness of rough sets in ordered information systems more accurately than the classical rough degree.
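The measures in this abstract are built from dominance classes rather than equivalence classes. As a rough illustration (not the paper's own formulation), the sketch below computes dominance classes over an invented toy table and a coverage-style granulation measure; both the data and the exact formula are assumptions.

```python
# Toy ordered information system: two criteria, larger values are better.
# The table and the coverage-style granulation formula are illustrative
# assumptions, not the paper's own definitions.
U = ['u1', 'u2', 'u3', 'u4']
table = {
    'u1': (1, 2),
    'u2': (2, 2),
    'u3': (2, 3),
    'u4': (1, 1),
}

def dominance_class(u):
    """Objects whose values are >= u's on every criterion."""
    return [v for v in U
            if all(a >= b for a, b in zip(table[v], table[u]))]

n = len(U)
# Knowledge granulation as the mean relative dominance-class size:
# finer knowledge (smaller classes) gives smaller granulation.
GK = sum(len(dominance_class(u)) for u in U) / (n * n)
```

On this table, u4 is dominated by every object, so its class is all of U, while u3 dominates only itself; GK averages these class sizes into a single discernibility score.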

2.
Rough set theory is a relatively new mathematical tool for computer applications in circumstances characterized by vagueness and uncertainty. Rough set theory uses a table called an information system, and knowledge is defined as classifications of an information system. In this paper, we introduce the concepts of information entropy, rough entropy, knowledge granulation and granularity measure in incomplete information systems, give their important properties, and establish the relationships among these concepts. The relationship between the information entropy E(A) and the knowledge granulation GK(A) of knowledge A can be expressed as E(A) + GK(A) = 1, and the relationship between the granularity measure G(A) and the rough entropy E_r(A) of knowledge A as G(A) + E_r(A) = log2|U|. The conclusions of Liang and Shi (2004) are special instances of the results in this paper. Furthermore, two inequalities about the measures GK, G, E and E_r are obtained: -log2 GK(A) ≤ G(A) and E_r(A) ≤ log2(|U|(1 - E(A))). These results are helpful for understanding the essence of uncertainty measurement, assessing the significance of an attribute, constructing the heuristic function in a heuristic reduct algorithm, and measuring the quality of a decision rule in incomplete information systems.
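The two identities above can be checked numerically. The sketch below is an illustration, not the paper's code: it assumes the common tolerance-relation formulations for incomplete systems (E, GK, G and E_r defined from tolerance-class sizes), and the toy table with '*' for missing values is invented.

```python
from math import log2

# Toy incomplete information system; '*' marks a missing value.
U = ['u1', 'u2', 'u3', 'u4', 'u5']
table = {
    'u1': ('a', 'x'),
    'u2': ('a', '*'),
    'u3': ('b', 'y'),
    'u4': ('*', 'y'),
    'u5': ('b', 'y'),
}

def tolerance_class(u):
    """Objects indiscernible from u: every known attribute value agrees."""
    return [v for v in U
            if all(p == q or '*' in (p, q)
                   for p, q in zip(table[u], table[v]))]

n = len(U)
classes = {u: tolerance_class(u) for u in U}

# Information entropy E(A) and knowledge granulation GK(A).
E  = sum((1 / n) * (1 - len(classes[u]) / n) for u in U)
GK = sum(len(classes[u]) for u in U) / (n * n)

# Granularity measure G(A) and rough entropy E_r(A).
G  = sum((1 / n) * log2(n / len(classes[u])) for u in U)
Er = sum((1 / n) * log2(len(classes[u])) for u in U)

assert abs(E + GK - 1) < 1e-9        # E(A) + GK(A) = 1
assert abs(G + Er - log2(n)) < 1e-9  # G(A) + E_r(A) = log2|U|
```

Under these definitions the identities hold for any table, since each object contributes exactly 1/n to E + GK and log2(n)/n to G + E_r.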

3.
As two classical measures, approximation accuracy and consistency degree can be employed to evaluate the decision performance of a decision table. However, these two measures cannot give elaborate depictions of the certainty and consistency of a decision table when their values are equal to zero. To overcome this shortcoming, we first classify decision tables in rough set theory into three types according to their consistency and introduce three new measures for evaluating the decision performance of a decision-rule set extracted from a decision table. We then analyze how each of these three measures depends on the condition granulation and decision granulation of each of the three types of decision tables. Experimental analyses on three practical data sets show that the three new measures appear to be well suited for evaluating the decision performance of a decision-rule set and are much better than the two classical measures.

4.
5.
Uncertainty measures are an important numerical characteristic in rough set theory, but the measures proposed by Z. Pawlak, namely the traditional approximation accuracy and roughness, have limitations. Considering the causes of the roughness of a rough set, this paper combines the traditional roughness with a knowledge-content measure and proposes a new method for measuring the uncertainty of rough sets. The properties of this measure are discussed, and an example illustrates the reasonableness of the new measure and the simplicity of its computation.

6.
胡善忠  徐怡  何明慧  王冉 《计算机应用》2017,37(12):3391-3396
To address the low efficiency of existing granularity reduction algorithms for multi-granulation rough sets, an efficient algorithm for granularity reduction of multi-granulation rough sets (EAGRMRS) is proposed. First, taking a decision information system as the object, a Boolean matrix of decision-class lower approximations is defined; this matrix converts the numerous and repetitive set operations in the granularity reduction process into Boolean operations, and based on it, algorithms for computing decision-class lower approximations and for computing granularity significance are given. Then, to eliminate redundant computation when evaluating granularity significance, an algorithm for quickly computing granularity significance as granularities are dynamically added is proposed, and on this basis EAGRMRS is presented, with time complexity O(|A|·|U|² + |A|²·|U|), where |A| is the size of the granularity set and |U| is the number of instances in the decision information system. Experimental results on UCI data sets verify the effectiveness and efficiency of the proposed algorithm; as the data sets grow, the efficiency advantage of EAGRMRS over the heuristic algorithm for granularity reduction of multi-granulation rough sets (HAGSS) becomes more pronounced.

7.
Generalized rough sets based on relations
William Zhu 《Information Sciences》2007,177(22):4997-5011
Rough set theory has been proposed by Pawlak as a tool for dealing with vagueness and granularity in information systems. The core concepts of classical rough sets are the lower and upper approximations based on equivalence relations. This paper studies generalized rough sets based on arbitrary binary relations. In this setting, a binary relation can generate a lower approximation operation and an upper approximation operation, but some of the common properties of the classical lower and upper approximation operations are no longer satisfied. We investigate the conditions on a relation under which these properties hold for the relation-based lower and upper approximation operations. This paper also explores the relationships between the lower or upper approximation operation generated by the intersection of two binary relations and those generated by the two relations respectively. Through these relationships, we prove that two different binary relations will certainly generate two different lower approximation operations and two different upper approximation operations.
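As a rough sketch of these relation-based operators (the universe, relation and target set are invented for illustration), successor neighborhoods generate the lower and upper approximations, and the loss of the classical property lower(X) ⊆ X under a non-reflexive relation is easy to observe:

```python
# Lower/upper approximations generated by an arbitrary binary relation,
# via successor neighborhoods. The universe, relation and target set are
# invented for illustration.
U = {1, 2, 3, 4}
R = {(1, 2), (2, 2), (3, 4), (4, 3)}   # non-reflexive

def successors(u):
    return {y for (x, y) in R if x == u}

def lower(X):
    """Objects whose successor neighborhood lies entirely inside X."""
    return {u for u in U if successors(u) <= X}

def upper(X):
    """Objects whose successor neighborhood meets X."""
    return {u for u in U if successors(u) & X}

X = {2, 3}
# lower(X) = {1, 2, 4}: not a subset of X, so the classical property
# lower(X) ⊆ X fails here -- it requires reflexivity of R.
```

This is exactly the kind of property whose preservation conditions the paper investigates.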

8.
Generalized rough sets based on reflexive and transitive relations
In this paper, we investigate the relationship between generalized rough sets induced by reflexive and transitive relations and topologies on a universe that is not restricted to be finite. It is proved that there exists a one-to-one correspondence between the set of all reflexive and transitive relations and the set of all topologies satisfying a certain kind of compactness condition.

9.
Based on rough sets over two universes, this paper constructs L-fuzzy rough sets over two universes defined on an L-fuzzy relation, and discusses their basic properties.

10.
Decision fusion of multiple knowledge bases based on rough sets
Classification and reduction are important topics in rough set theory, the aim of which is to obtain good rule knowledge and make accurate decisions. This paper proposes the idea of decision measures for rule sets, which evaluates a rule set as a whole and lays the foundation for decision making based on multiple knowledge bases. Following the basic idea of model ensembles, each rule knowledge base is treated as a decision model; models are selected according to their rule-set measures, and decision fusion is achieved through model ensemble.

11.
This paper presents a discussion on rough set theory from the textural point of view. A texturing is a family of subsets of a given universal set U satisfying certain conditions which are generally basic properties of the power set. The suitable morphisms between texture spaces are given by direlations defined as pairs (r,R) where r is a relation and R is a corelation. It is observed that the presections are natural generalizations for rough sets; more precisely, if (r,R) is a complemented direlation, then the inverse of the relation r (the corelation R) is actually a lower approximation operator (an upper approximation operator).

12.
The generalizations of rough sets with respect to similarity relations, covers and fuzzy relations are main research topics of rough set theory. However, these generalizations have shown little connection with each other and have not been brought into a unified framework, which has limited in-depth research on, and the application of, rough set theory. In this paper, the complete completely distributive (CCD) lattice is selected as the mathematical foundation on which definitions of the lower and upper approximations, the basic concepts of rough set theory, are proposed. These definitions result from the concept of a cover introduced on a CCD lattice, and they improve the approximations of the existing crisp generalizations of rough sets with respect to similarity relations and covers. When a T-similarity relation is considered, the existing fuzzy rough sets are special cases of the proposed approximations on a CCD lattice. These generalizations of rough sets are thus brought into a unified framework, and a wider mathematical foundation for rough set theory is established.

13.
Rough set theory, initiated by Pawlak, is a mathematical tool for dealing with inexact and incomplete information. Various types of uncertainty measure, such as the accuracy measure and the roughness measure, which aim to quantify the imprecision of a rough set caused by its boundary region, have been extensively studied in the existing literature. However, few of these uncertainty measures have been explored from the viewpoint of formal rough set theory, although such measures would help to develop a graded reasoning model in the framework of rough logic. To solve this problem, a framework of uncertainty measures for formulae in rough logic is presented in this paper. Unlike the existing literature, we adopt an axiomatic approach to studying uncertainty measures in rough logic: we define the notion of rough truth degree by a set of axioms and show that this notion is adequate for measuring the extent to which any formula is roughly true. Based on this fundamental notion, the notions of rough accuracy degree and roughness degree of any formula, and of rough inclusion degree and rough similarity degree between any two formulae, are also proposed, and their properties are investigated in detail. These results will be used to develop an approximate reasoning model in the framework of rough logic from the axiomatic viewpoint.

14.
In the stock market, technical analysis is a useful method for predicting stock prices. Although professional stock analysts and fund managers usually make subjective judgments based on objective technical indicators, it is difficult for non-professionals to apply this forecasting technique because there are too many complex technical indicators to consider. Moreover, two drawbacks have been found in many past forecasting models: (1) time series models such as the autoregressive moving average (ARMA) and autoregressive conditional heteroscedasticity (ARCH) models require statistical assumptions about the variables and produce forecasting models as mathematical equations that are not easily understood by stock investors; and (2) the rules mined by some artificial intelligence (AI) algorithms, such as neural networks (NNs), are not easily interpreted.
To overcome these drawbacks, this paper proposes a hybrid forecasting model that uses multiple technical indicators to predict stock price trends. The hybrid model includes four procedures that provide efficient forecasting rules, evolved from extracted rules with high support values using a toolset based on rough set theory (RST): (1) select the essential technical indicators that are highly related to the future stock price from the popular indicators, based on a correlation matrix; (2) use the cumulative probability distribution approach (CDPA) and the minimize entropy principle approach (MEPA) to partition technical indicator values and daily price fluctuations into linguistic values, based on the characteristics of the data distribution; (3) employ an RST algorithm to extract linguistic rules from the linguistic technical indicator dataset; and (4) utilize genetic algorithms (GAs) to refine the extracted rules for better forecasting accuracy and stock return.
The effectiveness of the proposed model is verified with two types of performance evaluation, accuracy and stock return, using a six-year period of the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) as the experimental dataset. The experimental results show that the proposed model is superior to the two listed forecasting models (RST and GAs) in terms of accuracy, and the stock return evaluations reveal that the profits produced by the proposed model are higher than those of the three listed models (Buy-and-Hold, RST and GAs).

15.
Classical rough set theory characterizes a target concept through static granularity analysis, which is ill-suited to describing the dynamic cognitive process of human problem solving. Existing work has used positive approximation and converse approximation to characterize target concepts and target decisions, with successful applications to hierarchical clustering algorithms and rule extraction. Based on the principle of dynamic granulation, this paper proposes the concept of bidirectional approximation, establishes some of its important properties, and applies it to the acquisition of decision rules from decision tables.

16.
This paper studies the classes of rough sets and fuzzy rough sets. We discuss the invertible lower and upper approximations and present the necessary and sufficient conditions for the lower approximation to coincide with the upper approximation in both rough sets and fuzzy rough sets. We also study the mathematical properties of a fuzzy rough set induced by a cyclic fuzzy relation.

17.
This paper proposes an attribute reduction algorithm based on the rough-set quality of approximation. Taking the quality of approximation of a set as its iteration criterion, the algorithm starts from the full set of condition attributes and obtains a reduct by stepwise elimination, which guarantees that the reduct does not weaken the classification ability for the problem. The time complexity of the algorithm is analyzed, and an example verifies its effectiveness and practicality.
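The elimination strategy described here can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the quality of approximation is taken to be the standard gamma measure (the fraction of objects in the positive region), and the toy decision table is invented, with attribute 'c' deliberately duplicating 'a' so that one attribute is removable.

```python
# Backward-elimination reduct driven by the quality of approximation
# (gamma). Illustrative reconstruction; toy table is invented and
# attribute 'c' duplicates 'a'.
def partition(U, table, attrs):
    """Equivalence classes of the indiscernibility relation over attrs."""
    blocks = {}
    for u in U:
        blocks.setdefault(tuple(table[u][a] for a in attrs), set()).add(u)
    return list(blocks.values())

def quality(U, table, attrs, decision):
    """Fraction of objects in the lower approximation of some decision
    class (the positive region)."""
    dec_blocks = partition(U, table, [decision])
    pos = set()
    for block in partition(U, table, attrs):
        if any(block <= d for d in dec_blocks):
            pos |= block
    return len(pos) / len(U)

def reduce_backward(U, table, conds, decision):
    """Drop any condition attribute whose removal leaves gamma unchanged."""
    red = list(conds)
    base = quality(U, table, red, decision)
    for a in conds:
        rest = [b for b in red if b != a]
        if rest and quality(U, table, rest, decision) == base:
            red = rest
    return red

U = [1, 2, 3, 4]
table = {
    1: {'a': 0, 'b': 0, 'c': 0, 'd': 0},
    2: {'a': 0, 'b': 1, 'c': 0, 'd': 1},
    3: {'a': 1, 'b': 0, 'c': 1, 'd': 1},
    4: {'a': 1, 'b': 1, 'c': 1, 'd': 1},
}
red = reduce_backward(U, table, ['a', 'b', 'c'], 'd')
```

Because elimination only keeps removals that preserve gamma, the resulting reduct classifies the decision exactly as well as the full attribute set.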

18.
Intuitionistic fuzzy rough sets: at the crossroads of imperfect knowledge
Abstract: Just like rough set theory, fuzzy set theory addresses the topic of dealing with imperfect knowledge. Recent investigations have shown how both theories can be combined into a more flexible, more expressive framework for modelling and processing incomplete information in information systems. At the same time, intuitionistic fuzzy sets have been proposed as an attractive extension of fuzzy sets, enriching the latter with extra features to represent uncertainty (on top of vagueness). Unfortunately, the various tentative definitions of the concept of an ‘intuitionistic fuzzy rough set’ that were raised in their wake are a far cry from the original objectives of rough set theory. We intend to fill an obvious gap by introducing a new definition of intuitionistic fuzzy rough sets, as the most natural generalization of Pawlak's original concept of rough sets.

19.
In the present paper, we investigate the three types of Yao's lower and upper approximations of any set with respect to any similarity relation; these types are based on right neighborhoods. We also define and investigate three other types of approximations for any similarity relation, based on the intersection of right neighborhoods. Moreover, we compare these types and describe the relationships between the definitions.
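A small sketch of the contrast described above, with an invented reflexive (similarity) relation: for such a relation the intersection-based neighborhoods are never larger than the right neighborhoods, so the corresponding lower approximations can only grow.

```python
# Right neighborhoods vs. the intersection of right neighborhoods for a
# reflexive (similarity) relation; relation and target set are invented.
U = {1, 2, 3, 4}
R = {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3), (4, 4), (4, 1)}

def right_nbhd(u):
    return {y for (x, y) in R if x == u}

def intersect_nbhd(u):
    """Intersection of all right neighborhoods that contain u."""
    out = set(U)
    for x in U:
        if u in right_nbhd(x):
            out &= right_nbhd(x)
    return out

def lower(X, nbhd):
    return {u for u in U if nbhd(u) <= X}

# The intersection-based neighborhoods are finer, so the lower
# approximation of X = {1, 2} grows from {1} to {1, 2}.
```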

20.
Learning rules from incomplete training examples by rough sets
Machine learning can extract desired knowledge from existing training examples and ease the development bottleneck in building expert systems. Most learning approaches derive rules from complete data sets; if some attribute values are unknown, a data set is called incomplete, and learning from incomplete data sets is usually more difficult than learning from complete ones. Rough set theory has been widely used for data classification problems. In this paper, we deal with the problem of producing a set of certain and possible rules from incomplete data sets based on rough sets. A new learning algorithm is proposed that can simultaneously derive rules from incomplete data sets and estimate the missing values during the learning process. Unknown values are first assumed to be any possible values and are gradually refined according to the incomplete lower and upper approximations derived from the given training examples. The examples and the approximations then interact with each other to derive certain and possible rules and to estimate appropriate unknown values. The derived rules can then serve as knowledge concerning the incomplete data set.
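In this spirit, here is a minimal sketch of how certain and possible memberships fall out of tolerance-based lower and upper approximations on an incomplete table. The data, the '*' convention and the similarity test are invented for illustration; the paper's algorithm additionally refines the unknown values iteratively, which this sketch omits.

```python
# Certain vs. possible coverage on an incomplete table ('*' = unknown),
# via tolerance-class lower/upper approximations. Illustrative only.
U = [1, 2, 3, 4]
cond = {1: ('a', 'x'), 2: ('a', '*'), 3: ('b', 'y'), 4: ('b', 'y')}
dec = {1: 'yes', 2: 'yes', 3: 'no', 4: 'yes'}

def similar(u, v):
    """Objects agree wherever both values are known."""
    return all(p == q or '*' in (p, q) for p, q in zip(cond[u], cond[v]))

def tol(u):
    return {v for v in U if similar(u, v)}

def lower(X):   # certain members of X
    return {u for u in U if tol(u) <= X}

def upper(X):   # possible members of X
    return {u for u in U if tol(u) & X}

yes = {u for u in U if dec[u] == 'yes'}
# lower(yes) = {1, 2}: these objects certainly support a 'yes' rule;
# upper(yes) = {1, 2, 3, 4}: objects 3 and 4 only possibly support it.
```

Object 4 is labelled 'yes' but shares its tolerance class with object 3 ('no'), so it supports only a possible rule, which is exactly the certain/possible distinction the abstract describes.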
