Similar Documents
20 similar documents found
1.
Two objects are independent if they do not affect each other. Independence is well understood in classical information theory, but less so in algorithmic information theory. Working in the framework of algorithmic information theory, the paper proposes two types of independence for arbitrary infinite binary sequences and studies their properties. Our two proposed notions of independence have some of the intuitive properties that one naturally expects. For example, for every sequence x, the set of sequences that are independent of x has measure one. For both notions of independence we investigate to what extent pairs of independent sequences can be effectively constructed via Turing reductions (from one or more input sequences). In this respect, we prove several impossibility results. For example, it is shown that there is no effective way of producing, from an arbitrary sequence with positive constructive Hausdorff dimension, two sequences that are independent (even in the weaker type of independence) and have super-logarithmic complexity. Finally, a few conjectures and open questions are discussed.
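One natural way to formalize such a notion, given here only as an illustration and not as the paper's exact definitions, is via Kolmogorov complexity: x is independent of y when access to y does not substantially compress the prefixes of x,

    \[ C(x \restriction n \mid y) \;\ge\; C(x \restriction n) - O(\log n) \quad\text{for all } n, \]

where C denotes Kolmogorov complexity and x restricted to its first n bits is written x↾n, with the symmetric condition imposed for y relative to x.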

2.
In the theory of algorithmic randomness, one of the central notions is that of computable randomness. An infinite binary sequence X is computably random if no recursive martingale (strategy) can win an infinite amount of money by betting on the values of the bits of X. In the classical model, the martingales considered are real-valued, that is, the bets made by the martingale can be arbitrary real numbers. In this paper, we investigate a more restricted model, where only integer-valued martingales are considered, and we study the class of random sequences induced by this model.
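For orientation, the standard betting definitions run as follows (a sketch of the classical setting, not of the paper's new results): a martingale is a function d from binary strings to the nonnegative reals satisfying the fairness condition

    \[ d(w) \;=\; \frac{d(w0) + d(w1)}{2}, \]

and d succeeds on X if \(\limsup_{n\to\infty} d(X \restriction n) = \infty\); X is computably random if no computable martingale succeeds on it. The restricted model studied here additionally requires the martingale's values, and hence its bets, to be integers.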

3.
We explore in this paper the efficient clustering of market-basket data. Unlike traditional data, market-basket data are known to be of high dimensionality and sparsity. Without explicitly considering the presence of the taxonomy, most prior efforts on clustering market-basket data can be viewed as dealing with items in the leaf level of the taxonomy tree. Clustering transactions across different levels of the taxonomy is of great importance for marketing strategies as well as for the result representation of clustering techniques for market-basket data. In view of these features of market-basket data, we devise in this paper a novel measurement, called the category-based adherence, and utilize it to perform the clustering. With this category-based adherence measurement, we develop an efficient clustering algorithm, called algorithm k-todes, for market-basket data with the objective of minimizing the category-based adherence. The distance of an item to a given cluster is defined as the number of links between this item and its nearest tode. The category-based adherence of a transaction to a cluster is then defined as the average distance of the items in this transaction to that cluster. A validation model based on information gain is also devised to assess the quality of clustering for market-basket data. As validated by both real and synthetic datasets, our experimental results show that, with the taxonomy information, algorithm k-todes significantly outperforms prior works in both execution efficiency and clustering quality as measured by information gain, indicating the usefulness of category-based adherence in market-basket data clustering.
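The two definitions above translate directly into code. The following sketch is purely illustrative and uses hypothetical names and data representations (the taxonomy as an undirected adjacency map, a cluster's todes as a set of taxonomy nodes); it is not the authors' implementation of algorithm k-todes.

    from collections import deque

    def tree_distance(taxonomy, a, b):
        """Number of links on the path between nodes a and b of the taxonomy tree."""
        if a == b:
            return 0
        seen, frontier = {a}, deque([(a, 0)])
        while frontier:
            node, dist = frontier.popleft()
            for nxt in taxonomy.get(node, ()):
                if nxt == b:
                    return dist + 1
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))
        raise ValueError("nodes are not connected in the taxonomy")

    def item_to_cluster_distance(taxonomy, item, todes):
        """Distance of an item to a cluster: links to the item's nearest tode."""
        return min(tree_distance(taxonomy, item, t) for t in todes)

    def adherence(taxonomy, transaction, todes):
        """Category-based adherence: average distance of the transaction's items."""
        return sum(item_to_cluster_distance(taxonomy, i, todes)
                   for i in transaction) / len(transaction)

The clustering objective is then to assign transactions to clusters so that the average of these adherence values is minimized.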

4.
5.
6.
This paper examines attribute dependencies in data that involve grades, such as a grade to which an object is red or a grade to which two objects are similar. We thus extend the classical agenda by allowing graded, or "fuzzy", attributes instead of Boolean, yes-or-no attributes in the case of attribute implications, and by allowing approximate match based on degrees of similarity instead of exact match based on equality in the case of functional dependencies. In a sense, we move from bivalence, inherently present in the now-available theories of dependencies, to a more flexible setting that involves grades. Such a shift has far-reaching consequences. We argue that a reasonable theory of dependencies may be developed by making use of mathematical fuzzy logic, a recently developed many-valued logic. Namely, the theory of dependencies is then based on a solid logical calculus the same way classical dependencies are based on classical logic. For instance, rather than handling degrees of similarity in an ad hoc manner, we consistently treat them as truth values, the same way as true (match) and false (mismatch) are treated in classical theories. In addition, several notions intuitively embraced in the presence of grades, such as a degree of validity of a particular dependency or a degree of entailment, naturally emerge and receive a conceptually clean treatment in the presented approach. In the first part of this two-part paper, we discuss motivations, provide basic notions of syntax and semantics, and develop basic results, which include entailment of dependencies, associated closure structures, and a logic of dependencies with two versions of a completeness theorem.
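To give a concrete flavor of a "degree of validity" (a sketch in the spirit of residuated-lattice-based fuzzy logic; the paper's precise definitions may differ): a graded implication A ⇒ B over a data table can be assigned the truth degree

    \[ \|A \Rightarrow B\| \;=\; \bigwedge_{x} \big( a(x) \rightarrow b(x) \big), \]

where a(x) and b(x) are the degrees to which object x matches the antecedent and the consequent, and → is a residuated implication, e.g. the Łukasiewicz one, a → b = min(1, 1 − a + b). With crisp degrees 0 and 1 this reduces to the classical notion of an implication holding in the data.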

7.
In this paper, we further study the filter theory of residuated lattices. First, we discuss the concepts of filters and normal filters of residuated lattices and propose some equivalent conditions for them. Then we introduce and investigate the notions of v-filter and normal v-filter of a residuated lattice with a weak vt-operator and derive formulas for calculating the v-filters and normal v-filters generated by subsets. Finally, we show that the lattices of v-filters and normal v-filters of a residuated lattice with a vt-operator are both complete Brouwerian lattices.
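For orientation, the classical notion the paper builds on can be stated as follows (the standard definition; the v-filter and normal v-filter variants are specific to this paper): a nonempty subset F of a residuated lattice L is a filter if it is closed under the monoid operation (x, y ∈ F implies x ⊙ y ∈ F) and upward closed (x ∈ F and x ≤ y imply y ∈ F); equivalently, 1 ∈ F and, whenever x ∈ F and x → y ∈ F, also y ∈ F.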

8.
Offsetting is an important operation in computer-aided design, with applications in other contexts such as robot path planning and tolerance analysis. In this paper we study the local behavior of an algebraic curve under a variation of the usual offsetting construction, namely the generalized offsetting process (Sendra and Sendra, 2000a). More precisely, we discuss when and how this geometric construction may cause local changes in the shape of an algebraic curve, and we compare our results with those obtained for the case of classical offsets (Alcazar and Sendra, 2007). For these purposes, we use well-known notions of differential geometry, as well as the notion of local shape introduced in Alcazar and Sendra (2007). Our analysis shows important differences between the topological properties of classical and generalized offsets, both at regular and singular points.

9.
Recall that an integration rule is said to have trigonometric degree of exactness m if it integrates exactly all trigonometric polynomials of degree ≤ m. In this paper we focus on high dimensions, say d ≥ 6. We introduce three notions of weighted degree of exactness, where we use weights to characterize the anisotropy of the integrand with respect to successive coordinate directions. Unlike in the classical unweighted setting, the minimal number of integration points needed to achieve a prescribed weighted degree of exactness no longer grows exponentially with d, provided that the weights decay sufficiently fast. We present a component-by-component algorithm for the construction of a rank-1 lattice rule such that (i) it has a prescribed weighted degree of exactness, and (ii) its worst-case error achieves the optimal rate of convergence in a weighted Korobov space. We then introduce a modified, more practical version of this algorithm which maximizes the weighted degree of exactness in each step of the construction. Both algorithms are illustrated by numerical results.
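As background, standard facts about rank-1 lattice rules (stated as a sketch): a rule with n points and generating vector z ∈ Z^d approximates the integral of f over the unit cube by

    \[ Q_{n,z}(f) \;=\; \frac{1}{n} \sum_{k=0}^{n-1} f\!\left( \left\{ \tfrac{k z}{n} \right\} \right), \]

where the braces denote componentwise fractional parts. Such a rule integrates the trigonometric monomial \(e^{2\pi i\, h\cdot x}\) exactly unless h ≠ 0 and h·z ≡ 0 (mod n), so achieving a prescribed (weighted) degree of exactness amounts to keeping all nonzero frequency vectors h up to that degree out of the dual lattice {h : h·z ≡ 0 (mod n)}; the component-by-component algorithm chooses the entries of z one at a time so as to preserve this property.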

10.

The distribution of documents over the two classes in a binary text categorization problem is generally uneven, and resampling approaches have been shown to improve F1 scores. The improvement achieved is mainly due to the gain in recall, while precision may deteriorate. Since precision is the primary concern in some applications, achieving higher F1 scores with a desired level of trade-off between precision and recall is important. In this study, we present an analytical comparison between unanimity and majority voting rules. It is shown that the unanimity rule can provide better F1 scores than majority voting when an ensemble of high-recall but low-precision classifiers is considered. Then, category-based undersampling is proposed to generate high-recall members. The experiments conducted on three datasets show that superior F1 scores can be realized compared to the support vector machine (SVM)-based baseline system and voting over a random undersampling-based ensemble.
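The difference between the two fusion rules is easy to state in code. The sketch below is only illustrative (binary labels with 1 for the positive class; the list predictions holds the outputs of the ensemble members for one document) and is not the authors' implementation.

    def majority_vote(predictions):
        """Positive if more than half of the ensemble members predict positive."""
        return int(sum(predictions) > len(predictions) / 2)

    def unanimity_vote(predictions):
        """Positive only if every ensemble member predicts positive."""
        return int(all(predictions))

For member outputs [1, 1, 0], majority_vote returns 1 while unanimity_vote returns 0; with high-recall, low-precision members the stricter rule sacrifices some recall for a larger gain in precision, which is how it can yield higher F1 scores.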

11.
Multi-level conceptual modeling addresses the representation of subject domains dealing explicitly with multiple classification levels. Despite the recent advances in multi-level modeling techniques, we believe that the literature in multi-level conceptual modeling would benefit from a theory that: (1) formally characterizes the nature of classification levels and (2) precisely defines the structural relations that may occur between elements of different classification levels. This work aims to fill this gap by proposing an axiomatic theory that can be considered a reference top-level ontology for types in multi-level conceptual modeling. The theory provides the modeler with basic concepts and patterns to articulate domains that require multiple levels of classification as well as to inform the development of well-founded languages for multi-level conceptual modeling. The whole theory is founded on a basic instantiation relation and characterizes the concepts of individuals and types, with types organized in levels related by instantiation. Further, it includes intra-level structural relations that are used to define expressive multi-level models and cross-level relations that allow us to account for and incorporate the different notions of power type in the literature.

12.
Computers & Structures, 2006, 84(17-18): 1125-1133
A fundamental Lagrangian local strain variable for continuous media undergoing large strains, consistent with the classical Eulerian strain rate, is proposed, based on a rigorous definition of Eulerian tensors. This variable, named the metric variable, is not a tensor and its domain of variation is a curved manifold. With these premises, we build an intrinsic Lagrangian framework for the statement of constitutive laws, which casts a new, unifying light on geometric aspects of large-strain theory. This approach is always completely consistent with the Eulerian approach. Within the new theoretical framework thus obtained, the relation between this theory and various classical approaches is studied. We also examine the case of the new logarithmic rate.

13.
This note serves three purposes: (i) we provide a self-contained exposition of the fact that conjunctive queries are not efficiently learnable in the Probably-Approximately-Correct (PAC) model, paying clear attention to the complicating fact that this concept class lacks the polynomial-size fitting property, a property that is tacitly assumed in much of the computational learning theory literature; (ii) we establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs), including acyclic CQs for a wide range of notions of acyclicity; (iii) we show that CQs (and UCQs) are efficiently PAC learnable with membership queries.
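For reference, the learning model in question is the standard one (sketched here): a concept class is efficiently PAC learnable if there is a polynomial-time learner that, for every target concept in the class, every distribution D over examples, and all ε, δ > 0, after seeing polynomially many labeled examples drawn from D outputs, with probability at least 1 − δ, a hypothesis whose error under D is at most ε. In the variant with membership queries, the learner may in addition ask an oracle for the label of any example of its choosing.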

14.
In order to solve ordinary differential equations, we use an equation with the so-called logarithmic derivative. Similar equations with a linear operator permit us to define logarithmic and antilogarithmic mappings and to obtain some results previously unknown for the classical derivation operator. This method and its applications are presented in detail in the author's book [1] and may be treated as an introduction to the 21st-century logarithmo-technia (according to the original meaning of this word). The first section gives basic notions and facts of algebraic analysis (without proofs). The second section consists of the most important definitions and theorems (without proofs) concerning logarithms and antilogarithms. The third section is concerned with properties of multiplicative true shifts. The last section gives a generalization of the binomial theorem for harmonic logarithms.
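The classical special case behind the terminology (a sketch; the paper works with general linear operators rather than only d/dt): for the usual derivation D = d/dt and u > 0,

    \[ \frac{Du}{u} \;=\; D(\ln u), \]

so the linear equation Du = a·u is equivalent to D(ln u) = a and is solved by \( u(t) = u(0) \exp\big( \int_0^t a(s)\,ds \big) \). Algebraic analysis abstracts this logarithmic mapping to operators D other than classical differentiation.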

15.
Probabilistic Default Reasoning with Conditional Constraints
We present an approach to reasoning from statistical and subjective knowledge, based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show behavior similar to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have properties similar to those of their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational complexity.
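In this setting, a conditional constraint is an expression of the form (ψ|φ)[l, u], read as "the conditional probability of ψ given φ lies between l and u"; a probability distribution Pr over the possible worlds satisfies it when

    \[ \Pr(\varphi) > 0 \;\Longrightarrow\; l \le \Pr(\psi \mid \varphi) \le u . \]

The z-, lexicographic, and conditional entailment relations then rank or select the distributions consistent with such constraints together with the default knowledge (a sketch of the semantics; the paper gives the precise definitions).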

16.
MGRS: A multi-granulation rough set
The original rough set model was developed by Pawlak and is mainly concerned with the approximation of sets described by a single binary relation on the universe. From the viewpoint of granular computing, the classical rough set theory is established through a single granulation. This paper extends Pawlak's rough set model to a multi-granulation rough set model (MGRS), where the set approximations are defined using multiple equivalence relations on the universe. A number of important properties of MGRS are obtained. It is shown that some of the properties of Pawlak's rough set theory are special instances of those of MGRS. Moreover, several important measures, such as the accuracy measure α, the quality of approximation γ, and the precision of approximation π, are presented and re-interpreted in terms of a classical set-based measure, the Marczewski-Steinhaus metric, and the inclusion degree measure. A concept of approximation reduct is introduced to describe the smallest attribute subset that preserves the lower approximation and upper approximation of all decision classes in MGRS. Finally, we discuss how to extract decision rules using MGRS. Unlike the decision rules ("AND" rules) obtained from Pawlak's rough set model, the decision rules in MGRS take the form of "OR" rules. Several pivotal algorithms are also designed, which are helpful for applying this theory to practical issues. The multi-granulation rough set model provides an effective approach for problem solving in the context of multiple granulations.
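For two equivalence relations P and Q on the universe U, the multi-granulation approximations take roughly the following (optimistic) form, given here as a sketch of the idea rather than the paper's full generality:

    \[ \underline{X}_{P+Q} = \{\, x \in U : [x]_P \subseteq X \ \text{or}\ [x]_Q \subseteq X \,\}, \qquad \overline{X}_{P+Q} = \big( \underline{(X^{c})}_{P+Q} \big)^{c} . \]

An object thus belongs to the lower approximation as soon as at least one of the granulations supports its membership, whereas Pawlak's single-granulation model requires support from the one available equivalence relation; the "OR" form of the resulting decision rules reflects this disjunctive definition.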

17.

The aim of this paper is to examine the conclusions drawn by Osherson and Smith ["On the adequacy of prototype theory as a theory of concepts", Cognition 9 (1981), pp. 35-58] concerning the inadequacy of the apparatus of fuzzy set theory to represent concepts. Since Osherson and Smith derive their conclusions from specific examples, we show for each of these examples that the respective conclusion they arrive at is not warranted. That is, we demonstrate that fuzzy set theory is sufficiently expressive to represent the various strong intuitions and experimental evidence regarding the relation between simple and combined concepts that are described by Osherson and Smith. To pursue our arguments, we introduce a few relevant notions of fuzzy set theory.
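A few of the notions involved, listed here only as a reminder of the standard apparatus: for membership functions A, B : U → [0, 1], the standard fuzzy operations are

    \[ (A \cap B)(u) = \min(A(u), B(u)), \qquad (A \cup B)(u) = \max(A(u), B(u)), \qquad \bar{A}(u) = 1 - A(u), \]

and fuzzy set theory additionally admits whole families of alternative intersections and unions (t-norms and t-conorms), so the apparatus is considerably richer than the standard operations alone.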

18.
19.
This paper investigates formal notions of computation in nonstandard models of the weak arithmetic theory ET (the theory of exponential time). It is shown that ET is sufficiently weak that many of the natural notions of computation are not preserved. A slightly richer theory, ET(Elem), is introduced, and it is shown that all sets that have infinitely many easily decidable initial segments are easily decidable in certain nonstandard models of this theory.

20.