Similar Documents
20 similar documents found (search time: 31 ms)
1.
The present paper aims at showing that there are times when set-theoretical knowledge increases in a non-cumulative way. In other words, what we call ‘set theory’ is not one theory which grows by the simple addition of one theorem after another, but a finite sequence of theories T_1, ..., T_n in which T_{i+1}, for 1 ≤ i < n, supersedes T_i. This thesis has great philosophical significance because it implies that there is a sense in which mathematical theories, like the theories belonging to the empirical sciences, are fallible and that, consequently, mathematical knowledge has a quasi-empirical nature. The way I have chosen to provide evidence in favour of the correctness of the main thesis of this article consists in arguing that Cantor–Zermelo set theory is a Lakatosian Mathematical Research Programme (MRP).

2.
A k-dissimilarity D on a finite set X, |X| ≥ k, is a map from the set of size-k subsets of X to the real numbers. Such maps naturally arise from edge-weighted trees T with leaf-set X: given a subset Y of X of size k, D(Y) is defined to be the total length of the smallest subtree of T with leaf-set Y. In case k = 2, it is well known that 2-dissimilarities arising in this way can be characterized by the so-called “4-point condition”. However, in case k > 2, Pachter and Speyer (2004) recently posed the following question: given an arbitrary k-dissimilarity, how do we test whether this map comes from a tree? In this paper, we provide an answer to this question, showing that for k ≥ 3 a k-dissimilarity on a set X arises from a tree if and only if its restriction to every 2k-element subset of X arises from some tree, and that 2k is the least possible subset size to ensure that this is the case. As a corollary, we show that there exists a polynomial-time algorithm to determine when a k-dissimilarity arises from a tree. We also give a 6-point condition for determining when a 3-dissimilarity arises from a tree, similar to the aforementioned 4-point condition.
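The k = 2 case can be made concrete. The sketch below is a hypothetical example (the node names and edge weights are made up): it builds a small edge-weighted tree, computes the induced 2-dissimilarity as path lengths between leaves, and verifies the 4-point condition, i.e. that of the three pairwise sums over any quadruple of leaves, the two largest are equal.

```python
from itertools import combinations

# Small edge-weighted tree on leaves a, b, c, d with internal nodes u, v:
#   a --2-- u --1-- v --3-- c
#   b --1-- u       v --1-- d
edges = {("a", "u"): 2, ("b", "u"): 1, ("u", "v"): 1,
         ("c", "v"): 3, ("d", "v"): 1}

# All-pairs path lengths (Floyd-Warshall is overkill for a tree,
# but simple and correct on any small weighted graph).
nodes = sorted({n for e in edges for n in e})
INF = float("inf")
d = {(i, j): (0 if i == j else INF) for i in nodes for j in nodes}
for (i, j), w in edges.items():
    d[(i, j)] = d[(j, i)] = w
for k in nodes:
    for i in nodes:
        for j in nodes:
            d[(i, j)] = min(d[(i, j)], d[(i, k)] + d[(k, j)])

def four_point(x, y, u, v):
    """4-point condition: of the three pairwise sums,
    the two largest must be equal."""
    s = sorted([d[(x, y)] + d[(u, v)],
                d[(x, u)] + d[(y, v)],
                d[(x, v)] + d[(y, u)]])
    return s[1] == s[2]

leaves = ["a", "b", "c", "d"]
assert all(four_point(*q) for q in combinations(leaves, 4))
```

Here the three sums over {a, b, c, d} are 7, 9 and 9, so the condition holds, as it must for any dissimilarity induced by a tree.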

3.
Canonical Variate Analysis (CVA) is one of the most useful of multivariate methods. It is concerned with separating between- and within-group variation among N samples from K populations with respect to p measured variables. The Mahalanobis distances between the K group means can be represented as points in a (K − 1)-dimensional space and approximated in a smaller space, with the variables shown as calibrated biplot axes. Within-group variation may also be shown, together with circular confidence regions and other convex prediction regions, which may be used to discriminate new samples. This type of representation extends to what we term Analysis of Distance (AoD), whenever a Euclidean inter-sample distance is defined. Although the N × N distance matrix of the samples, which may be large, is required, eigenvalue calculations are needed only for the much smaller K × K matrix of distances between group centroids. All the ancillary information that is attached to a CVA analysis is available in an AoD analysis. We outline the theory and the R programs we developed to implement AoD by presenting two examples.
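The computational point — that only the K × K matrix of centroid distances needs an eigenvalue decomposition — can be sketched with classical multidimensional scaling. This is an illustrative Python reimplementation on made-up data, not the authors' R programs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: N = 60 samples in p = 4 variables from K = 3 groups.
groups = [rng.normal(loc=m, size=(20, 4)) for m in (0.0, 2.0, 5.0)]
centroids = np.array([g.mean(axis=0) for g in groups])   # K x p

# K x K matrix of squared Euclidean distances between centroids.
diff = centroids[:, None, :] - centroids[None, :, :]
D2 = (diff ** 2).sum(axis=-1)

# Classical MDS: double-centre -0.5 * D2 and eigendecompose --
# only this small K x K problem is solved, never an N x N one.
K = D2.shape[0]
J = np.eye(K) - np.ones((K, K)) / K
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:K - 1]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# The (K-1)-dimensional coordinates reproduce the centroid distances.
D_hat = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
assert np.allclose(D_hat, np.sqrt(D2), atol=1e-8)
```

Three centroids always fit exactly in K − 1 = 2 dimensions; with more groups the leading coordinates give the best low-dimensional approximation, as in the CVA plot.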

4.
In this philosophical paper, we explore computational and biological analogies to address the fine-tuning problem in cosmology. We first clarify what it means for physical constants or initial conditions to be fine-tuned. We review important distinctions such as the dimensionless and dimensional physical constants, and the classification of constants proposed by Lévy-Leblond. Then we explore how two great analogies, computational and biological, can give new insights into our problem. This paper includes a preliminary study to examine the two analogies. Importantly, analogies are both useful and fundamental cognitive tools, but can also be misused or misinterpreted. The idea that our universe might be modelled as a computational entity is analysed, and we discuss the distinction between physical laws and initial conditions using algorithmic information theory. Smolin introduced the theory of “Cosmological Natural Selection” with a biological analogy in mind. We examine an extension of this analogy involving intelligent life. We discuss if and how this extension could be legitimated.

5.
Why Axiomatize?     
Axiomatization is uncommon outside mathematics, partly for being often viewed as embalming, partly because the best-known axiomatizations have serious shortcomings, and partly because it has had only one eminent champion, namely David Hilbert (Math Ann 78:405–415, 1918). The aims of this paper are (a) to describe what will be called dual axiomatics, for it concerns not just the formalism, but also the meaning (reference and sense) of the key concepts; and (b) to suggest that every instance of dual axiomatics presupposes some philosophical view or other. To illustrate these points, a theory of solidarity will be crafted and axiomatized, and certain controversies in both classical and quantum physics, as well as in the philosophy of mind, will be briefly discussed. The upshot of this paper is that dual axiomatics, unlike the purely formal axiomatics favored by the structuralist school, is not a luxury but a tool that helps resolve some scientific controversies.

6.
Ontological Frameworks for Scientific Theories
A close examination of the literature on ontology may strike one with roughly two distinct senses of this word. According to the first of them, which we shall call traditional ontology, ontology is characterized as the a priori study of various “ontological categories”. In a second sense, which may be called naturalized ontology, ontology relies on our best scientific theories and from them tries to derive the ultimate furniture of the world. From a methodological point of view these two senses of ontology are very far apart. Here, we discuss a possible relationship between these senses and argue that they may be made compatible and complement each other. We also examine how logic, understood as a linguistic device dealing with the conceptual framework of a theory and its basic inference patterns, must be taken into account in this kind of study. The idea guiding our proposal may be put as follows: naturalized ontology checks the applicability of the ontological categories proposed by traditional ontology and gives substantial feedback to it. The adequate expression of some of the resulting ontological frameworks may require a different logic. We conclude with a discussion of the case of orthodox quantum mechanics, arguing that this theory exemplifies the kind of relationship between the two senses of ontology. We also argue that the view proposed here may throw some light on ontological questions concerning this theory.

7.
In this paper we propose the concept of structural similarity as a relaxation of blockmodeling in social network analysis. Most previous approaches attempt to relax the constraints on partitions, for instance, that of being a structural or regular equivalence to being approximately structural or regular, respectively. In contrast, our approach is to relax the partitions themselves: structural similarities yield similarity values instead of equivalence or non-equivalence of actors, while strictly obeying the requirement made for exact regular equivalences. Structural similarities are based on a vector space interpretation and yield efficient spectral methods that, in a more restrictive manner, have been successfully applied to difficult combinatorial problems such as graph coloring. While traditional blockmodeling approaches have to rely on local search heuristics, our framework yields algorithms that are provably optimal for specific data-generation models. Furthermore, the stability of structural similarities can be well characterized, making them suitable for the analysis of noisy or dynamically changing network data.
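The vector-space interpretation can be illustrated with a minimal spectral sketch (an illustrative analogue, not the authors' exact construction): embed each actor as its row in the leading eigenvectors of the adjacency matrix, and read a graded similarity off the cosine between these position vectors instead of a binary equivalence.

```python
import numpy as np

# Toy network with two "roles": actors 0-2 are each tied to all of
# actors 3-5 (complete bipartite), so actors within a role have
# identical ties and should come out maximally similar.
A = np.zeros((6, 6))
for i in range(3):
    for j in range(3, 6):
        A[i, j] = A[j, i] = 1.0

# Spectral positions: rows of the eigenvectors belonging to the two
# largest-magnitude eigenvalues of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:2]
pos = vecs[:, idx]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_within = cosine(pos[0], pos[1])   # same role: similarity ~ 1
sim_across = cosine(pos[0], pos[3])   # different roles: ~ 0 here
assert sim_within > 0.99
```

Structurally equivalent actors land on the same spectral position, so the similarity degrades gracefully (rather than collapsing to non-equivalence) when ties are perturbed.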

8.
In this paper I offer a fresh interpretation of Leibniz’s theory of space, in which I explain the connection of his relational theory to both his mathematical theory of analysis situs and his theory of substance. I argue that the elements of his mature theory are not bare bodies (as on a standard relationalist view) nor bare points (as on an absolutist view), but situations. Regarded as an accident of an individual body, a situation is the complex of its angles and distances to other co-existing bodies, founded in the representation or state of the substance or substances contained in the body. The complex of all such mutually compatible situations of co-existing bodies constitutes an order of situations, or instantaneous space. Because these relations of situation change from one instant to another, space is an accidental whole that is continuously changing and becoming something different, and therefore a phenomenon. As Leibniz explains to Clarke, it can be represented mathematically by supposing some set of existents hypothetically (and counterfactually) to remain in a fixed mutual relation of situation, and gauging all subsequent situations in terms of transformations with respect to this initial set. Space conceived in terms of such allowable transformations is the subject of Analysis Situs. Finally, insofar as space is conceived in abstraction from any bodies that might individuate the situations, it encompasses all possible relations of situation. This abstract space, the order of all possible situations, is an abstract entity, and therefore ideal.

9.
One of the most interesting and entertaining philosophical discussions of the last few decades is the discussion between Daniel Dennett and John Searle on the existence of intrinsic intentionality. Dennett denies the existence of phenomena with intrinsic intentionality. Searle, however, is convinced that some mental phenomena exhibit intrinsic intentionality. In my view, this discussion has been obscured by some serious misunderstandings with regard to the concept ‘intrinsic intentionality’. For instance, most philosophers fail to realize that it is possible that the intentionality of a phenomenon is partly intrinsic and partly observer relative. Moreover, many philosophers are mixing up the concepts ‘original intentionality’ and ‘intrinsic intentionality’. In fact, there is, in the philosophical literature, no strict and unambiguous definition of the concept ‘intrinsic intentionality’. In this article, I will try to remedy this. I will also try to give strict and unambiguous definitions of the concepts ‘observer relative intentionality’, ‘original intentionality’, and ‘derived intentionality’. These definitions will be used for an examination of the intentionality of formal mathematical systems. In conclusion, I will make a comparison between the (intrinsic) intentionality of formal mathematical systems on the one hand, and the (intrinsic) intentionality of human beings on the other.

10.
In this paper I propose a new approach to the foundation of mathematics: non-monotonic set theory. I present two completely different methods to develop set theories based on adaptive logics. For both theories there is a finitistic non-triviality proof and both theories contain (a subtle version of) the comprehension axiom schema. The first theory contains only a maximal selection of instances of the comprehension schema that do not lead to inconsistencies. The second allows for all the instances, also the inconsistent ones, but restricts the conclusions one can draw from them in order to avoid triviality. The theories have enough expressive power to form a justification/explication for most of the established results of classical mathematics. They are therefore not limited by Gödel’s incompleteness theorems. This remarkable result is possible because of the non-recursive character of the final proofs of theorems of non-monotonic theories. I shall argue that, precisely because of the computational complexity of these final proofs, we cannot claim that non-monotonic theories are ideal foundations for mathematics. Nevertheless, thanks to their strength, first order language and the recursive dynamic (defeasible) proofs of theorems of the theory, the non-monotonic theories form (what I call) interesting pragmatic foundations.

11.
When is conceptual change so significant that we should talk about a new theory, not a new version of the same theory? We address this problem here, starting from Gould’s discussion of the individuation of the Darwinian theory. He locates his position between two extremes: ‘minimalist’—a theory should be individuated merely by its insertion in a historical lineage—and ‘maximalist’—exhaustive lists of necessary and sufficient conditions are required for individuation. He imputes the minimalist position to Hull and attempts a reductio: this position leads us to give the same ‘name’ to contradictory theories. Gould’s ‘structuralist’ position requires both ‘conceptual continuity’ and descent for individuation. Hull’s attempt to assimilate into his general selectionist framework Kuhn’s notion of ‘exemplar’ and the ‘semantic’ view of the structure of scientific theories can be used to counter Gould’s reductio, and also to integrate structuralist and population thinking about conceptual change.

12.
In this paper we will propose an empirical analysis of spatial and temporal boundaries. Unlike other proposals, which deal mainly with the commonsense level of the subject, we will ground our explication on well-established scientific practice and language. In this way we show how to reconsider in an innovative way questions such as the distinction between bona fide and fiat boundaries, and the thickness and ownership of boundaries. At the same time we propose a division between ex mensura boundaries and qui vulgo dicuntur boundaries. “What is it, therefore, that divides the atmosphere from the water?” [Leonardo da Vinci, Notebooks]

13.
In the selection of a hierarchical clustering method, theoretical properties may give some insight into which method is most suitable for a given clustering problem. Herein, we study some basic properties of two hierarchical clustering methods: α-unchaining single linkage, or SL(α), and a modified version of it, SL*(α). We compare the results with the properties satisfied by the classical linkage-based hierarchical clustering methods.
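For reference, classical single linkage — the method that the α-unchaining variants modify — can be written as a naive agglomerative loop. This is a generic baseline sketch on made-up one-dimensional data; the paper's SL(α) and its modification are not implemented here.

```python
# Naive single-linkage agglomeration on a small 1-D point set.
# Inter-cluster distance = minimum pairwise distance (single linkage).
points = [0.0, 0.4, 0.5, 5.0, 5.2]
clusters = [[p] for p in points]
merges = []  # linkage value at which each merge happens

def single_link(c1, c2):
    return min(abs(a - b) for a in c1 for b in c2)

while len(clusters) > 1:
    # Find the closest pair of clusters and merge them.
    i, j = min(
        ((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
        key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]),
    )
    merges.append(single_link(clusters[i], clusters[j]))
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]

# Merge heights grow monotonically, so the dendrogram is well defined;
# here cutting below the final jump (4.5) recovers the two clusters.
assert merges == sorted(merges)
```

The well-known defect this exhibits is chaining: any bridge of closely spaced points would fuse the two clusters at a low height, which is precisely the behaviour the α-unchaining modification is designed to control.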

14.
In the course of the history of science, some concepts have forged theoretical foundations, constituting paradigms that hold sway for substantial periods of time. Research on the history of explanations of the action of one body on another is a testament to the periodic revival of one theory in particular, namely the theory of ether. Even after the foundation of modern physics, the notion of ether has directly and indirectly withstood the test of time. Through a philosophical analysis of physics, this article will explore how certain aspects of the concept of ether have appeared in different branches of the history of science.

15.
16.
In his Foundations of a General Theory of Manifolds, Georg Cantor praised Bernard Bolzano as a clear defender of actual infinity who had the courage to work with infinite numbers. At the same time, he sharply criticized the way Bolzano dealt with them. Cantor’s concept was based on the existence of a one-to-one correspondence, while Bolzano insisted on Euclid’s Axiom of the whole being greater than a part. Cantor’s set theory has eventually prevailed, and became a formal basis of contemporary mathematics, while Bolzano’s approach is generally considered a step in the wrong direction. In the present paper, we demonstrate that a fragment of Bolzano’s theory of infinite quantities retaining the part-whole principle can be extended to a consistent mathematical structure. It can be interpreted in several possible ways. We obtain either a linearly ordered ring of finite and infinitely great quantities, or a partially ordered ring containing infinitely small, finite and infinitely great quantities. These structures can be used as a basis of the infinitesimal calculus similarly as in non-standard analysis, whether in its full version employing ultrafilters due to Abraham Robinson, or in the recent “cheap version” avoiding ultrafilters due to Terence Tao.

17.
Relativity Theory by Albert Einstein has so far been little considered by cognitive scientists, notwithstanding its undisputed scientific and philosophical moment. Unfortunately, we don't have a diary or notebook as cognitively useful as Faraday's. But physics historians and philosophers have done a great job that is relevant both for the study of the scientist's reasoning and the philosophy of science. I will try here to highlight the fertility of a ‘triangulation’ using cognitive psychology, history of science and philosophy of science in starting to answer a clearly very complex question: why did Einstein discover Relativity Theory? Here we are not much concerned with the unending question of precisely what Einstein discovered, which still remains unanswered, for we have no consensus over the exact nature of the theory's foundations (Norton 1993). We are mainly interested in starting to answer the ‘how question’, and especially the following sub-question: what (presumably) were his goals and strategies in his search? I will base my argument on fundamental publications of Einstein, aiming at pointing out a theory-specific heuristic, setting both a goal and a strategy: covariance/invariance. The result has significance for theory formation in science, especially in concept and model building. It also raises other questions that go beyond the aim of this paper: why was he so confident in such a heuristic? Why didn't many other scientists use it? Where did he get such a heuristic? Do we have any other examples of similar heuristic search in other scientific problem solving?

18.
In this study, we consider the type of interval data that summarize original samples (individuals) given as classical point data. This type of interval data is termed interval symbolic data in a new research domain called symbolic data analysis. Most existing representations, such as the (centre, radius) and [lower boundary, upper boundary] forms, describe an interval using only its boundaries. However, these representations hold true only under the assumption that the individuals contained in the interval follow a uniform distribution. In practice, such representations may result not only in inconsistency with the facts, since the individuals are usually not uniformly distributed in many applications, but also in information loss, since the point data within the intervals are not considered during the calculation. In this study, we propose a new representation of interval symbolic data that takes the point data contained in the intervals into account. We then apply the city-block distance metric to the new representation and propose a dynamic clustering approach for interval symbolic data. A simulation experiment is conducted to evaluate the performance of our method. The results show that, when the individuals contained in the interval do not follow a uniform distribution, the proposed method significantly outperforms the Hausdorff and city-block distances based on the traditional representation in the context of dynamic clustering. Finally, we give an application example on the automobile data set.
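The two baseline distances the paper compares against can be written directly from the boundary representation. The snippet below is an illustrative sketch on made-up intervals, including one assignment step of a k-means-style dynamic clustering; the paper's proposed representation, which uses the point data inside each interval, is not reproduced here.

```python
# Interval = (lower boundary, upper boundary).

def city_block(i1, i2):
    """City-block distance between intervals: sum of boundary gaps."""
    (a1, b1), (a2, b2) = i1, i2
    return abs(a1 - a2) + abs(b1 - b2)

def hausdorff(i1, i2):
    """Hausdorff distance between intervals: max of boundary gaps."""
    (a1, b1), (a2, b2) = i1, i2
    return max(abs(a1 - a2), abs(b1 - b2))

def assign(intervals, prototypes, dist):
    """Assignment step of dynamic clustering: nearest prototype."""
    return [min(range(len(prototypes)),
                key=lambda k: dist(iv, prototypes[k]))
            for iv in intervals]

intervals = [(0.0, 1.0), (0.2, 1.1), (5.0, 7.0), (5.5, 7.2)]
prototypes = [(0.0, 1.0), (5.0, 7.0)]
labels = assign(intervals, prototypes, city_block)
assert labels == [0, 0, 1, 1]
```

A full dynamic clustering would alternate this assignment step with refitting the prototypes until the partition stabilizes.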

19.
With scale relativity theory, Laurent Nottale has provided a powerful conceptual and mathematical framework, with numerous validated predictions, that has fundamental implications and applications for all sciences. We discuss how this extended framework, reviewed in Nottale (Found Sci 152 (3):101–152, 2010a), may help facilitate integration across multiple size and time frames in systems biology, and the development of a scale-relative biology with increased explanatory power.

20.
This paper studies the Lockean thesis from the perspective of contemporary epistemic logic. The Lockean thesis states that belief can be defined as ‘sufficiently high degree of belief’. Its main problem is that it gives rise to a notion of belief which is not closed under conjunction. This problem is typical for classical epistemic logic: it is single-agent and static. I argue that from the perspective of contemporary epistemic logic, the Lockean thesis fares much better. I briefly mention that it can successfully be extended from single-agent to multi-agent settings. More importantly, I show that accepting the Lockean thesis (and a more sophisticated version for conditional beliefs) leads to a significant and unexpected unification in the dynamic behavior of (conditional) belief and high (conditional) probability with respect to public announcements. This constitutes a methodological argument in favor of the Lockean thesis. Furthermore, if one accepts Baltag’s Erlangen program for epistemology, this technical observation has even stronger philosophical implications: because belief and high probability display the same dynamic behavior, it is plausible that they are indeed one and the same epistemic notion.
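The conjunction-closure failure the abstract mentions can be verified with a two-line numerical example. This is a standard textbook case, a minimal sketch assuming a Lockean threshold of 0.9 and two probabilistically independent propositions (both assumptions are illustrative):

```python
THRESHOLD = 0.9   # assumed Lockean threshold; any t < 1 shows the effect

p_a, p_b = 0.9, 0.9        # each proposition is individually believed
p_ab = p_a * p_b           # = 0.81, assuming independence

def believes(p):
    return p >= THRESHOLD

assert believes(p_a) and believes(p_b)
assert not believes(p_ab)  # belief is not closed under conjunction
```

Iterating the same construction (the lottery paradox) makes a long conjunction of individually believed propositions arbitrarily improbable, which is why closure under conjunction must be given up once the Lockean thesis is accepted.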
