Similar Literature
20 similar documents retrieved.
1.
The application of Data Envelopment Analysis (DEA) as a tool for efficiency evaluation has become widespread in public and private sector organizations. Since decision makers are often interested in a complete ranking of the evaluated units according to their performance, procedures that effectively discriminate the units are of key importance for designing intelligent decision support systems to measure and evaluate different alternatives for a better allocation of resources. This paper proposes a new method for ranking alternatives that uses common-weight DEA under a multiobjective optimization approach. The concept of distance to an ideal is thereby used as a means of selecting a single set of weights that puts all the decision units in as favorable a position as possible simultaneously. Some numerical examples and a thorough computational experiment show that the approach followed here provides sound results for ranking alternatives and outperforms other known methods in discriminating the alternatives, therefore encouraging its use as a valuable decision tool for managers and policy makers.
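A minimal sketch of the common-weight idea under assumed data (the input/output matrices, the squared distance-to-ideal objective, and the use of a general-purpose optimizer are illustrative, not the paper's exact multiobjective formulation): one set of weights is chosen for all units by minimizing the total gap between each unit's common-weight efficiency and the ideal value of 1, and the resulting scores induce a complete ranking.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: 5 DMUs, 2 inputs (X) and 2 outputs (Y).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0, 2.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])

def gap_to_ideal(w, p=2):
    """Sum of p-th power gaps between each unit's common-weight efficiency
    and the ideal efficiency of 1."""
    v, u = w[:X.shape[1]], w[X.shape[1]:]          # input / output weights
    eff = (Y @ u) / (X @ v)                        # common-weight efficiencies
    return np.sum(np.abs(1.0 - eff) ** p)

n_w = X.shape[1] + Y.shape[1]
res = minimize(gap_to_ideal, x0=np.ones(n_w),
               bounds=[(1e-6, None)] * n_w, method="L-BFGS-B")

v, u = res.x[:X.shape[1]], res.x[X.shape[1]:]
scores = (Y @ u) / (X @ v)
ranking = np.argsort(-scores)                      # complete ranking of DMUs
print(scores.round(3), (ranking + 1).tolist())
```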

2.
A unified approach to ranking in probabilistic databases
Ranking is a fundamental operation in data analysis and decision support and plays an even more crucial role if the dataset being explored exhibits uncertainty. This has led to much work in recent years on understanding how to rank the tuples in a probabilistic dataset. In this article, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criterion optimization problem and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. We contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called PRF^ω and PRF^e, that generalize or can approximate many of the previously proposed ranking functions. We present novel generating-function-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and/xor trees or Markov networks. We further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. Finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially PRF^e, at approximating other ranking functions, and the scalability of our proposed algorithms for exact or approximate ranking.
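For the special case of mutually independent tuples, the definition PRF^e(t) = Σ_i α^i · Pr(rank(t) = i) collapses to a closed form via its rank generating function, PRF^e(t) = p_t · α · Π_{t' with higher score} (1 − p_{t'} + p_{t'}·α). The sketch below uses that reading of the generating-function idea with made-up (score, probability) pairs; correlated tuples (and/xor trees, Markov networks) require the paper's more general algorithms.

```python
# Tuples as (name, score, existence probability); independence is assumed.
tuples = [("t1", 0.90, 0.3), ("t2", 0.80, 0.9),
          ("t3", 0.70, 0.5), ("t4", 0.60, 0.8)]

def prf_e(tuples, alpha=0.95):
    """PRF^e(t) = sum_i alpha^i * Pr(rank(t) = i) for independent tuples.
    Evaluating the rank generating function at alpha gives
    PRF^e(t) = p_t * alpha * prod_{t' with higher score} (1 - p' + p' * alpha)."""
    ordered = sorted(tuples, key=lambda x: -x[1])   # descending score
    scores, prefix = {}, 1.0
    for name, _, p in ordered:
        scores[name] = p * alpha * prefix           # contribution of t itself
        prefix *= (1.0 - p + p * alpha)             # fold t into the product
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(prf_e(tuples))   # ranking induced by PRF^e
```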

3.
In this paper we address the problem of providing an order of relevance, or ranking, among entities' properties used in RDF datasets, Linked Data and SPARQL endpoints. We first motivate the importance of ranking RDF properties by providing two killer applications for the problem, namely property tagging and entity visualization. Motivated by the requirements of these applications, we propose to apply Machine Learning to Rank (MLR) techniques to the problem of ranking RDF properties. Our solution is based on a deep empirical study of all the dimensions involved: feature selection, the MLR algorithm, and model training. The major advantages of our approach are the following: (a) flexibility/personalization, as the properties' relevance can be user-specified by personalizing the training set in a supervised approach, or set by a novel automatic classification approach based on SWiPE; (b) speed, since it can be applied without computing frequencies over the whole dataset, leveraging existing fast MLR algorithms; (c) effectiveness, as it can be applied even when no ontology data is available by using novel dataset-independent features; (d) precision, which is high both in terms of F-measure and Spearman's rho. Experimental results show that the proposed MLR framework outperforms the two existing approaches in the literature related to RDF property ranking.

4.
One approach to data analysis in precedent-based recognition problems is investigated. The problem of searching for logical regularities of classes is considered. The concept of an elementary predicate is introduced; this predicate determines whether an object belongs to a given half-space in the feature space. Logical regularities that are disjunctive forms of elementary predicates are examined. Search methods for these logical regularities, based on constructing convex hulls of subsets of the training sample, are proposed.
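A small illustration of the notions involved, with hypothetical weights and points: an elementary predicate tests membership in a half-space of the feature space, and a logical regularity of a class is a disjunction of such predicates.

```python
import numpy as np

def elementary_predicate(w, b):
    """Elementary predicate: does x lie in the half-space {x : w.x <= b}?"""
    return lambda x: float(np.dot(w, x)) <= b

def disjunctive_regularity(predicates):
    """Logical regularity as a disjunction of elementary predicates."""
    return lambda x: any(p(x) for p in predicates)

# Hypothetical regularity for a class described by two half-spaces.
r = disjunctive_regularity([
    elementary_predicate(np.array([1.0, -1.0]), 0.0),    # x1 - x2 <= 0
    elementary_predicate(np.array([0.0, 1.0]), -2.0),    # x2 <= -2
])
print(r(np.array([1.0, 3.0])), r(np.array([3.0, 1.0])))  # True, False
```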

5.
The problem of employing linguistic variables to rank alternatives across a set of criteria is investigated, with emphasis placed on modelling the decision-maker's reasoning process. Given a set of alternatives, decision-makers often do not make conclusions immediately, instead evaluating them in the light of a given set of criteria and then synthesizing the knowledge obtained from the evaluation. Both evaluation and synthesis are usually expressed linguistically rather than numerically. A fuzzy mathematical model is employed to represent this sort of linguistic evaluation and synthesis. An example is presented to illustrate the basic idea and technique. This example is also used to compare the proposed technique with another existing technique. The model can be employed in expert systems for making inferences on multi-criteria problems.

6.
Logical inference is of central importance in the information and decision sciences but presents a very hard computational problem. Since the traditional symbolic inference methods have had limited success on large knowledge bases, this paper investigates a quantitative approach. It surveys the application of integer programming methods to inference problems in propositional logic. It displays a number of remarkable parallels between logic and mathematics and shows that these can lead to fast inference methods, both quantitative and symbolic. In particular it explains why the logical concepts of resolution, extended resolution, input and unit refutation, the Davis-Putnam procedure, and the drawing of inferences pertinent to a given topic are closely related to the mathematical concepts of cutting planes, Chvátal's method, elementary closure, branch and bound, and projection of a polytope, respectively. Much of the paper should be intelligible to persons with limited background in logic and mathematical programming, but recent mathematical results are stated precisely.
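A toy sketch of the quantitative view the survey builds on: a clause such as x1 ∨ ¬x2 ∨ x3 becomes the 0-1 inequality x1 + (1 − x2) + x3 ≥ 1, and "premises entail g" holds exactly when the premise inequalities together with the negation of g have no 0-1 solution. The clauses below are hypothetical and the feasibility check is brute force purely for illustration; the paper's point is that cutting planes, branch and bound, and related methods solve such systems efficiently.

```python
from itertools import product

# A clause is a list of integer literals: k means x_k, -k means NOT x_k.
premises = [[1, -2], [2, 3], [-3, 1]]      # hypothetical knowledge base
goal = [1]                                 # query: is x1 entailed?

def clause_holds(clause, assignment):
    """0-1 form: sum of x_k (positive literals) + (1 - x_k) (negative) >= 1."""
    return sum(assignment[abs(l)] if l > 0 else 1 - assignment[abs(l)]
               for l in clause) >= 1

def entails(premises, goal, n_vars):
    """Premises entail the goal clause iff premises + negated goal is infeasible."""
    negated_goal = [[-l] for l in goal]    # NOT(l1 v ... v lm) = AND of unit clauses
    for bits in product((0, 1), repeat=n_vars):
        a = {i + 1: b for i, b in enumerate(bits)}
        if all(clause_holds(c, a) for c in premises + negated_goal):
            return False                   # counter-model found
    return True

print(entails(premises, goal, n_vars=3))   # True: x1 follows from the premises
```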

7.
To enhance security in dynamic networks, it is important to evaluate vulnerabilities and offer an economical and practical patching strategy, since vulnerabilities are the major driving force behind attacks. In this paper, a hybrid ranking approach is presented to estimate vulnerabilities under dynamic scenarios; it combines low-level rating of vulnerability instances with high-level evaluation of the security level of the network system. Moreover, a novel quantitative model, an adapted attack graph, is also proposed to avoid isolated scoring; it takes the dynamic and logical relations among exploits into account and significantly benefits vulnerability analysis. To validate the applicability and performance of our approach, a hybrid ranking case is implemented as an experimental platform. The ranking results show that our approach differentiates the influence levels among vulnerabilities under dynamic attack scenarios and economically enhances the security of the network system.

8.
A simple approach to ranking a group of aggregated fuzzy utilities
When ranking a large quantity of fuzzy numbers, the efficiency, accuracy, and effectiveness of the ranking process are critical. The paper considers the application of "alpha-cut" and "fuzzy arithmetic operations" to the fuzzy weighted average (FWA) method, which can be used to rank aggregated fuzzy utilities (or generalized fuzzy numbers). The purpose of this application is to make the method easier to program and the data easier to manipulate, which results in a more practical method for fuzzy decisions.
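A compact sketch of the α-cut idea for the FWA, assuming triangular fuzzy numbers and illustrative ratings/weights (this follows the generic α-cut/interval-arithmetic recipe rather than necessarily the paper's exact procedure): at each α level the ratings and weights become intervals, and since the weighted average is monotone in each individual weight, its extreme values can be found by enumerating weight-endpoint combinations.

```python
from itertools import product

def cut(tri, a):
    """alpha-cut [lower, upper] of a triangular fuzzy number (l, m, r)."""
    l, m, r = tri
    return (l + a * (m - l), r - a * (r - m))

def fwa_alpha_cut(ratings, weights, a):
    """Interval of the fuzzy weighted average sum(w*x)/sum(w) at level alpha.
    Rating endpoints are used directly; weight extremes are found by
    enumerating endpoint combinations (the FWA is monotone in each w_i)."""
    rx = [cut(t, a) for t in ratings]
    rw = [cut(t, a) for t in weights]
    lo, hi = float("inf"), float("-inf")
    for ws in product(*rw):                      # every corner of the weight box
        s = sum(ws)
        lo = min(lo, sum(w * x[0] for w, x in zip(ws, rx)) / s)
        hi = max(hi, sum(w * x[1] for w, x in zip(ws, rx)) / s)
    return lo, hi

ratings = [(2, 3, 4), (5, 6, 7), (8, 9, 10)]     # hypothetical fuzzy utilities
weights = [(0.1, 0.2, 0.3), (0.3, 0.4, 0.5), (0.3, 0.4, 0.5)]
for a in (0.0, 0.5, 1.0):
    print(a, fwa_alpha_cut(ratings, weights, a))
```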

9.
The SKADE system models expertise in corporate settlement decisions using the blackboard approach. The full model has four knowledge sources: General Counsel, Attorney, Manager and Insurance Adjuster. The combined expertise from each of these is required to make the settlement decision. A control component in the model coordinates the activities of the various knowledge sources. Based on the latest data entries on the blackboard, the control selects and executes the next knowledge source. The blackboard model reproduces the experts' opportunistic reasoning processes by the interaction between the various knowledge sources. The results of analyses of a hypothetical case through a series of experiments with the SKADE system indicate that the blackboard is an appropriate model for development of multiple cooperative expert systems in the settlement decision domain. Compared to straight rule-based models, this blackboard provides more efficient problem solving. The initial success with the blackboard model suggests that further work needs to be done to see whether more complex models can be built to incorporate a broader range of determinants of settlement decisions.

10.
The term information overload was already used back in the 1970s by Alvin Toffler in his book Future Shock, and refers to the difficulty of understanding and making decisions when too much information is available. In the era of Big Data, this problem becomes much more dramatic, since users may literally be overwhelmed by the flood of data accessible in the most varied forms. With context-aware data tailoring, given a target application, in each specific context the system allows the user to access only the view that is relevant for that application in that context. Moreover, the relative importance of information to the same user in a different context or, reciprocally, to a different user in the same context, may vary enormously; for this reason, contextual preferences can be used to further refine the views associated with contexts, by imposing a ranking on the data of each context-aware view. In this paper, we propose a methodology and a system, PREMINE (PREference MINEr), where data mining is adopted to infer contextual preferences from the past interaction of the user with contextual views over a relational database, gathering knowledge in terms of association rules between each context and the relevant data.
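A toy illustration of the mining step, with a hypothetical interaction log and thresholds: each past interaction pairs a context with the data the user actually consulted, and "context ⇒ value" association rules above support/confidence thresholds act as contextual preferences for ranking the data of that context's view.

```python
from collections import Counter

# Hypothetical log: (context, attribute value the user consulted).
log = [("on_travel", "restaurant"), ("on_travel", "restaurant"),
       ("on_travel", "museum"), ("at_work", "meeting_room"),
       ("at_work", "restaurant"), ("at_work", "meeting_room")]

def contextual_rules(log, min_support=2, min_conf=0.5):
    """Mine context => value association rules from past interactions."""
    pair_count, ctx_count = Counter(log), Counter(c for c, _ in log)
    rules = []
    for (ctx, val), n in pair_count.items():
        conf = n / ctx_count[ctx]
        if n >= min_support and conf >= min_conf:
            rules.append((ctx, val, n, round(conf, 2)))
    return sorted(rules, key=lambda r: -r[3])   # rank preferences by confidence

print(contextual_rules(log))
```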

11.
This paper proposes several goal programming (GP) models for estimating the performance measure weights of firms by means of constrained regression. Since single-criterion performance measures are often in conflict, we propose two opposed alternatives for determining multiple-criterion performance: the first is to calculate a consensus performance that reflects the majority trend of the single-criterion measures, and the other is to calculate a performance that is biased towards the measures that show the most discrepancy with the rest. GP makes it possible to model both approaches as well as a compromise between the two extremes. Using two case studies reported in the literature and introducing another one examining non-financial companies listed on the Ibex-35, we compare our proposal with other methods such as CRITIC and a modified version of TOPSIS. In order to improve the comparisons, a Monte Carlo simulation has been performed in all three case studies. Scope and purpose: The study falls into the area of multiple-criteria analysis of business performance. Firms are obliged to report a vast amount of financial information at regular intervals, and a wide range of performance measures exists for this purpose. Multicriteria performance is calculated from the single-criterion measures and is then used to draw up rankings of firms. As a complement to the other multicriteria methods described in the literature, we propose the use of GP for implementing two quite different strategies: overweighting the measures in line with the general trend or overweighting the measures that conflict with the rest. Besides the use of Spearman's correlation, we introduce two other measures for comparing the solutions obtained.

12.
This paper explores the potential of machine learning algorithms (MLAs) for the simulation of intercity networks. To this end, we implement the random forest MLA to simulate the intercity corporate networks created by Fortune China 500 firms in mainland China. The random forest MLA does not require a predefined model but detects patterns directly from the data to automatically build models. The city-dyad connectivities were computed using an interlocking network model and treated as target variables. City factors and geographical factors were treated as features. The model was trained using a 2010 training set and subsequently validated using 2010 and 2017 test sets. The results are promising, with the pseudo R2 of the model coupled with different test data ranging from 0.861 to 0.940. Nonetheless, the random forest MLA also faces some challenges in the context of the simulation of intercity networks. We conclude that MLAs are potentially useful for specific applications such as the analysis of network big data, scenario simulation in regional planning, and the detection of driving forces in exploratory research.
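A schematic of the train/validate setup with scikit-learn, using synthetic placeholder data (the real features are the paper's city and geographical factors, and the target is the interlocking-network connectivity of each city dyad); the score printed here is the ordinary R² of the regressor, standing in for the pseudo-R² reported in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
coef = rng.random(6)

# Placeholder data: one row per city dyad; columns stand in for city and
# geographical factors, the target for the dyad's connectivity (2010 and 2017).
X_2010 = rng.random((500, 6)); y_2010 = X_2010 @ coef + rng.normal(0, 0.05, 500)
X_2017 = rng.random((500, 6)); y_2017 = X_2017 @ coef + rng.normal(0, 0.05, 500)

X_train, X_test10, y_train, y_test10 = train_test_split(
    X_2010, y_2010, test_size=0.3, random_state=0)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)                      # train on the 2010 training set only

print("2010 test R2:", round(r2_score(y_test10, model.predict(X_test10)), 3))
print("2017 test R2:", round(r2_score(y_2017, model.predict(X_2017)), 3))
print("feature importances:", model.feature_importances_.round(3))
```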

13.
A program for evaluating the performance of competing ranking algorithms in stratigraphic paleontology is presented. The program (1) generates a hypothetical, and thus known, succession of taxa in time and (2) simulates their succession in strata at several local sample sites. If desired, (1) and (2) may be repeated for several iterations (50 or 100, for example) and the local site data for each iteration sent to two user routines that produce inferred rankings (inferred successions of events in time). Data for first and last occurrences (fads and lads) taken together are sent first, then data for lads-only, then data for fads-only. For each submission of data to a user routine, Kendall rank correlation coefficients and Spearman coefficients are computed comparing the inferred rankings generated by the user routine with the known succession of events in time. The performance of two competing ranking algorithms may be compared by (1) obtaining, for each submitted dataset, the differences between corresponding Kendall (and/or Spearman) coefficients computed for the two algorithms, and (2) testing the observed differences for statistical significance. A simple two-sided t-test may be used to test whether the observed mean difference between two corresponding coefficients differs significantly from zero; if c t-tests are performed, the significance level of each should be set to alpha/c to obtain a maximum experimentwise error rate of less than alpha. The program is used to compare three ranking algorithms provided by Agterberg and Nel (1982a, b) as well as to determine whether the algorithms work as well for datasets combining lads and fads versus datasets for lads-only or fads-only. Agterberg and Nel's Presorting algorithm performed better than their Ranking or Scaling algorithm. All three performed slightly but significantly better on data for lads-only or fads-only as opposed to combined data.
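The statistical comparison described above can be sketched as follows, with stand-in ranking routines (in the actual program the rankings come from the two competing user algorithms): per iteration, compute the Kendall coefficient of each inferred ranking against the known succession, then t-test the paired differences, tightening each test's significance level to alpha/c when c tests are run.

```python
import numpy as np
from scipy.stats import kendalltau, ttest_1samp

rng = np.random.default_rng(1)
true_order = np.arange(30)                     # known succession of events

def noisy_ranking(noise):
    """Stand-in for a ranking algorithm: a permutation close to the true
    order for small noise, increasingly scrambled for large noise."""
    return np.argsort(true_order + rng.normal(0, noise, true_order.size))

# Paired Kendall coefficients for two competing algorithms over 50 iterations.
diffs = []
for _ in range(50):
    tau_a, _ = kendalltau(true_order, noisy_ranking(noise=2.0))
    tau_b, _ = kendalltau(true_order, noisy_ranking(noise=5.0))
    diffs.append(tau_a - tau_b)

t, p = ttest_1samp(diffs, popmean=0.0)         # does the mean difference differ from 0?
c, alpha = 2, 0.05                             # e.g. one test for tau, one for rho
print(f"mean diff={np.mean(diffs):.3f}, t={t:.2f}, p={p:.4f}, "
      f"significant at alpha/c: {p < alpha / c}")
```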

14.
For a fixed-time, free-endpoint optimal control problem it is shown that the optimal feedback control satisfies a system of ordinary differential equations. These are obtained by eliminating the adjoint vector, which appears linearly in a set of differential equations derived from the Maximum Principle and involving Lie brackets of vector fields. An application of this approach to robotics is given.

15.
Many real-world knowledge-based systems must deal with information coming from different sources that invariably leads to incompleteness, overspecification, or inherently uncertain content. The presence of these varying levels of uncertainty doesn't mean that the information is worthless – rather, these are hurdles that the knowledge engineer must learn to work with. In this paper, we continue work on an argumentation-based framework that extends the well-known Defeasible Logic Programming (DeLP) language with probabilistic uncertainty, giving rise to the Defeasible Logic Programming with Presumptions and Probabilistic Environments (DeLP3E) model. Our prior work focused on the problem of belief revision in DeLP3E, where we proposed a non-prioritized class of revision operators called AFO (Annotation Function-based Operators) to solve this problem. In this paper, we further study this class and argue that in some cases it may be desirable to define revision operators that take quantitative aspects into account, such as how the probabilities of certain literals or formulas of interest change after the revision takes place. To the best of our knowledge, this problem has not been addressed in the argumentation literature to date. We propose the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and then go on to study the complexity of several problems related to their specification and application in revising knowledge bases. Finally, we present an algorithm for computing the probability that a literal is warranted in a DeLP3E knowledge base, and discuss how it could be applied towards implementing QAFO-style operators that compute approximations rather than exact operations.

16.
Automatic document summarization aims to create a compressed summary that preserves the main content of the original documents. It is a well-recognized fact that a document set often covers a number of topic themes, with each theme represented by a cluster of highly related sentences. More importantly, topic themes are not equally important: the sentences in an important theme cluster are generally deemed more salient than the sentences in a trivial theme cluster. Existing clustering-based summarization approaches perform clustering and ranking in sequence, which unavoidably ignores the interaction between them. In this paper, we propose a novel approach based on spectral analysis that clusters and ranks sentences simultaneously. Experimental results on the DUC generic summarization datasets demonstrate the improvement of the proposed approach over the other existing clustering-based approaches.
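For contrast, here is a minimal sketch of the sequential clustering-then-ranking baseline that the paper argues against (the sentences, cluster count, and size-times-centrality scoring are all made up), just to make the two ingredients concrete; the proposed method instead couples them through a single spectral analysis.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

sentences = ["the cat sat on the mat", "a cat lay on a mat",
             "stocks fell sharply today", "markets dropped as stocks fell",
             "the dog barked at the cat"]

tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)                       # sentence similarity graph

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(sim)

# Rank: cluster importance (size here) times sentence centrality within cluster.
scores = []
for i, lab in enumerate(labels):
    members = labels == lab
    centrality = sim[i, members].mean()
    scores.append(members.sum() / len(sentences) * centrality)

order = np.argsort(-np.array(scores))                # most salient sentences first
print([sentences[j] for j in order])
```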

17.
Under the current conditions of urban and regional renewal, finding planning-policy decisions that support sustainable development is both a challenge and an opportunity. The redevelopment of former open-cast mines and shrinking processes in cities are typical examples. Decision making in such a planning context involves complex tasks and preferential selection among different, usually competing, alternatives. These alternatives result from the demands of different spatial functions and the necessity to conserve the natural environment and landscape. Modeling a planning process requires an adequate definition of the problem and identification of the main decision criteria and possible courses of action. Following environmental and institutional economic theory, we use the idea of involving stakeholders: determining and understanding their demands may lead to successful management of environmental, social, human, and economic tasks. We propose a multicriteria approach that formulates the planning problem as a multiobjective optimization problem.

18.
Website Archivability (WA) is a notion established to capture the core aspects of a website that are crucial in diagnosing whether it has the potential to be archived with completeness and accuracy. In this work, aiming at measuring WA, we introduce and elaborate on all aspects of CLEAR+, an extended version of the Credible Live Evaluation Method for Archive Readiness (CLEAR). We use a systematic approach to evaluate WA from multiple different perspectives, which we call Website Archivability Facets. We then analyse archiveready.com, a web application we created as the reference implementation of CLEAR+, and discuss the implementation of the evaluation workflow. Finally, we conduct thorough evaluations of all aspects of WA to support the validity, the reliability and the benefits of our method using real-world web data.

19.
The present work is a sequel to a recent one published in this journal, where the superiority of the ‘radial design’ for computing the ‘total sensitivity index’ was ascertained. Both concepts belong to sensitivity analysis of model output. A radial design is one in which, starting from a random point in the hyperspace of the input factors, one step is taken in turn for each factor. The procedure is iterated a number of times with a different random starting point so as to collect a sample of elementary shifts for each factor. The total sensitivity index is a powerful sensitivity measure that can be estimated from such a sample. Given the similarity between the total sensitivity index and a screening test known as the method of elementary effects (or method of Morris), we test the radial design on this method. Both methods are best practices: the total sensitivity index in the class of quantitative measures and the elementary effects in that of screening methods. We find that the radial design is indeed superior even for the computation of the elementary effects method. This opens the door to a sensitivity analysis strategy whereby the analyst can start with a small number of points (screening-wise) and then – depending on the results – possibly increase the number of points up to the computation of a fully quantitative measure. Also of interest to practitioners is that a radial design is nothing other than an iterated ‘One factor At a Time’ (OAT) approach; OAT is a radial design of size one. While OAT is not a good practice, modelers in all domains keep using it for sensitivity analysis for reasons discussed elsewhere (Saltelli and Annoni, 2010) [23]. With the present approach, modelers are offered a straightforward and economical upgrade of their OAT which maintains OAT's appeal of having just one factor moved at each step.
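A compact numpy sketch of a radial design and the elementary effects it yields, for an assumed test function; the uniformly random base and auxiliary points are illustrative (better point generators such as quasi-random sequences are discussed in this literature).

```python
import numpy as np

def model(x):                                   # assumed test function
    return x[0] + 2 * x[1] ** 2 + 0.1 * x[2]

def radial_elementary_effects(f, k, r, rng):
    """Radial design: from each base point a, move one factor at a time to the
    value it has in an auxiliary point b, and record the elementary effect."""
    ee = np.zeros((r, k))
    for j in range(r):
        a, b = rng.random(k), rng.random(k)     # base and auxiliary points
        fa = f(a)
        for i in range(k):
            x = a.copy()
            x[i] = b[i]                          # one step for factor i
            ee[j, i] = (f(x) - fa) / (b[i] - a[i])
    return ee

rng = np.random.default_rng(42)
ee = radial_elementary_effects(model, k=3, r=20, rng=rng)
mu_star = np.abs(ee).mean(axis=0)                # Morris' mu* screening measure
print("mu*:", mu_star.round(3))                  # the x[1] term should dominate
```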

20.
In recent years, cyberspace security has become one of the most active areas of information security, and as research on complex networks deepens, the two fields have become increasingly intertwined. The overall security of a network depends on the security of its individual nodes, so effectively ranking nodes by their security importance is essential: a good ranking method should place the more important nodes closer to the top. Starting from the topology of the network, this paper studies the local criticality of network nodes, extending the traditional approach by taking into account the topological influence of adjacent and second-order (two-hop) adjacent nodes. Because traditional methods rarely incorporate dynamic factors, this paper also introduces a real-time node traffic vector, so that the algorithm combines static information (the network topology) with dynamic information (node traffic at different times). Experimental results show that, when the top-n nodes of the ranking are removed, the proposed algorithm yields better ranking results than traditional methods.
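A rough sketch of combining a static local-topology score with a dynamic traffic vector, using networkx and made-up weights and data; the specific local-criticality formula, traffic model, and combination coefficients are illustrative stand-ins for those developed in the paper.

```python
import networkx as nx
import numpy as np

G = nx.erdos_renyi_graph(30, 0.15, seed=7)       # placeholder topology

def local_criticality(G, v):
    """Static part: own degree plus weighted degrees of neighbours and
    two-hop neighbours (illustrative weights)."""
    nbrs = set(G[v])
    two_hop = (set().union(*(G[u] for u in nbrs)) - nbrs - {v}) if nbrs else set()
    return (G.degree(v) + 0.5 * sum(G.degree(u) for u in nbrs)
            + 0.25 * sum(G.degree(u) for u in two_hop))

rng = np.random.default_rng(0)
traffic = rng.random((5, G.number_of_nodes()))   # placeholder flow at 5 time steps

static = np.array([local_criticality(G, v) for v in G])
dynamic = traffic.mean(axis=0)                   # average flow per node over time
score = 0.7 * static / static.max() + 0.3 * dynamic / dynamic.max()

top_n = np.argsort(-score)[:5]                   # most critical nodes first
print("top-5 nodes:", top_n.tolist())
```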
