Similar Documents
20 similar documents found (search time: 265 ms)
1.
Complex media fusion operations can be costly in terms of the time they need to process input objects. If data arrive at fusion nodes faster than the nodes can consume them, some input objects will not be processed. In this paper, we develop load shedding mechanisms that take into consideration both data quality and the expensive nature of media fusion operators. In particular, we present quality assessment models for objects and multistream fusion operators and highlight that such quality assessments may impose partial orders on objects. We argue that the most effective load control approach for fusion operators involves shedding combinations of objects rather than individual input objects. Yet, identifying suitable combinations of objects in real time is not possible without efficient combination selection algorithms. We develop efficient combination selection schemes for scenarios with different quality assessment and target characteristics. We first develop efficient combination-based load shedding for fusion operators with unambiguously monotone semantics. We then extend this to the more general ambiguously monotone case and present experimental results that show the performance gains of quality-aware combination-based load shedding strategies under the various fusion scenarios.
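The combination-based shedding idea can be illustrated with a brute-force ranker: score every cross-stream combination by a quality measure that is monotone in its inputs, keep the top k, and shed the rest. This is only an illustrative sketch (the fused-quality function and the exhaustive enumeration are assumptions; the paper develops far more efficient selection schemes):

```python
import itertools

def top_k_combinations(streams, k):
    """Rank cross-stream object combinations by fused quality.

    `streams` is a list of lists of per-object quality scores. As an
    illustrative assumption, the fused quality of a combination is the
    product of its members' qualities, which is monotone in each input.
    Returns the k best (quality, index-tuple) pairs; the rest are shed.
    """
    combos = itertools.product(*[list(enumerate(s)) for s in streams])
    scored = []
    for combo in combos:
        idx = tuple(i for i, _ in combo)
        q = 1.0
        for _, quality in combo:
            q *= quality
        scored.append((q, idx))
    scored.sort(reverse=True)
    return scored[:k]  # keep these combinations; shed the rest
```

With monotone semantics the best combination is always formed from the best object of each stream, which is what makes efficient (non-exhaustive) selection possible.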

2.
Information fusion is an efficient way to detect specified events and extract useful information, especially in the context of big data. As a large-scale data-gathering system, the Internet of Things (IoT) carries traffic with mixed timing characteristics. Both real-time observations with various delay constraints and non-real-time observations are needed in information fusion. In order to guarantee the performance of Distributed Information Fusion (DIF), this paper focuses on the communication mechanism from the perspective of real-time delivery of sensing data. An online scheduling algorithm and its distributed implementation, named Delay-Guaranteed CSMA, are proposed. Both the timing constraints and the historical transmission statistics of the sensors are taken into consideration. The simulation results show that the proposed policy achieves good delay-guaranteed satisfaction, and that the goal of real-time data delivery for distributed information fusion is achieved.

3.
Most of the decision procedures for symbolic analysis of protocols are limited to a fixed set of algebraic operators associated with a fixed intruder theory. Examples of such sets of operators comprise XOR, multiplication, and abstract encryption/decryption. In this report we give an algorithm for combining decision procedures for arbitrary intruder theories with disjoint sets of operators, provided that solvability of ordered intruder constraints, a slight generalization of intruder constraints, can be decided in each theory. This is the case for most of the intruder theories for which a decision procedure has been given. In particular, our result allows us to decide trace-based security properties of protocols that employ any combination of the above-mentioned operators with a bounded number of sessions.

4.
Semi-supervised clustering based on affinity propagation   (Cited: 31; self: 2; others: 29)
Xiao Yu, Yu Jian. Journal of Software (《软件学报》), 2008, 19(11): 2803-2813
We propose a semi-supervised clustering method based on the affinity propagation (AP) algorithm. AP performs clustering on top of a similarity matrix over the data points. For very large data sets, AP is a fast and effective clustering method, which traditional clustering algorithms such as k-centers clustering cannot match. However, for data sets with relatively complex cluster structure, AP often fails to produce good clustering results. Our method adjusts the similarity matrix of the data using known labeled data or pairwise constraints, thereby improving the clustering performance of AP. Experimental results show that the method not only improves AP's results on complex data, but also outperforms related comparison algorithms when a larger number of constraints is available.
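A minimal sketch of the similarity-matrix adjustment idea: must-link pairs are pushed to the highest similarity in the matrix, cannot-link pairs to the lowest. The specific adjustment constants are an assumption, not the paper's exact rule; the adjusted matrix can then be fed to any affinity-propagation implementation (for example scikit-learn's `AffinityPropagation(affinity='precomputed')`).

```python
import numpy as np

def adjust_similarity(S, must_link=(), cannot_link=()):
    """Inject pairwise constraints into a similarity matrix for AP.

    Must-link pairs are set to the maximum similarity in S and
    cannot-link pairs to the minimum, symmetrically. The choice of
    max/min as targets is an illustrative assumption.
    """
    S = np.array(S, dtype=float)
    hi, lo = S.max(), S.min()
    for i, j in must_link:
        S[i, j] = S[j, i] = hi
    for i, j in cannot_link:
        S[i, j] = S[j, i] = lo
    return S
```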

5.
Constrained clustering has received a lot of attention in recent years. However, the widely used pairwise constraints are not generally applicable to hierarchical clustering, where the goal is to derive a cluster hierarchy instead of a flat partition. Therefore, for the hierarchical setting we propose, based on the ideas of pairwise constraints, the use of must-link-before (MLB) constraints. In this paper, we discuss their properties and present an algorithm that is able to create a hierarchy by considering these constraints directly. Furthermore, we propose an efficient data structure for its implementation and evaluate its effectiveness with different datasets in a text clustering scenario.
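One natural reading of an MLB constraint (x, y, z) is: in the hierarchy, x must be merged with y at a deeper level (earlier, bottom-up) than with z. A sketch of checking this against a dendrogram under that interpretation (the paper's exact semantics and data structure may differ):

```python
def ancestors(node, parent):
    """Chain of nodes from `node` up to the root, inclusive.
    `parent` maps each non-root node to its parent cluster."""
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def merge_node(a, b, parent):
    """Lowest common ancestor of a and b: the cluster where they first merge."""
    up_b = set(ancestors(b, parent))
    for n in ancestors(a, parent):
        if n in up_b:
            return n
    raise ValueError("nodes are not in the same hierarchy")

def satisfies_mlb(x, y, z, parent, depth):
    """True if x merges with y strictly deeper than x merges with z."""
    return depth[merge_node(x, y, parent)] > depth[merge_node(x, z, parent)]
```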

6.
7.
Dempster’s rule is traditionally interpreted as an operator for fusing belief functions. While there are different types of belief fusion, there has been considerable confusion regarding the exact type of operation that Dempster’s rule performs. Many alternative operators for belief fusion have been proposed, where some are based on the same fundamental principle as Dempster’s rule, and others have a totally different basis, such as the cumulative and averaging fusion operators. In this article, we analyze Dempster’s rule from a statistical and frequentist perspective and compare it with cumulative and averaging belief fusion. We prove, and illustrate by examples on colored balls, that Dempster’s rule in fact represents a method for serial combination of stochastic constraints. Consequently, Dempster’s rule is not a method for cumulative fusion of belief functions under the assumption that subjective beliefs are an extension of frequentist beliefs. Having identified the true nature of Dempster’s rule, appropriate applications of Dempster’s rule of combination are described such as the multiplication of orthogonal belief functions, and the combination of preferences dictated by different parties.
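Dempster's rule of combination has a standard closed form: the combined mass of a non-empty set A is the conflict-normalized sum of products m1(B)·m2(C) over all pairs with B ∩ C = A. A minimal sketch over a small frame of discernment:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to mass)
    with Dempster's rule: intersect focal elements, multiply masses,
    discard mass falling on the empty set, renormalize by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}
```

For example, with m1 = {a: 0.6, {a,b}: 0.4} and m2 = {b: 0.5, {a,b}: 0.5}, the conflict is 0.6·0.5 = 0.3 and the combined masses are renormalized by 0.7.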

8.
More and more data fusion models contain state constraints that carry valuable information for the filtering process. In this study, an optimal risk-sensitive filter with quasi-equality constraints is formulated...

9.
Local Search Genetic Algorithms for the Job Shop Scheduling Problem   (Cited: 6; self: 1; others: 6)
In previous work, we developed three deadlock removal strategies for the job shop scheduling problem (JSSP) and proposed a hybridized genetic algorithm for it. While the genetic algorithm (GA) gave promising results, its performance depended greatly on the choice of deadlock removal strategy employed. This paper introduces a genetic-algorithm-based scheduling scheme that is deadlock free. This is achieved through the choice of chromosome representation and genetic operators. We propose an efficient solution representation for the JSSP in which the job task ordering constraints are easily encoded. Furthermore, we propose a problem-specific crossover operator that ensures all solutions generated through genetic evolution are feasible. Hence, both constraint checking and repair mechanisms can be avoided, resulting in increased efficiency. A mutation-like operator geared towards local search is also proposed, which further improves the solution quality. Lastly, a hybrid strategy reinforcing the genetic algorithm with a tabu search is developed. An empirical study is carried out to test the proposed strategies.
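The deadlock-free property of operation-based encodings can be illustrated with the common job-repetition chromosome, in which each job ID appears once per operation and any permutation decodes to a feasible schedule. This is a standard construction used to illustrate the idea, not necessarily the paper's exact representation:

```python
def decode(chromosome, jobs):
    """Decode a job-repetition chromosome into a schedule.

    `jobs[j]` is the ordered list of (machine, duration) operations of
    job j; `chromosome` contains each job ID exactly len(jobs[j]) times.
    Operations are placed greedily in chromosome order, so every
    chromosome yields a feasible, deadlock-free schedule and no repair
    step is needed. Returns the makespan.
    """
    next_op = [0] * len(jobs)          # next operation index per job
    job_ready = [0] * len(jobs)        # time each job becomes free
    machine_ready = {}                 # time each machine becomes free
    makespan = 0
    for j in chromosome:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0))
        end = start + duration
        job_ready[j] = end
        machine_ready[machine] = end
        next_op[j] += 1
        makespan = max(makespan, end)
    return makespan
```

Because crossover and mutation only permute job IDs, every offspring still decodes to a valid schedule.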

10.
Time-constrained service plays an important role in ubiquitous services. However, the resource constraints of ubiquitous computing systems make it difficult to satisfy the timing requirements of the supported services. In this study, we study scheduling strategies for mobile data programs with timing constraints in the form of deadlines. Unlike previously proposed scheduling algorithms for mobile systems, which aim to minimize the mean access time, our goal is to identify scheduling algorithms for ubiquitous systems that ensure requests meet their deadlines. We present a study of the performance of traditional real-time strategies and demonstrate that traditional real-time algorithms do not always perform best in a mobile environment. We propose an efficient scheduling algorithm, called Scheduling Priority of Mobile data with Time constraints (SPMT), which is designed for timely delivery of data to mobile clients. The experimental results show that our approach outperforms the other approaches across the considered performance criteria.

11.
POTMiner: mining ordered, unordered, and partially-ordered trees   (Cited: 1; self: 0; others: 1)
Non-linear data structures are becoming more and more common in data mining problems. Trees, in particular, are amenable to efficient mining techniques. In this paper, we introduce a scalable and parallelizable algorithm to mine partially-ordered trees. Our algorithm, POTMiner, is able to identify both induced and embedded subtrees in such trees. As special cases, it can also handle both completely ordered and completely unordered trees.

12.
Prior knowledge of the input–output problems often leads to supervised learning restrictions that can hamper the multi-layered perceptron’s (MLP) capacity to find an optimal solution. Restrictions such as fixing weights and modifying input variables may influence the potential convergence of the back-propagation algorithm. This paper will show mathematically how to handle such constraints in order to obtain a modified version of the traditional MLP capable of solving targeted problems. More specifically, it will be shown that fixing particular weights according to prior information as well as transforming incoming inputs can enable the user to limit the MLP search to a desired type of solution. The ensuing modifications pertaining to the learning algorithm will be established. Moreover, four supervised improvements will offer insight on how to control the convergence of the weights towards an optimal solution. Finally, applications involving packing and covering problems will be used to illustrate the potential and performance of this modified MLP.
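The weight-fixing constraint can be sketched with a gradient mask: entries whose mask is 0 keep their prescribed value while the rest are trained by back-propagation as usual. A toy NumPy illustration of the masking mechanism only (data, architecture, and the choice of which weights to fix are all placeholder assumptions, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data.
X = rng.normal(size=(64, 3))
y = X.sum(axis=1, keepdims=True) ** 2

# One hidden tanh layer; mask[i, j] = 0 means W1[i, j] stays fixed.
W1 = rng.normal(size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
mask = np.ones_like(W1)
mask[0, :] = 0.0                    # fix the first input's fan-out weights
W1_fixed = W1.copy()

lr = 0.01
for _ in range(200):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    # Backward pass (mean squared error).
    grad_out = 2 * (out - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    # Updates; the mask zeroes the update on fixed weights.
    W2 -= lr * grad_W2
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * grad_W1 * mask
    b1 -= lr * grad_h.sum(axis=0)
```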

13.
We consider the Weighted Constraint Satisfaction Problem which is an important problem in Artificial Intelligence. Given a set of variables, their domains and a set of constraints between variables, our goal is to obtain an assignment of the variables to domain values such that the weighted sum of satisfied constraints is maximized. In this paper, we present a new approach based on randomized rounding of semidefinite programming relaxation. Besides having provable worst-case bounds for domain sizes 2 and 3, our algorithm is simple and efficient in practice, and produces better solutions than some other polynomial-time algorithms such as greedy and randomized local search.
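For domain size 2, the rounding step resembles Goemans–Williamson hyperplane rounding: the SDP relaxation assigns each variable a unit vector, and a random hyperplane converts the vectors to boolean values. A sketch of only the rounding step, assuming the vectors have already been produced by an SDP solver (which is omitted here):

```python
import numpy as np

def round_vectors(vectors, constraints, rng, trials=100):
    """Random-hyperplane rounding: vectors -> a {0,1} assignment.

    `constraints` is a list of (weight, predicate) pairs, where each
    predicate takes the full assignment array and returns True when
    satisfied. Repeats the randomized rounding and keeps the best
    assignment found.
    """
    vectors = np.asarray(vectors, dtype=float)
    best_val, best = -1.0, None
    for _ in range(trials):
        r = rng.normal(size=vectors.shape[1])      # random hyperplane normal
        x = (vectors @ r >= 0).astype(int)         # side of the hyperplane
        val = sum(w for w, pred in constraints if pred(x))
        if val > best_val:
            best_val, best = val, x
    return best, best_val
```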

14.
In this paper we consider the problem of discovering sequential patterns while handling time constraints as defined in the Gsp algorithm. While sequential patterns can be seen as temporal relationships between facts embedded in the database, where the considered facts are merely characteristics of individuals or observations of individual behavior, generalized sequential patterns aim to provide the end user with a more flexible handling of the transactions embedded in the database. We thus propose a new efficient algorithm, called Gtc (Graph for Time Constraints), for mining such patterns in very large databases. It is based on the idea that handling time constraints in an earlier stage of the data mining process can be highly beneficial. One of the most significant new features of our approach is that the handling of time constraints can easily be taken into account in traditional levelwise approaches, since it is carried out prior to and separately from the counting step of a data sequence. Our tests show that the proposed algorithm performs significantly faster than a state-of-the-art sequence mining algorithm.
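Gsp-style time constraints (min-gap and max-gap between consecutive pattern elements) can be checked per data sequence. A small backtracking matcher, sketched under the simplifying assumption that each pattern element is a single item rather than an itemset:

```python
def supports(sequence, pattern, min_gap=0, max_gap=float("inf")):
    """Does a time-sorted sequence [(time, item), ...] contain `pattern`
    (a list of items, in order) such that every consecutive pair of
    matched elements is separated by a gap in [min_gap, max_gap]?
    Simple backtracking search."""
    def match(start, prev_time, k):
        if k == len(pattern):
            return True
        for i in range(start, len(sequence)):
            t, item = sequence[i]
            if item != pattern[k]:
                continue
            if prev_time is not None:
                gap = t - prev_time
                if gap < min_gap:
                    continue
                if gap > max_gap:
                    break       # times are sorted: later gaps only grow
            if match(i + 1, t, k + 1):
                return True
        return False
    return match(0, None, 0)
```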

15.
Locality-preserved maximum information projection   (Cited: 3; self: 0; others: 3)
Dimensionality reduction is usually involved in the domains of artificial intelligence and machine learning. Linear projection of features is of particular interest for dimensionality reduction since it is simple to calculate and to analyze analytically. In this paper, we propose an essentially linear projection technique, called locality-preserved maximum information projection (LPMIP), to identify the underlying manifold structure of a data set. LPMIP considers both the within-locality and the between-locality in the process of manifold learning. Equivalently, the goal of LPMIP is to preserve the local structure while simultaneously maximizing the out-of-locality (global) information of the samples. Different from principal component analysis (PCA), which aims to preserve the global information, and locality-preserving projections (LPP), which favors preserving the local structure of the data set, LPMIP seeks a tradeoff between the global and local structures, adjusted by a parameter alpha, so as to find a subspace that detects the intrinsic manifold structure for classification tasks. Computationally, by constructing the adjacency matrix, LPMIP is formulated as an eigenvalue problem. LPMIP yields orthogonal basis functions and completely avoids the singularity problem that exists in LPP. Further, we develop an efficient and stable LPMIP/QR algorithm for implementing LPMIP, especially on high-dimensional data sets. Theoretical analysis shows that conventional linear projection methods such as (weighted) PCA, maximum margin criterion (MMC), linear discriminant analysis (LDA), and LPP can be derived from the LPMIP framework by setting different graph models and constraints. Extensive experiments on face, digit, and facial expression recognition show the effectiveness of the proposed LPMIP method.

16.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms and layer-wise learning algorithms, several drawbacks remain to be overcome. In particular, two major drawbacks are convergence to local minima and long learning times. We propose an efficient learning method for FNNs that combines the BP strategy with layer-by-layer optimization. More precisely, we construct the layer-wise optimization method using the Taylor series expansion of the nonlinear operators describing a FNN and propose to update the weights of each layer by the BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces the learning time, and improves generalization in comparison with other well-known methods.

17.
We introduce a unified optimization framework for geometry processing based on shape constraints. These constraints preserve or prescribe the shape of subsets of the points of a geometric data set, such as polygons, one-ring cells, volume elements, or feature curves. Our method is based on two key concepts: a shape proximity function and shape projection operators. The proximity function encodes the distance of a desired least-squares fitted elementary target shape to the corresponding vertices of the 3D model. Projection operators are employed to minimize the proximity function by relocating vertices in a minimal way to match the imposed shape constraints. We demonstrate that this approach leads to a simple, robust, and efficient algorithm that allows implementing a variety of geometry processing applications, simply by combining suitable projection operators. We show examples for computing planar and circular meshes, shape space exploration, mesh quality improvement, shape-preserving deformation, and conformal parametrization. Our optimization framework provides a systematic way of building new solvers for geometry processing and produces similar or better results than state-of-the-art methods.
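A shape projection operator can be as simple as least-squares fitting a plane to a polygon's vertices and moving each vertex the minimal distance onto it. A sketch of such a planarization operator, one of many possible projection operators in such a framework:

```python
import numpy as np

def project_to_plane(points):
    """Project 3D points onto their least-squares best-fit plane.

    The plane passes through the centroid; its normal is the right
    singular vector with the smallest singular value of the centered
    point matrix. Each point moves by its signed distance along the
    normal, the minimal relocation that makes the set coplanar."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    normal = Vt[-1]                      # direction of least variance
    offsets = (P - centroid) @ normal    # signed distances to the plane
    return P - np.outer(offsets, normal)
```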

18.
In this paper we present a framework for the cooperation of symbolic and propagation-based numerical solvers over the real numbers. This cooperation is expressed in terms of fixed points of closure operators over a complete lattice of constraint systems. In the second part we instantiate this framework to a particular cooperation scheme, where propagation is associated with pruning operators implementing interval algorithms that enclose the possible solutions of constraint systems, whereas symbolic methods are mainly devoted to generating redundant constraints. When carefully chosen, the addition of redundant constraints is well known to drastically improve the performance of systems based on local consistency (e.g. Prolog IV or Newton). We propose a method which computes sets of redundant polynomials called partial Gröbner bases and show on some benchmarks the advantages of such computations.
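The pruning operators mentioned above narrow interval domains without discarding any solution. A minimal sketch for the single constraint x + y = z; real interval solvers compose many such contractors and iterate them to a fixed point:

```python
def contract_sum(x, y, z):
    """Contract interval domains (lo, hi) under the constraint x + y = z.

    Each domain is intersected with what the other two imply:
    x within z - y, y within z - x, z within x + y.
    Raises ValueError if a domain becomes empty (no solution)."""
    def meet(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        if lo > hi:
            raise ValueError("inconsistent constraint system")
        return (lo, hi)

    x = meet(x, (z[0] - y[1], z[1] - y[0]))
    y = meet(y, (z[0] - x[1], z[1] - x[0]))
    z = meet(z, (x[0] + y[0], x[1] + y[1]))
    return x, y, z
```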

19.
In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail, i.e., it may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to ignore data.

20.
Searching the hypothesis space bounded below by a bottom clause is the basis of several state-of-the-art ILP systems (e.g. Progol, Aleph). These systems use refinement operators together with search heuristics to explore a bounded hypothesis space. It is known that the search space of these systems is limited to a sub-graph of the general subsumption lattice. However, the structure and properties of this sub-graph have not been properly characterised. In this paper, we first characterise the hypothesis space considered by the ILP systems which use a bottom clause to constrain the search. In particular, we discuss refinement in Progol as a representative of these ILP systems. Secondly, we study the lattice structure of this bounded hypothesis space. Thirdly, we give a new analysis of refinement operators, least generalisation and greatest specialisation in the subsumption order relative to a bottom clause. The results of this study are important for a better understanding of the constrained refinement space of ILP systems such as Progol and Aleph, which have proved successful for solving real-world problems (despite being incomplete with respect to the general subsumption order). Moreover, characterising this refinement sub-lattice can lead to more efficient ILP algorithms and operators for searching this particular sub-lattice. For example, it is shown that, unlike for the general subsumption order, efficient least generalisation operators can be designed for the subsumption order relative to a bottom clause.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)    京ICP备09084417号-23
