Similar Documents
Found 20 similar documents (search time: 312 ms)
1.
This paper proposes a hybrid approach for solving the multidepot vehicle routing problem (MDVRP) with a limited number of identical vehicles per depot. Our approach, which only uses a few parameters, combines “biased randomization”—the use of nonsymmetric probability distributions to generate randomness—with the iterated local search (ILS) metaheuristic. Two biased‐randomized processes are employed at different stages of the ILS framework in order to (a) assign customers to depots following a randomized priority criterion, which allows for fast generation of alternative allocation maps, and (b) improve routing solutions associated with a “promising” allocation map by randomizing the classical savings heuristic. These biased‐randomized processes rely on the geometric probability distribution, which is characterized by a single, bounded parameter. Because it uses only a few parameters, our algorithm does not require troublesome fine‐tuning processes, which tend to be time consuming. Using standard benchmarks, the computational experiments show the efficiency of the proposed algorithm. Despite its hybrid nature, our approach is relatively easy to implement and can be parallelized in a very natural way, which makes it an interesting alternative for practical applications of the MDVRP.
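
As a rough illustration of the biased-randomization idea (a minimal sketch, not the authors' implementation; the savings list and the beta parameter below are made up), a quasi-geometric distribution can be used to pick from a descending-sorted savings list so that better merges are favored without being chosen deterministically:

```python
import math
import random

def biased_pick(sorted_items, beta=0.3):
    """Pick and remove one element from a descending-sorted list using a
    quasi-geometric distribution: position k is chosen with probability roughly
    beta * (1 - beta)**k, so top-ranked items are favored but never mandatory."""
    n = len(sorted_items)
    k = int(math.log(1.0 - random.random()) / math.log(1.0 - beta)) % n
    return sorted_items.pop(k)

# Illustrative (made-up) Clarke-Wright savings list, sorted by descending savings.
savings = [("A", "B", 12.5), ("B", "C", 9.1), ("A", "C", 7.4), ("C", "D", 3.2)]
while savings:
    i, j, s = biased_pick(savings)
    print("next merge candidate:", i, j, "savings", s)
```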

2.
What does a “normal” computer (or social) network look like? How can we spot “abnormal” sub-networks in the Internet, or web graph? The answer to such questions is vital for outlier detection (terrorist networks, or illegal money-laundering rings), forecasting, and simulations (“how will a computer virus spread?”). The heart of the problem is finding the properties of real graphs that seem to persist across multiple disciplines. We list such patterns and “laws”, including the “min-cut plots” we discovered. This is the first part of our NetMine package: given any large graph, it provides visual feedback about these patterns; any significant deviations from the expected patterns can thus be immediately flagged by the user as abnormalities in the graph. The second part of NetMine is the A-plots tool for visualizing the adjacency matrix of the graph in innovative new ways, again to find outliers. Third, NetMine contains the R-MAT (Recursive MATrix) graph generator, which can successfully model many of the patterns found in real-world graphs and quickly generate realistic graphs, capturing the essence of each graph in only a few parameters. We present results on multiple, large real graphs, where we show the effectiveness of our approach.
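
A minimal sketch of the R-MAT recursion (the quadrant probabilities below are commonly used illustrative values, not necessarily those of the paper): each edge is placed by repeatedly choosing one of the four quadrants of the adjacency matrix.

```python
import random

def rmat_edge(scale, a=0.57, b=0.19, c=0.19, d=0.05):
    """Generate one edge of a 2**scale-node graph by recursively choosing
    one of the four adjacency-matrix quadrants with probabilities a, b, c, d."""
    row, col = 0, 0
    for level in range(scale):
        r = random.random()
        bit = 1 << (scale - level - 1)
        if r < a:            # top-left quadrant
            pass
        elif r < a + b:      # top-right
            col |= bit
        elif r < a + b + c:  # bottom-left
            row |= bit
        else:                # bottom-right
            row |= bit
            col |= bit
    return row, col

# Example: a small graph with 2**10 nodes and up to 5000 distinct edges.
edges = {rmat_edge(10) for _ in range(5000)}
print(len(edges), "edges generated")
```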

3.
In this paper, we introduce item-centric mining, a new semantics for mining long-tailed datasets. Our algorithm, TopPI, finds for each item its top-k most frequent closed itemsets. While most mining algorithms focus on the globally most frequent itemsets, TopPI guarantees that each item is represented in the results, regardless of its frequency in the database. TopPI allows users to efficiently explore Web data, answering questions such as “what are the k most common sets of songs downloaded together with the ones of my favorite artist?”. When processing retail data consisting of 55 million supermarket receipts, TopPI finds the itemset “milk, puff pastry” that appears 10,315 times, but also “frangipane, puff pastry” and “nori seaweed, wasabi, sushi rice”, which occur only 1120 and 163 times, respectively. Our experiments with analysts from the marketing department of our retail partner demonstrate that item-centric mining discovers valuable itemsets. We also show that TopPI can serve as a building block to approximate complex itemset ranking measures such as the p-value. Thanks to efficient enumeration and pruning strategies, TopPI avoids the search space explosion induced by mining low-support itemsets. We show how TopPI can be parallelized on multi-cores and distributed on Hadoop clusters. Our experiments on datasets with different characteristics show the superiority of TopPI when compared to standard top-k solutions, and to Parallel FP-Growth, its closest competitor.
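
To make the item-centric semantics concrete, here is a deliberately naive Python sketch (brute-force pair counting with no closed-itemset enumeration or pruning, so it is not TopPI itself; the baskets are invented): every item, however rare, gets its own top-k list.

```python
from collections import Counter, defaultdict
from itertools import combinations

def item_centric_topk(transactions, k=2):
    """Naive illustration of item-centric mining: for every item, return its k
    most frequent 2-item co-occurrence sets (brute force, no pruning)."""
    counts = defaultdict(Counter)
    for basket in transactions:
        for x, y in combinations(sorted(set(basket)), 2):
            counts[x][(x, y)] += 1
            counts[y][(x, y)] += 1
    return {item: c.most_common(k) for item, c in counts.items()}

baskets = [["milk", "puff pastry"], ["milk", "puff pastry", "eggs"],
           ["frangipane", "puff pastry"], ["nori seaweed", "wasabi", "sushi rice"]]
print(item_centric_topk(baskets))
```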

4.
Successful implementations of simple direct adaptive control techniques in various domains of application have been presented over the last two decades in the technical literature. The theoretical background concerning the basic conditions needed for stability of the controller and the open questions relating to the convergence of the adaptive gains have been clarified recently, yet only for the continuous‐time algorithms. Apparently, asymptotic tracking in discrete-time systems has been possible only with step input commands, and the scope of the so-called “almost strictly positive real (ASPR)” condition has also remained unclear. This paper expands the feasibility of the discrete simple adaptive control methodology to include any desired input commands and almost all real‐world systems. The proofs of stability are also rigorously revised to settle the question of the ultimate adaptive gain values, which has remained open until now. Finally, a complex algebraic loop that seemed to be inherent in discrete ASPR systems and might prevent the use of passivity properties in discrete systems has also been eliminated.
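
For readers unfamiliar with the flavor of such algorithms, below is a minimal SISO sketch of a discrete-time adaptive output-feedback gain update (the plant, adaptation rate, and command signal are invented for illustration; this is not the algorithm or the proof developed in the paper):

```python
# Minimal SISO illustration of a discrete adaptive output-feedback gain
# (illustrative first-order plant and coefficients; not the paper's algorithm).
a_plant, b_plant = 0.9, 0.5        # assumed stable first-order plant
gamma = 0.05                       # adaptation rate
y, k_e = 0.0, 0.0
for k in range(200):
    y_ref = 1.0 if k < 100 else -1.0   # piecewise-constant command
    e = y_ref - y                      # tracking error
    k_e = k_e + gamma * e * e          # integral-type adaptive gain update
    u = k_e * e                        # adaptive output feedback
    y = a_plant * y + b_plant * u      # plant step
print("final output %.3f, final adaptive gain %.3f" % (y, k_e))
```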

5.
Flow-level information is important for many applications in network measurement and analysis. In this work, we tackle the “Top Spreaders” and “Top Scanners” problems, where hosts that are spreading the largest numbers of flows, especially small flows, must be efficiently and accurately identified. The identification of these top users can be very helpful in network management, traffic engineering, application behavior analysis, and anomaly detection. We propose novel streaming algorithms and a “Filter-Tracker-Digester” framework to catch the top spreaders and scanners online. Our framework combines sampling and streaming algorithms, as well as deterministic and randomized algorithms, in such a way that they can effectively help each other to improve accuracy while reducing memory usage and processing time. To our knowledge, we are the first to tackle the “Top Scanners” problem in a streaming way. We address several challenges, namely: traffic scale, skewness, speed, memory usage, and result accuracy. The performance bounds of our algorithms are derived analytically, and are also evaluated on both real and synthetic traces, where we show that our algorithms achieve accuracy and speed at least an order of magnitude better than existing approaches.
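
For contrast with the streaming approach, a naive exact baseline for the top-spreaders question can be written in a few lines (invented flow records; this keeps one set per source host, exactly the kind of per-host state that streaming frameworks are designed to avoid):

```python
from collections import defaultdict
import heapq

def top_spreaders(flow_records, k=3):
    """Naive exact baseline (not the paper's streaming framework): count the
    distinct flows started by each source host and return the k largest."""
    flows_per_host = defaultdict(set)
    for src, dst, dport in flow_records:
        flows_per_host[src].add((dst, dport))
    return heapq.nlargest(k, flows_per_host.items(), key=lambda kv: len(kv[1]))

records = [("10.0.0.1", "10.0.0.9", 80), ("10.0.0.1", "10.0.0.7", 22),
           ("10.0.0.2", "10.0.0.9", 443), ("10.0.0.1", "10.0.0.8", 25)]
for host, flows in top_spreaders(records):
    print(host, "spread", len(flows), "flows")
```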

6.
This paper presents results of a comparative study with the objective of identifying the most effective and efficient way of applying a local search method embedded in a hybrid algorithm. The hybrid metaheuristic employed in this study is called “DE–HS–HJ” because it comprises two cooperative metaheuristic algorithms, i.e., differential evolution (DE) and harmony search (HS), and one local search (LS) method, i.e., Hooke and Jeeves (HJ) direct search. Eighteen different ways of using HJ local search were implemented, and all of them were evaluated on 19 problems in terms of six performance indices covering both accuracy and efficiency. Statistical analyses were conducted accordingly to determine the significance of performance differences. The test results show that, overall, the best three LS application strategies are: applying local search to every generated solution with a specified probability and also to each newly updated solution (NUS + ESP), applying local search to every generated solution with a specified probability (ESP), and applying local search to every generated solution with a specified probability and also to the updated current global best solution (EUGbest + ESP). ESP is found to be the best local search application strategy in terms of success rate. Integrating it with NUS further improves the overall performance. EUGbest + ESP is the most efficient, and it is also able to achieve a high level of accuracy (fourth place in terms of success rate, with an average above 0.9).
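
The HJ component is a classical pattern search; a minimal sketch (illustrative step sizes and test function, not the paper's configuration) looks like this:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Minimal Hooke-Jeeves pattern search: exploratory moves along each axis,
    followed by a pattern move; the step shrinks when no improvement is found."""
    def explore(base, fbase, h):
        x, fx = list(base), fbase
        for i in range(len(x)):
            for d in (+h, -h):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x, fx = list(x0), f(list(x0))
    while step > tol:
        xn, fn = explore(x, fx, step)
        if fn < fx:
            # pattern move: extrapolate along the improving direction
            pattern = [2 * a - b for a, b in zip(xn, x)]
            fp = f(pattern)
            x, fx = (pattern, fp) if fp < fn else (xn, fn)
        else:
            step *= shrink
    return x, fx

# Example on a simple quadratic bowl (minimum near (1, -2)).
print(hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]))
```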

7.
During our digital social life, we share terabytes of information that can potentially reveal private facts and personality traits to unexpected strangers. Despite the research efforts aimed at providing efficient solutions for the anonymization of huge databases (including networked data), in online social networks the most powerful privacy protection “weapons” are the users themselves. However, most users are not aware of the risks arising from the indiscriminate disclosure of their personal data. Moreover, even when social networking platforms allow their participants to control the privacy level of every published item, adopting a correct privacy policy is often an annoying and frustrating task, and many users prefer to adopt simple but extreme strategies such as “visible-to-all” (exposing themselves to the highest risk) or “hidden-to-all” (wasting the positive social and economic potential of social networking websites). In this paper we propose a theoretical framework to (i) measure the privacy risk of users and alert them whenever their privacy is compromised, and (ii) help users semi-automatically customize their privacy settings by limiting the number of manual operations. By investigating the relationship between the privacy measure and the privacy preferences of real Facebook users, we show the effectiveness of our framework.

8.
Observing that many visual effects (depth‐of‐field, motion blur, soft shadows, spectral effects) and several sampling modalities (time, stereo or light fields) can be expressed as a sum of many pinhole camera images, we suggest a novel efficient image synthesis framework that exploits coherency among those images. We introduce the notion of “distribution flow”, which represents the 2D image deformation in response to changes in the high‐dimensional time‐, lens‐, area light‐, spectral‐, etc. coordinates. Our approach plans the optimal traversal of the distribution space of all required pinhole images, such that, starting from one representative root image which is incrementally changed (warped) in a minimal fashion, pixels move by at most one pixel, if at all. The incremental warping allows extremely simple warping code, typically requiring half a millisecond per pinhole image on an Nvidia GeForce GTX 980 Ti GPU. We show how the bounded sampling introduces very little error in comparison to re‐rendering or a common warping‐based solution. Our approach allows efficient previews for arbitrary combinations of distribution effects and imaging modalities with little noise and high visual fidelity.

9.
Case studies in asynchronous data parallelism
Is the owner-computes style of parallelism, captured in a variety of data parallel languages, attractive as a paradigm for designing explicit control parallel codes? This question gives rise to a number of others. Will such use be unwieldy? Will the resulting code run well? What can such an approach offer beyond merely replicating, in a more labor-intensive way, the services and coverage of data parallel languages? We investigate these questions via a simple example and “real-world” case studies developed using C-Linda, a language for explicit parallel programming formed by the merger of C with the Linda coordination language. The results demonstrate that owner-computes is an effective design strategy in Linda.

10.
In the after-assembly block manufacturing process in the shipbuilding industry, domain experts and industrial managers have the following questions as a first step toward reducing the overhead transportation cost caused by irregularities not defined in the process design: “What tasks are bottlenecks?” and “How long do the blocks remain waiting in stockyards?” We provide the answers to these two questions. Within the process mining framework, we propose a method that automatically extracts the most frequent task flows from transport usage histories. Considering the characteristics of our application, we use a clustering technique to identify heterogeneous groups of process instances, and then derive a process model independently for each group. Process models extracted from real-world transportation logs are verified by domain experts and labelled based on their interpretations. Consequently, we conceptualize the “standard process” from one global process model. Moreover, local models derived from groups of process instances reflect previously unknown context regarding the characteristics of blocks. Our proposed method can provide conceptualized process models and process (or stockyard waiting) times as a performance indicator. By providing reasonable answers to these questions, it helps domain experts better understand and manage the actual process. Extending the conventional methodology to our application problem, the main contributions of this research are that our proposed approach provides insight into the after-assembly block manufacturing process and describes the first step toward reducing transportation costs.
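
The basic counting step behind "most frequent task flows" can be illustrated with a tiny directly-follows counter (the traces below are invented; the paper's method additionally clusters process instances before deriving per-group models):

```python
from collections import Counter

def frequent_task_flows(event_log, top=3):
    """Count directly-follows task pairs across traces (a basic process-mining
    discovery step) and return the most frequent flows."""
    flows = Counter()
    for trace in event_log:                 # each trace: ordered list of tasks
        for a, b in zip(trace, trace[1:]):
            flows[(a, b)] += 1
    return flows.most_common(top)

log = [["assembly", "stockyard", "painting", "outfitting"],
       ["assembly", "painting", "outfitting"],
       ["assembly", "stockyard", "painting"]]
print(frequent_task_flows(log))
```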

11.
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state‐of‐the‐art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real‐world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.

12.
We present a new theoretical framework for analyzing simulated annealing. The behavior of simulated annealing depends crucially on the “energy landscape” associated with the optimization problem: the landscape must have special properties if annealing is to be efficient. We prove that certain fractal properties are sufficient for simulated annealing to be efficient in the following sense: If a problem is scaled to have best solutions of energy 0 and worst solutions of energy 1, a solution of expected energy no more than ε can be found in time polynomial in 1/ε, where the exponent of the polynomial depends on certain parameters of the fractal. Higher-dimensional versions of the problem can be solved with almost identical efficiency. The cooling schedule used to achieve this result is the familiar geometric schedule of annealing practice, rather than the logarithmic schedule of previous theory. Our analysis is more realistic than those of previous studies of annealing in the constraints we place on the problem space and the conclusions we draw about annealing's performance. The mode of analysis is also new: Annealing is modeled as a random walk on a graph, and recent theorems relating the “conductance” of a graph to the mixing rate of its associated Markov chain generate both a new conceptual approach to annealing and new analytical, quantitative methods. The efficiency of annealing is compared with that of random sampling and descent algorithms. While these algorithms are more efficient for some fractals, their run times increase exponentially with the number of dimensions, making annealing better for problems of high dimensionality. We find that a number of circuit placement problems have energy landscapes with fractal properties, thus giving for the first time a reasonable explanation of the successful application of simulated annealing to problems in the VLSI domain.
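
For reference, the geometric cooling schedule discussed above takes the form T_k = T_0 · α^k; a minimal annealing sketch using it on an invented one-dimensional landscape (not one of the paper's circuit placement instances) is:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, alpha=0.95, steps=5000):
    """Simulated annealing with the geometric cooling schedule T_k = t0 * alpha**k
    (the schedule form analyzed in the paper; the test problem is illustrative)."""
    x, e = x0, energy(x0)
    best, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        ey = energy(y)
        if ey <= e or random.random() < math.exp(-(ey - e) / t):
            x, e = y, ey
            if e < best_e:
                best, best_e = x, e
        t *= alpha            # geometric cooling
    return best, best_e

# Toy 1-D landscape with many local minima.
f = lambda x: math.sin(5 * x) * 0.3 + (x - 2) ** 2 / 10
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, step, x0=0.0))
```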

13.
14.
This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and, more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms: one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities, in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach sometimes enables the verification to be less costly, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.
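
A textbook example of a proof labeling scheme, sketched below under the assumption of a valid input, labels each vertex of a claimed spanning tree with its distance to the root; every vertex then checks only its own label against its parent's. This standard example is illustrative and not taken from the paper itself:

```python
def assign_labels(parent, root):
    """Marker: label every vertex with its hop distance to the root of the
    claimed spanning tree (parent maps each non-root vertex to its parent)."""
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)
    dist, frontier = {root: 0}, [root]
    while frontier:
        v = frontier.pop()
        for c in children.get(v, []):
            dist[c] = dist[v] + 1
            frontier.append(c)
    return dist

def verify_locally(parent, root, dist):
    """Verifier: each vertex inspects only its own label and its parent's label;
    the distance must be 0 at the root and decrease by one along parent edges."""
    for v, d in dist.items():
        if v == root:
            if d != 0:
                return False
        elif d != dist[parent[v]] + 1:
            return False
    return True

parent = {"b": "a", "c": "a", "d": "b"}        # candidate parent pointers
labels = assign_labels(parent, "a")
print(verify_locally(parent, "a", labels))     # True for a valid rooted tree
```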

15.
This paper describes the results of a general theory of matrix codes correcting a set of given types of multiple errors. A detailed study has been made of certain matrix classes of these systematic binary error-correcting codes that will correct typical errors of some digital channels. The codes published by Elias,(2,3) Hobb's,(5) and Voukalis(11) are accounted for by this theory, and other new families of binary systematic matrix codes of arbitrary size, correcting random errors, bursts, and clusters of errors, are given here. Also presented here are the basic ideas of each of these codes. We can easily find practical decoding algorithms for each of these codes. The characteristic calculation of the parity check equations that the information matrix codebook has to satisfy is also shown. Further on, we deal with the optimum construction of these codes, showing their use in certain applications. We answer questions such as: “What is the optimum size of the code?” “What is the best structure of the code?” “What is the probability of error correction and the mean error correction performance?” Consequently, in this paper we also describe the results of an extensive search for optimum matrix codes designed to correct a given set of multiple errors, as well as their implementation.
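
As background for the matrix-code idea, the sketch below shows the simplest possible example: a cross-parity matrix code that locates a single flipped bit at the intersection of a failing row check and a failing column check. This toy code is illustrative and is not one of the code families constructed in the paper:

```python
def encode(bits, rows, cols):
    """Arrange data bits in a rows x cols matrix and append row/column parities."""
    m = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    row_par = [sum(r) % 2 for r in m]
    col_par = [sum(m[r][c] for r in range(rows)) % 2 for c in range(cols)]
    return m, row_par, col_par

def correct_single_error(m, row_par, col_par):
    """Recompute parities and flip the bit at the intersection of failing checks."""
    rows, cols = len(m), len(m[0])
    bad_rows = [r for r in range(rows) if sum(m[r]) % 2 != row_par[r]]
    bad_cols = [c for c in range(cols)
                if sum(m[r][c] for r in range(rows)) % 2 != col_par[c]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        m[bad_rows[0]][bad_cols[0]] ^= 1       # flip the erroneous bit
    return m

data = [1, 0, 1, 1, 0, 0, 1, 1, 0]
m, rp, cp = encode(data, 3, 3)
m[1][2] ^= 1                                    # inject a single-bit error
print(correct_single_error(m, rp, cp))          # recovers the original matrix
```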

16.
Consciousness, Agents and the Knowledge Game
This paper has three goals. The first is to introduce the “knowledge game”, a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question “how do you know you are not a zombie?”.

17.
Next-basket recommendation is an extremely important task in today's e-commerce domain. Traditional next-basket recommendation methods fall mainly into sequential recommendation models and general (overall) recommendation models. These methods make insufficient use of users' implicit feedback behaviors such as clicks, favorites, and add-to-cart actions, and they do not consider the time sensitivity of users' behavioral preferences. This paper proposes a next-basket recommendation method based on users' implicit feedback behaviors: user behaviors are partitioned into time windows, temporal preference features of users toward items are extracted along multiple dimensions for each window, and a convolutional neural network (CNN) model from the deep learning field is used to train the classifier. Experimental results on real-world datasets show that, compared with traditional classifiers such as linear models and tree models, the proposed CNN framework has stronger feature-extraction and generalization abilities and improves user satisfaction with the recommender system.
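
A minimal PyTorch sketch of the general architecture described (a 1-D convolution over per-window implicit-feedback features; the window count, feature dimensionality, and layer sizes are invented and are not the paper's configuration):

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN over time-windowed implicit-feedback features
# (window count, feature dimensions, and architecture are illustrative assumptions).
n_windows, n_features = 8, 6          # e.g. clicks, favorites, add-to-cart, ... per window

class NextBasketCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),  # convolve over time windows
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(16, 1)   # score: will this (user, item) pair be bought next?

    def forward(self, x):              # x: (batch, n_features, n_windows)
        return torch.sigmoid(self.head(self.conv(x).squeeze(-1)))

model = NextBasketCNN()
features = torch.rand(4, n_features, n_windows)     # 4 hypothetical (user, item) samples
print(model(features).shape)                        # -> torch.Size([4, 1])
```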

18.
Permutation flow shop scheduling (PFSP) is among the most studied scheduling settings. In this paper, a hybrid Teaching–Learning-Based Optimization algorithm (HTLBO), which combines a novel teaching–learning-based optimization algorithm for solution evolution and a variable neighborhood search (VNS) for fast solution improvement, is proposed for PFSP to determine the job sequence with minimization of the makespan criterion and minimization of the maximum lateness criterion, respectively. To convert an individual into a job permutation, the largest order value (LOV) rule is utilized. Furthermore, simulated annealing (SA) is adopted as the local search method of VNS after the shaking procedure. Experimental comparisons over public PFSP test instances with other competitive algorithms show the effectiveness of the proposed algorithm. For the DMU problems, 19 new upper bounds are obtained for the instances with the makespan criterion and 88 new upper bounds are obtained for the instances with the maximum lateness criterion.
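
One common form of the LOV rule, sketched below on an invented individual, sorts the continuous components in descending order and reads off the job sequence from the sorted indices:

```python
def largest_order_value(individual):
    """One common form of the LOV rule: sort the continuous components in
    descending order; the k-th job in the sequence is the (1-based) index of
    the k-th largest component."""
    order = sorted(range(len(individual)), key=lambda i: individual[i], reverse=True)
    return [i + 1 for i in order]

print(largest_order_value([0.6, 2.1, 0.3, 1.5]))   # -> [2, 4, 1, 3]
```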

19.
Several parameter estimation problems (or “inverse” problems), such as those that occur in hydrology and geophysics, are solved using partial differential equation (PDE)-based models of the physical system in question. Moreover, these problems are usually underdetermined due to the lack of enough data to constrain a unique solution. In this paper, we present a framework for the solution of underdetermined inverse problems using COMSOL Multiphysics (formerly FEMLAB) that is applicable to a broad range of physical systems governed by PDEs. We present a general adjoint state formulation which may be used in this framework and allows for faster calculation of sensitivity matrices in a variety of commonly encountered underdetermined problems. The aim of this approach is to provide a platform for the solution of inverse problems that is efficient, flexible, and not restricted to one particular scientific application. We present an example application of this framework on a synthetic underdetermined inverse problem in aquifer characterization, and present numerical results on the accuracy and efficiency of this method. Our results indicate that our COMSOL-based routines provide an accurate, flexible, and scalable method for the solution of PDE-based inverse problems.
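
The algebra behind the adjoint-state speed-up can be shown on a small discrete analogue (a NumPy sketch with invented matrices, not COMSOL code): for A(p)u = f and a least-squares objective, one adjoint solve yields the gradient with respect to all parameters at once.

```python
import numpy as np

# Adjoint-state gradient for a discrete model A(p) u = f with J(u) = 0.5*||u - d||^2
# (a small algebraic illustration of the adjoint idea, not COMSOL-specific code).
rng = np.random.default_rng(0)
n, m = 5, 3                                   # state size, number of parameters
A_parts = [rng.standard_normal((n, n)) for _ in range(m)]   # dA/dp_i, assumed constant
A0 = 10 * np.eye(n)
f = rng.standard_normal(n)
d = rng.standard_normal(n)                    # observed data

def assemble(p):
    return A0 + sum(pi * Ai for pi, Ai in zip(p, A_parts))

p = np.array([0.5, -0.2, 0.1])
A = assemble(p)
u = np.linalg.solve(A, f)                     # one forward solve
lam = np.linalg.solve(A.T, u - d)             # one adjoint solve (dJ/du = u - d)
grad = np.array([-lam @ (Ai @ u) for Ai in A_parts])   # dJ/dp_i = -lam^T (dA/dp_i) u

# Finite-difference check of the first gradient component.
eps = 1e-6
p_eps = p.copy()
p_eps[0] += eps
u_eps = np.linalg.solve(assemble(p_eps), f)
fd = (0.5 * np.sum((u_eps - d) ** 2) - 0.5 * np.sum((u - d) ** 2)) / eps
print(grad[0], fd)                            # the two values should agree closely
```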

20.
In this paper, we develop a theoretical framework for characterizing shapes by building blocks. We address two questions. First, how do shape correspondences induce building blocks? For this, we introduce a new representation for structuring partial symmetries (partial self‐correspondences), which we call “microtiles”. Starting from input correspondences that form point‐wise equivalence relations, microtiles are obtained by grouping connected components of points that share the same set of symmetry transformations. The decomposition is unique, requires no parameters beyond the input correspondences, and encodes the partial symmetries of all subsets of the input. The second question is: What is the class of shapes that can be assembled from these building blocks? Here, we specifically consider r‐similarity as the correspondence model, i.e., matching of local r‐neighborhoods. Our main result is that the microtiles of the partial r‐symmetries of an object S can build all objects that are (r+ε)‐similar to S for any ε > 0. Again, the construction is unique. Furthermore, we give necessary conditions for a set of assembly rules for the pairwise connection of tiles. We describe a practical algorithm for computing microtile decompositions under rigid motions, a corresponding prototype implementation, and conduct a number of experiments to visualize the structural properties in practice.
