Similar Literature
A total of 20 similar records were found (search time: 671 ms).
1.
Process plant models, which feature intrinsically complex topological relations, are important industrial artifacts in the field of Computer-Aided Design (CAD). This paper investigates the topology authentication problem for process plant models. Compared with the widely studied watermarking-based geometric information protection and authentication techniques for traditional mechanical CAD drawings, topology authentication is still in its infancy and offers very interesting potential for improvement. A semi-fragile watermarking based algorithm is proposed to address this issue. We encode the topological relations among joint plant components into watermark bits based on the Hamming code. A subset of the model's connection points is selected as mark points for watermark embedding, and the topology-sensitive watermark bits are then embedded into the selected mark points via bit substitution. Theoretical analysis and experimental results demonstrate that our approach yields a strong ability to detect and locate malicious topology attacks while achieving robustness against various non-malicious attacks.
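
For intuition only, the Python sketch below shows one way topology-sensitive bits could be derived and embedded: a 4-bit fingerprint of each pair of connected components is protected with a Hamming(7,4) codeword, and the resulting bits replace the least-significant bit of quantized mark-point coordinates. The pair fingerprint, quantization step, and LSB substitution scheme are illustrative assumptions, not the paper's exact construction.

# Hedged sketch: Hamming(7,4)-protected topology watermark (illustrative only).

def hamming74(d):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def topology_bits(connections):
    """Derive watermark bits from pairs of joint components (assumed integer IDs)."""
    bits = []
    for a, b in connections:
        nibble = [(a ^ b) >> i & 1 for i in range(4)]   # 4-bit fingerprint of the pair
        bits.extend(hamming74(nibble))
    return bits

def embed(mark_points, bits, step=1e-4):
    """Substitute the LSB of each quantized coordinate with a watermark bit."""
    out, k = [], 0
    for x, y, z in mark_points:
        coords = []
        for c in (x, y, z):
            q = int(round(c / step))
            q = (q & ~1) | bits[k % len(bits)]          # bit substitution
            k += 1
            coords.append(q * step)
        out.append(tuple(coords))
    return out

if __name__ == "__main__":
    conns = [(3, 7), (7, 12)]                           # hypothetical component ID pairs
    pts = [(1.2345, 0.5, 2.0), (3.14159, 1.0, 0.25)]
    print(embed(pts, topology_bits(conns)))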

2.
Healthy living is increasingly urgent, since it may help curb escalating health care costs. ICT (Information and Communication Technology) offers opportunities to support healthy lifestyles. Given the dose–response relation between health behaviors and their benefits, an important challenge is how to motivate participants toward large health behavior improvements. We conducted two design-and-test iterations that add ICT support to traditional face-to-face coaching formats: first web-based eDashboarding, and then the integration of existing best-of-breed mobile smartphone apps (mApps) into the coaching relationship. We extract several design lessons regarding how to design for a motivational virtuous circle and when Web or mApp support adds benefits for users. An important finding is that eCoach solutions add more value when they are part of an overall health coaching relationship. For the future, we anticipate more intelligent mobile applications for health behavior tracking and feedback, plus an increasing role for mApps in health provisioning.

3.
A. Drexl. Computing, 1988, 40(1): 1-8
The multiconstraint 0–1 knapsack problem arises when deciding how to fill a knapsack subject to multiple resource constraints. The problem is known to be NP-hard, so a "good" algorithm for its optimal solution is very unlikely to exist. We show how the concept of simulated annealing may be used to solve this problem approximately. Results on 57 data sets from the literature demonstrate that the algorithm converges very rapidly towards the optimum solution.
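
The following Python sketch conveys the flavor of such a simulated-annealing scheme for the multiconstraint 0-1 knapsack problem: a random bit flip is accepted if it keeps all resource constraints feasible and either improves the objective or passes the Metropolis test. The cooling schedule, parameters, and toy instance are illustrative assumptions, not those of the paper.

import math, random

def feasible(x, weights, capacities):
    """Check all resource constraints: sum_j w[i][j] * x[j] <= c[i]."""
    return all(sum(w_ij * x_j for w_ij, x_j in zip(row, x)) <= cap
               for row, cap in zip(weights, capacities))

def simulated_annealing(profits, weights, capacities,
                        t0=10.0, cooling=0.995, iters=20000, seed=0):
    rng = random.Random(seed)
    n = len(profits)
    x = [0] * n                          # start from the empty knapsack (always feasible)
    best, best_val = x[:], 0
    val, t = 0, t0
    for _ in range(iters):
        j = rng.randrange(n)             # flip one random item in or out
        cand = x[:]
        cand[j] ^= 1
        if feasible(cand, weights, capacities):
            cand_val = sum(p * b for p, b in zip(profits, cand))
            # Accept improvements always, worsenings with Metropolis probability.
            if cand_val >= val or rng.random() < math.exp((cand_val - val) / t):
                x, val = cand, cand_val
                if val > best_val:
                    best, best_val = x[:], val
        t *= cooling
    return best, best_val

if __name__ == "__main__":
    profits = [10, 13, 7, 8, 12]
    weights = [[2, 3, 1, 4, 3],          # one row per resource constraint
               [3, 1, 2, 2, 2]]
    capacities = [7, 6]
    print(simulated_annealing(profits, weights, capacities))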

4.
Random number generators are a core component of heuristic search algorithms. They are used to build candidate solutions and to reduce bias while transforming these solutions during the search. Despite their usefulness, random numbers also have drawbacks: one cannot guarantee that all portions of the search space are covered, and an algorithm must be run many times to assess its behavior statistically. Our objective is to determine whether deterministic quasi-random sequences can be used as an alternative to pseudo-random numbers for feeding "randomness" into Hill Climbing searches addressing Software Engineering problems. We designed and executed three experimental studies in which a Hill Climbing search was used to find solutions for two Software Engineering problems: software module clustering and requirement selection. The algorithm was executed using both pseudo-random numbers and three distinct quasi-random sequences (Faure, Halton, and Sobol). The software clustering problem was evaluated on 32 real-world instances, and the requirement selection problem was addressed using 15 instances reused from previous research. The experimental studies were chained so that as few experimental factors as possible varied between any given study and the subsequent one. Results found by searches powered by the distinct quasi-random sequences were compared with those produced by the pseudo-random search on a per-instance basis. The comparison evaluated search efficiency (processing time required to run the search) and effectiveness (quality of the results produced). Contrary to previous findings observed in the context of other heuristic search algorithms, we found evidence that quasi-random sequences cannot regularly outperform pseudo-random numbers in Hill Climbing searches. Detailed statistical analysis is provided to support the evidence favoring pseudo-random numbers.
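
To make the idea concrete, the sketch below replaces the pseudo-random draws of a basic bit-flip hill climber with a Halton low-discrepancy sequence. The neighbor-selection scheme and the toy OneMax objective are illustrative assumptions, not the experimental setup of the paper (which also used Faure and Sobol sequences and real Software Engineering instances).

import random

def halton(index, base):
    """Return the index-th element of the van der Corput / Halton sequence in the given base."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def hill_climb(objective, n_bits, steps=1000, quasi=True, seed=0):
    """Bit-flip hill climbing; the flipped position is drawn from a Halton
    sequence (quasi=True) or from Python's PRNG (quasi=False)."""
    rng = random.Random(seed)
    x = [0] * n_bits
    best = objective(x)
    for t in range(1, steps + 1):
        u = halton(t, 2) if quasi else rng.random()
        j = min(int(u * n_bits), n_bits - 1)
        x[j] ^= 1                      # tentative move
        val = objective(x)
        if val >= best:
            best = val                 # keep the move
        else:
            x[j] ^= 1                  # undo the move
    return x, best

if __name__ == "__main__":
    onemax = lambda bits: sum(bits)
    print(hill_climb(onemax, n_bits=16, steps=200, quasi=True))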

5.
In this paper, a novel image watermarking scheme is presented, based on the Divisive Normalization Transform, the Discrete Wavelet Transform, and Singular Value Decomposition. The paper attempts to address the problem of statistically and perceptually redundant wavelet coefficients used during watermarking with the help of the divisive normalization transform, while maintaining robustness and imperceptibility. The Divisive Normalization Transform is an adaptive nonlinear image representation in which each linear transform coefficient is divided by a weighted sum of coefficient amplitudes in a generalized neighbourhood. The watermark image is embedded into the singular values of the divisively normalized coefficients of the host image. The proposed algorithm provides a perceptually better-quality watermarked image while maintaining the robustness of the watermarking scheme. The proposed algorithm is therefore semi-blind, image adaptive (owing to the use of the divisive normalization transform), and suitable for establishing rightful ownership. Comparative results against various intentional and non-intentional attacks show the superiority of the algorithm.
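
A minimal sketch of the divisive normalization step described above: every subband coefficient is divided by a weighted sum of the amplitudes of coefficients in its neighborhood. The neighborhood size, weight, and stabilizing constant are assumptions for illustration; the DWT and SVD embedding stages of the scheme are omitted.

import numpy as np

def divisive_normalization(coeffs, radius=1, sigma=0.1, weight=1.0):
    """Divide each coefficient by a weighted sum of neighboring amplitudes:
    r(i, j) = c(i, j) / (sigma + weight * sum over the (2r+1)x(2r+1) window of |c|)."""
    c = np.asarray(coeffs, dtype=float)
    padded = np.pad(np.abs(c), radius, mode="edge")
    denom = np.full_like(c, sigma)
    size = 2 * radius + 1
    for di in range(size):
        for dj in range(size):
            denom += weight * padded[di:di + c.shape[0], dj:dj + c.shape[1]]
    return c / denom

if __name__ == "__main__":
    band = np.random.default_rng(0).normal(size=(8, 8))   # stand-in for a wavelet subband
    print(divisive_normalization(band).round(3))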

6.
The density-based notion of clustering is widely used due to its easy implementation and its ability to detect arbitrarily shaped clusters in the presence of noisy data points without requiring prior knowledge of the number of clusters. Density-based spatial clustering of applications with noise (DBSCAN) is the first algorithm proposed in the literature that uses the density-based notion for cluster detection. Since most real data sets today contain feature spaces with adjacent nested clusters, DBSCAN is not suitable for detecting adjacent clusters of varying density, because it relies on the global density parameters: the neighborhood radius N_rad and the minimum number of points in a neighborhood N_pts. The efficiency of DBSCAN therefore depends on these initial parameter settings: for DBSCAN to work properly, the neighborhood radius must be less than the distance between two clusters, otherwise the algorithm merges them and detects them as a single cluster. In this paper: 1) we propose an improved version of the DBSCAN algorithm that detects adjacent clusters of varying density using the concept of neighborhood difference, retaining the density-based approach without adding much computational complexity to the original DBSCAN algorithm; 2) we validate our experimental results using our recently proposed space density indexing (SDI) internal cluster measure to demonstrate the quality of the proposed clustering method. Our experimental results also suggest that the proposed method is effective in detecting adjacent nested clusters of variable density.
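
To ground the discussion, here is a compact Python version of the standard DBSCAN expansion step on which such improvements build, with the global parameters eps (the neighborhood radius N_rad) and min_pts (N_pts). The neighborhood-difference extension itself is not reproduced here.

import numpy as np

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id (-1 = noise) using plain DBSCAN."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0

    def region(i):
        return np.where(np.linalg.norm(pts - pts[i], axis=1) <= eps)[0]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = region(i)
        if len(seeds) < min_pts:
            continue                          # noise (may later be claimed by a cluster)
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                neighbors = region(j)
                if len(neighbors) >= min_pts:
                    queue.extend(neighbors)   # j is a core point: expand through it
            if labels[j] == -1:
                labels[j] = cluster           # border or core point joins the cluster
        cluster += 1
    return labels

if __name__ == "__main__":
    data = [(0, 0), (0.1, 0), (0.2, 0.1), (5, 5), (5.1, 5), (9, 9)]
    print(dbscan(data, eps=0.5, min_pts=2))   # two clusters plus one noise point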

7.
R. Jonker, A. Volgenant. Computing, 1987, 38(4): 325-340
We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems, computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented.
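
For readers who want to experiment, SciPy's linear_sum_assignment solves dense linear assignment problems with a modified Jonker-Volgenant shortest augmenting path algorithm, a descendant of the method described here. The cost matrix below is only a toy example, not data from the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy 4x4 cost matrix: cost[i][j] = cost of assigning worker i to job j.
cost = np.array([[4, 1, 3, 2],
                 [2, 0, 5, 3],
                 [3, 2, 2, 1],
                 [5, 4, 1, 2]])

rows, cols = linear_sum_assignment(cost)        # shortest-augmenting-path solver
print(list(zip(rows.tolist(), cols.tolist())))  # optimal assignment pairs
print(cost[rows, cols].sum())                   # total assignment cost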

8.
The multidimensional knapsack problem (MKP) is known to be NP-hard (indeed NP-complete), and no polynomial-time algorithm for it is known. MKP is applicable in many management, industry, and engineering fields, such as cargo loading, capital budgeting, and resource allocation. In this article, using a permutation constructed from the convex combination \(M_j=(1-\lambda ) u_j+ \lambda x^\mathrm{LP}_j\) of the pseudo-utility ratios of the MKP and the optimal solution \(x^\mathrm{LP}\) of the relaxed LP, we present a new hybrid combinatorial genetic algorithm (HCGA) for multidimensional knapsack problems. Compared with Chu's GA (J Heuristics 4:63–86, 1998), empirical results show that the new heuristic algorithm HCGA obtains better solutions over 270 standard test problem instances.
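
The sketch below illustrates only the permutation construction: pseudo-utility ratios u_j and the LP-relaxation optimum x_j^LP are blended as M_j = (1 - lambda) * u_j + lambda * x_j^LP and items are ranked by M_j. The LP is solved here with scipy.optimize.linprog; the unit surrogate multipliers, the scaling of u_j, and the value of lambda are illustrative assumptions, and the surrounding genetic algorithm is omitted.

import numpy as np
from scipy.optimize import linprog

def combinatorial_order(profits, weights, capacities, lam=0.5):
    """Rank items by M_j = (1 - lam) * u_j + lam * x_j_LP, most promising first."""
    p = np.asarray(profits, float)
    W = np.asarray(weights, float)           # shape: (constraints, items)
    c = np.asarray(capacities, float)

    # Pseudo-utility ratio with unit surrogate multipliers (an assumption),
    # scaled to [0, 1] so it blends with the LP solution.
    u = p / W.sum(axis=0)
    u = u / u.max()

    # LP relaxation: maximize p.x subject to W x <= c and 0 <= x <= 1.
    res = linprog(-p, A_ub=W, b_ub=c, bounds=[(0, 1)] * len(p), method="highs")
    x_lp = res.x

    m = (1 - lam) * u + lam * x_lp
    return np.argsort(-m)

if __name__ == "__main__":
    profits = [10, 13, 7, 8, 12]
    weights = [[2, 3, 1, 4, 3],
               [3, 1, 2, 2, 2]]
    capacities = [7, 6]
    print(combinatorial_order(profits, weights, capacities, lam=0.5))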

9.
Compared to Beowulf clusters and shared-memory machines, GPU and FPGA are emerging alternative architectures that provide massive parallelism and great computational capabilities. These architectures can be utilized to run compute-intensive algorithms to analyze ever-enlarging datasets and provide scalability. In this paper, we present four implementations of the K-means data clustering algorithm for different high performance computing platforms. These four implementations include a CUDA implementation for GPUs, a Mitrion C implementation for FPGAs, an MPI implementation for Beowulf compute clusters, and an OpenMP implementation for shared-memory machines. The comparative analyses of the cost of each platform, the difficulty level of programming for each platform, and the performance of each implementation are presented.
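
As a serial reference point for the four platform-specific versions, a plain NumPy K-means (Lloyd's algorithm) looks like the sketch below; the CUDA, Mitrion C, MPI, and OpenMP variants parallelize the same assignment and update steps. The initialization and toy data are illustrative assumptions.

import numpy as np

def kmeans(data, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: alternate point assignment and centroid update."""
    rng = np.random.default_rng(seed)
    x = np.asarray(data, float)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid for every point (the parallelizable part).
        dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: mean of each cluster's points (also parallelizable).
        new_centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    blob = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    c, lab = kmeans(blob, k=2)
    print(c.round(2))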

10.
11.
Let R be a rectangle and let P be a set of points located inside R. Our problem consists of introducing a set of line segments of least total length to partition the interior of R into rectangles. Each rectangle in a valid partition must not contain points from P as interior points. Since this partitioning problem is computationally intractable (NP-hard), we present efficient approximation algorithms for its solution. The solutions generated by our algorithms are guaranteed to be within three times the optimal solution value. Our algorithm also generates solutions within four times the optimal solution value when R is a rectilinear polygon. Our algorithm can be generalized to generate good approximation solutions for the case when R is a rectilinear polygon, there are rectilinear polygonal holes, and the sum of the lengths of the boundaries is not more than the sum of the lengths of the edges in an optimal solution.

12.
This article presents two new algorithms for finding the optimal solution of a Multi-agent Multi-objective Reinforcement Learning problem. Both algorithms use the concepts of modularization and of acceleration by a heuristic function, applied to standard Reinforcement Learning algorithms, to simplify and speed up the learning process of an agent in a multi-agent multi-objective environment. To verify the performance of the proposed algorithms, we considered a predator-prey environment in which the learning agent plays the role of prey that must escape the pursuing predator while reaching for food in a fixed location. The results show that combining modularization and acceleration by a heuristic function indeed simplifies and speeds up the learning process in a complex problem, compared with algorithms that do not use acceleration or modularization techniques, such as Q-Learning and Minimax-Q.
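
The acceleration idea can be illustrated as a heuristic term added to the greedy action choice of standard Q-learning, so that exploitation picks argmax over a of Q(s, a) + xi * H(s, a). The one-dimensional corridor world, the heuristic, and all parameters below are illustrative assumptions, not the predator-prey setup of the article, and the modularization component is omitted.

import random

def ha_q_learning(states, actions, step, reward, heuristic,
                  episodes=500, alpha=0.2, gamma=0.9, xi=1.0, eps=0.1, seed=0):
    """Q-learning whose exploitation step picks argmax_a [Q(s,a) + xi * H(s,a)]."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(50):                        # bounded episode length
            if rng.random() < eps:
                a = rng.choice(actions)            # exploration
            else:                                  # heuristic-accelerated exploitation
                a = max(actions, key=lambda a_: q[(s, a_)] + xi * heuristic(s, a_))
            s2 = step(s, a)
            r = reward(s, a, s2)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    # Toy 1-D corridor: reach state 4 from anywhere; the heuristic nudges to the right.
    states, actions = list(range(5)), [-1, +1]
    step = lambda s, a: min(max(s + a, 0), 4)
    reward = lambda s, a, s2: 1.0 if s2 == 4 else 0.0
    h = lambda s, a: 0.1 if a == +1 else 0.0
    q = ha_q_learning(states, actions, step, reward, h)
    print({s: max(actions, key=lambda a: q[(s, a)]) for s in states})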

13.
To support program comprehension, software artifacts can be labeled—for example within software visualization tools—with a set of representative words, hereby referred to as labels. Such labels can be obtained using various approaches, including Information Retrieval (IR) methods or other simple heuristics. They provide a bird's-eye view of the source code, allowing developers to look over software components quickly and make more informed decisions on which parts of the source code they need to analyze in detail. However, few empirical studies have been conducted to verify whether the extracted labels make sense to software developers. This paper investigates (i) to what extent various IR techniques and other simple heuristics overlap with (and differ from) labeling performed by humans; (ii) what kinds of source code terms humans use when labeling software artifacts; and (iii) what factors—in particular, what characteristics of the artifacts to be labeled—influence the performance of automatic labeling techniques. We conducted two experiments in which we asked a group of students (38 in total) to label 20 classes from two Java software systems, JHotDraw and eXVantage. Then, we analyzed to what extent the words identified by automated techniques—including Vector Space Models, Latent Semantic Indexing (LSI), Latent Dirichlet Allocation (LDA), as well as customized heuristics extracting words from specific source code elements—overlap with those identified by humans. Results indicate that, in most cases, simpler automatic labeling techniques—based on words extracted from class and method names as well as from class comments—better reflect human-based labeling. Clustering-based approaches (LSI and LDA) are more worthwhile for source code artifacts with high verbosity, as well as for artifacts requiring more effort to label manually. The obtained results help to define guidelines for building effective automatic labeling techniques, and provide some insights on the actual usefulness of automatic labeling during program comprehension tasks.
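
A minimal version of the kind of simple heuristic that performed well: split identifiers on camelCase and underscores, merge them with comment words, drop stop words, and keep the most frequent terms as labels. The stop-word list, ranking, and example class are illustrative assumptions, not the exact heuristics evaluated in the paper.

import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "and", "in", "for", "is", "this", "get", "set"}

def split_identifier(name):
    """Split camelCase / snake_case identifiers into lowercase words."""
    parts = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return [w.lower() for w in parts.split() if w]

def label_artifact(class_name, method_names, comments, top_n=5):
    """Rank candidate label words by frequency across names and comments."""
    words = split_identifier(class_name)
    for m in method_names:
        words += split_identifier(m)
    for c in comments:
        words += [w.lower() for w in re.findall(r"[A-Za-z]+", c)]
    counts = Counter(w for w in words if w not in STOP and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    print(label_artifact(
        "PolygonFigure",                                        # hypothetical class
        ["addPoint", "removePoint", "drawPolygon", "containsPoint"],
        ["A figure that draws a closed polygon from a list of points."]))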

14.
15.
T. Shermer. Computing, 1989, 42(2-3): 109-131
A hidden set is a set of points such that no two points in the set are visible to each other. A hidden guard set is a hidden set that is also a guard set. In this paper we consider the problem of finding hidden sets and hidden guard sets in and around polygons. In particular, we establish bounds on the maximum size of hidden sets and show that the problem of finding a maximum hidden set is NP-hard. Similar results are obtained for minimum hidden guard sets.

16.
Lithium-ion battery cycle life prediction with a particle filter (PF) depends on a physical or empirical model. However, in a model-based observation equation, adaptability and accuracy for individual batteries under different operating conditions are not fully considered. Therefore, a novel fusion prognostic framework is proposed, in which a data-driven time series prediction model is adopted as the observation equation and combined with the PF algorithm for lithium-ion battery cycle life prediction. First, the nonlinear degradation feature of lithium-ion battery capacity fade is analyzed, and a nonlinear accelerated degradation factor is extracted to improve the prediction ability of the linear AR model, yielding an optimized nonlinear degradation autoregressive (ND–AR) time series model for remaining useful life (RUL) estimation of lithium-ion batteries. The ND–AR model is then used for multi-step prediction of the battery capacity degradation states. Finally, to improve the uncertainty representation ability of the standard PF algorithm, a regularized particle filter is applied to build the fusion RUL estimation framework. Experimental results with lithium-ion battery test data from NASA and CALCE (the Center for Advanced Life Cycle Engineering, University of Maryland) show that the proposed fusion prognostic approach can effectively predict the battery RUL, with more accurate forecasting results and an uncertainty representation in the form of a probability density function (pdf).
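
The fusion idea can be grounded with a bare-bones sampling-importance-resampling particle filter that tracks the parameters of a simple exponential capacity-fade model and then extrapolates to a failure threshold. The exponential model, Gaussian likelihood, thresholds, and synthetic data below are assumptions for illustration; the ND-AR observation model and the regularization step of the paper are not reproduced.

import numpy as np

def particle_filter_rul(capacity, n_particles=500, obs_std=0.02,
                        fail_threshold=0.7, horizon=300, seed=0):
    """Track (a, b) in C_k ~ a * exp(b * k), then extrapolate to the failure threshold."""
    rng = np.random.default_rng(seed)
    a = rng.normal(1.0, 0.05, n_particles)        # initial particle cloud
    b = rng.normal(-1e-3, 5e-4, n_particles)
    for k, c_k in enumerate(capacity):
        a += rng.normal(0, 1e-3, n_particles)     # random-walk parameter evolution
        b += rng.normal(0, 1e-5, n_particles)
        pred = a * np.exp(b * k)
        w = np.exp(-0.5 * ((c_k - pred) / obs_std) ** 2) + 1e-300   # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)             # multinomial resampling
        a, b = a[idx], b[idx]
    # RUL: first future cycle whose predicted capacity drops below the threshold.
    k_now = len(capacity)
    future = np.arange(k_now, k_now + horizon)
    pred = a[:, None] * np.exp(b[:, None] * future[None, :])
    below = pred < fail_threshold
    rul = np.where(below.any(axis=1), below.argmax(axis=1), horizon)
    return np.median(rul), np.percentile(rul, [5, 95])

if __name__ == "__main__":
    cycles = np.arange(120)
    true_cap = 1.0 * np.exp(-2.5e-3 * cycles)     # synthetic fade, not NASA/CALCE data
    noisy = true_cap + np.random.default_rng(1).normal(0, 0.01, len(cycles))
    print(particle_filter_rul(noisy))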

17.
Dipen Moitra. Algorithmica, 1991, 6(1-6): 624-657
Given a black-and-white image, represented by an array of √n × √n binary-valued pixels, we wish to cover the black pixels with a minimal set of (possibly overlapping) maximal squares. It was recently shown that obtaining a minimum square cover for a polygonal binary image with holes is NP-hard. We derive an optimal parallel algorithm for the minimal square cover problem, which for any desired computation time T in [log n, n] runs on an EREW-PRAM with (n/T) processors. The cornerstone of our algorithm is a novel data structure, the cover graph, which compactly represents the covering relationships between the maximal squares of the image. The size of the cover graph is linear in the number of pixels. This algorithm has applications to problems in VLSI mask generation, incremental update of raster displays, and image compression.
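
For orientation, the sequential dynamic program below computes, for each black pixel, the side of the largest all-black square having that pixel as its bottom-right corner, a common building block for enumerating maximal squares. It is not the parallel EREW-PRAM algorithm and does not build the cover graph; the toy image is an assumption.

def largest_square_ending_at(image):
    """dp[i][j] = side of the largest all-black square whose bottom-right corner is (i, j)."""
    rows, cols = len(image), len(image[0])
    dp = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if image[i][j] == 1:                          # black pixel
                if i == 0 or j == 0:
                    dp[i][j] = 1
                else:
                    dp[i][j] = 1 + min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1])
    return dp

if __name__ == "__main__":
    img = [[1, 1, 1, 0],
           [1, 1, 1, 1],
           [0, 1, 1, 1]]
    for row in largest_square_ending_at(img):
        print(row)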

18.
The most general flowshop scheduling problem is also referred to in the literature as the non-permutation flowshop (NPFS). Current processors allow metaheuristics to cope with the \((n!)^{m}\) combinatorial complexity of NPFS scheduling. After briefly discussing the requirements for a manufacturing layout to be designed and modeled as a non-permutation flowshop, a disjunctive graph (digraph) approach is used to build native solutions. The implementation of an Ant Colony Optimization (ACO) algorithm is described in detail; it is shown how the biologically inspired mechanisms produce eligible schedules, as opposed to most metaheuristic approaches, which improve permutation solutions. The ACO algorithm is an example of native non-permutation (NNP) solution construction for the flowshop scheduling problem, opening a new perspective on building purely native approaches. The proposed NNP-ACO has been assessed against existing native approaches, improving most makespan upper bounds of the benchmark problems from Demirkol et al. (1998).

19.
In this paper, a real-time stochastic optimal control method for traffic signals is modified, and an H-GA-PSO algorithm is proposed to search for optimal traffic signal settings based on the stochastic model. The H-GA-PSO algorithm is a modified Hierarchical Particle Swarm Optimization (H-PSO) algorithm that incorporates Genetic Algorithm (GA) processing. Finally, the effectiveness of the stochastic optimal control method with the H-GA-PSO algorithm is shown through simulations of multiple intersections using a micro-traffic simulator.

20.
In this paper we propose an adaptive genetic algorithm that produces good quality solutions to the time dependent inventory routing problem (TDIRP), in which inventory control and time dependent vehicle routing decisions for a set of retailers are made simultaneously over a specific planning horizon. This work is motivated by the effect of dynamic traffic conditions in an urban context and the resulting inventory and transportation costs. We provide a mixed integer programming formulation for TDIRP. Since finding optimal solutions for TDIRP is NP-hard, an adaptive genetic algorithm (AGA) is applied. We develop a new genetic representation and design suitable crossover and mutation operators for the improvement phase. We use the adaptive genetic operators proposed by Yun and Gen (Fuzzy Optim Decis Mak 2(2):161–175, 2003) for the automatic setting of the genetic parameter values. The comparison of results shows the significance of the designed AGA and demonstrates its capability of reaching solutions within 0.5 % of the optimum on sets of test problems.
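
The adaptive-parameter idea can be sketched as follows: after each generation the crossover and mutation probabilities are nudged up or down according to whether average offspring fitness improved over the parents. This is a simplified stand-in for the controller of Yun and Gen (2003), and the step sizes, bounds, and fitness history are illustrative assumptions; the TDIRP routing and inventory encoding itself is omitted.

def adapt_rates(p_cross, p_mut, avg_parent_fit, avg_child_fit,
                step=(0.02, 0.005), bounds=((0.5, 0.95), (0.01, 0.2))):
    """Nudge operator probabilities based on the fitness change between generations."""
    improved = avg_child_fit > avg_parent_fit
    dc, dm = step
    p_cross += dc if improved else -dc
    p_mut += dm if improved else -dm
    p_cross = min(max(p_cross, bounds[0][0]), bounds[0][1])
    p_mut = min(max(p_mut, bounds[1][0]), bounds[1][1])
    return p_cross, p_mut

if __name__ == "__main__":
    pc, pm = 0.8, 0.05
    history = [(100.0, 104.0), (104.0, 103.0), (103.0, 110.0)]   # (parents, children) fitness
    for parent_fit, child_fit in history:
        pc, pm = adapt_rates(pc, pm, parent_fit, child_fit)
        print(round(pc, 3), round(pm, 3))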
