Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
Covering arrays are used for generating tests for interfaces with a large number of parameters. In this paper, a new method is described for constructing homogeneous and heterogeneous covering arrays that is based on a combination of combinatorial and optimization methods. In a wide class of particular cases, the method speeds up the construction of arrays several-fold (depending on the case) compared with well-known, widely used optimization methods. In most cases, the sizes of the arrays obtained are approximately the same as those of arrays constructed by other optimization methods; in a number of particular cases, arrays smaller by 5–15% could be obtained. The application range of the new method is analyzed.
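To make the object concrete, here is a tiny hand-made example, not produced by the paper's method: for three binary parameters, four tests already cover every pair of parameter values, as the following snippet verifies.

from itertools import combinations, product

# A strength-2 (pairwise) covering array for three binary parameters:
# every pair of columns contains all four value combinations.
CA = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def covers_all_pairs(array, levels=2):
    # Check that every pair of columns covers every pair of values.
    k = len(array[0])
    for c1, c2 in combinations(range(k), 2):
        seen = {(row[c1], row[c2]) for row in array}
        if seen != set(product(range(levels), repeat=2)):
            return False
    return True

print(covers_all_pairs(CA))  # True: 4 tests instead of the 2**3 = 8 exhaustive ones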

2.
Practical Combinatorial Testing: Beyond Pairwise
With new algorithms and tools, developers can apply high-strength combinatorial testing to detect elusive failures that occur only when multiple components interact. In pairwise testing, all possible pairs of parameter values are covered by at least one test, and good tools are available to generate arrays with the value pairs. In the past few years, advances in covering-array algorithms, integrated with model checking or other testing approaches, have made it practical to extend combinatorial testing beyond pairwise tests. The US National Institute of Standards and Technology (NIST) and the University of Texas at Arlington are now distributing freely available methods and tools for constructing large t-way combination test sets (known as covering arrays), converting covering arrays into executable tests, and automatically generating test oracles using model checking (http://csrc.nist.gov/acts). In this review, we focus on real-world problems and empirical results from applying these methods and tools.
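The tools distributed by NIST implement far more sophisticated algorithms (e.g., IPOG); purely as an illustration of the greedy one-test-at-a-time idea behind many covering-array generators, the following sketch (function name and parameter choices invented here) repeatedly keeps the random candidate test that covers the most still-uncovered value pairs.

from itertools import combinations
import random

def greedy_pairwise(levels, candidates=50, seed=0):
    # levels[i] is the number of values of parameter i.
    rng = random.Random(seed)
    k = len(levels)
    uncovered = {((c1, c2), (v1, v2))
                 for c1, c2 in combinations(range(k), 2)
                 for v1 in range(levels[c1]) for v2 in range(levels[c2])}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for _ in range(candidates):  # sample candidates, keep the best scorer
            t = tuple(rng.randrange(levels[i]) for i in range(k))
            gain = sum(1 for c1, c2 in combinations(range(k), 2)
                       if ((c1, c2), (t[c1], t[c2])) in uncovered)
            if gain > best_gain:
                best, best_gain = t, gain
        tests.append(best)
        for c1, c2 in combinations(range(k), 2):
            uncovered.discard(((c1, c2), (best[c1], best[c2])))
    return tests

print(len(greedy_pairwise([2, 3, 2, 2])))  # far fewer than the 24 exhaustive tests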

3.
Covering array generation is one of the important research problems in combinatorial testing, and many mathematical methods, greedy algorithms, and search algorithms have been applied to it. Ant colony optimization, an evolutionary search algorithm that is effective for combinatorial optimization problems, has also been applied to covering array generation. Existing work shows that ant colony algorithms are suitable for generating ordinary covering arrays, generating variable-strength covering arrays, and ordering covering arrays, but their results are not competitive with those of other covering array generation methods. To further explore the potential of ant colony algorithms for covering array generation, this paper makes improvements at four levels: (1) integrating algorithm variants; (2) optimizing the algorithm's parameter configuration; (3) adjusting the structure of the evolved objects and improving the evolution strategy; (4) using parallel computing to reduce the algorithm's running time. Experimental results show that, with these four levels of improvement, the performance of ant colony algorithms for covering array generation is significantly enhanced.
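As a rough sketch of how an ant colony algorithm can drive covering array generation (heavily simplified, not the four-level improved algorithm of the paper; names and parameters are invented for illustration): pheromone values bias the value each ant picks per parameter, and rows that cover many uncovered pairs are reinforced.

from itertools import combinations
import random

def aco_pairwise(levels, ants=20, rho=0.1, seed=0):
    rng = random.Random(seed)
    k = len(levels)
    uncovered = {((c1, c2), (v1, v2))
                 for c1, c2 in combinations(range(k), 2)
                 for v1 in range(levels[c1]) for v2 in range(levels[c2])}
    tau = [[1.0] * levels[c] for c in range(k)]   # pheromone per (parameter, value)
    def gain(row):
        return sum(1 for c1, c2 in combinations(range(k), 2)
                   if ((c1, c2), (row[c1], row[c2])) in uncovered)
    rows = []
    while uncovered:
        # Each ant assembles a row cell by cell, guided by pheromone.
        colony = [tuple(rng.choices(range(levels[c]), weights=tau[c])[0]
                        for c in range(k)) for _ in range(ants)]
        best = max(colony, key=gain)
        g = gain(best)
        for c in range(k):                         # evaporate, then reinforce
            tau[c] = [(1 - rho) * t for t in tau[c]]
            tau[c][best[c]] += g
        rows.append(best)
        for c1, c2 in combinations(range(k), 2):
            uncovered.discard(((c1, c2), (best[c1], best[c2])))
    return rows

print(len(aco_pairwise([3, 3, 3, 3])))  # a pairwise array for four ternary parameters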

4.
Large-scale, multidisciplinary engineering designs are always difficult due to the complexity and dimensionality of these problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is to construct computationally cheap(er) approximations of the expensive simulations that mimic their behavior as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, that provide a collection of approximation algorithms to build the surrogates, and three different DOE techniques, full factorial (FF), Latin hypercube sampling, and central composite design, are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating it grows rapidly. A data-driven approach is proposed to tackle this situation, where the trick is to run the expensive simulation if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
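The "run the expensive simulation only if no nearby point exists" trick fits in a few lines; here is a minimal sketch with an invented interface and a naive fixed-radius rule (the paper's SURFPACK/SHEPPACK machinery is not reproduced):

import numpy as np

def cached_eval(x, database, expensive_sim, tol=1e-3):
    # Reuse a stored result if a previously evaluated point is within tol.
    for xi, fi in database:
        if np.linalg.norm(np.asarray(x) - xi) < tol:
            return fi
    f = expensive_sim(x)                  # otherwise pay for a real simulation
    database.append((np.asarray(x, dtype=float), f))
    return f

# Toy usage: the 'simulation' here is just a cheap analytic stand-in.
db = []
rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(cached_eval([1.0, 1.0], db, rosenbrock))          # runs the simulation
print(cached_eval([1.0, 1.0 + 1e-4], db, rosenbrock))   # served from the database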

5.
王燕, 聂长海, 钮鑫涛, 吴化尧, 徐家喜. 《软件学报》 (Journal of Software), 2018, 29(12): 3665-3691
Combinatorial testing can effectively detect failures in the system under test that are triggered by interactions among its parameters. Throughout its more than 30 years of development, covering array generation has been one of the key problems, with more than 200 related publications. As an effective covering array generation approach, existing tabu search algorithms have an advantage in the size of the arrays they generate, but there is still room to improve both solution quality and speed; moreover, they are of limited practical use, supporting neither constraint handling nor variable-strength covering arrays. To address these problems, this paper proposes a tabu search algorithm that improves on existing work in three respects: (1) parameter tuning is performed in two stages, pair-wise and hill climbing, so that near-optimal configurations are found with as few configuration trials as possible, further reducing the size of the generated arrays; (2) the algorithm is parallelized to speed up array generation; (3) constraint handling and variable-strength handling are added, so the algorithm can be applied in a variety of testing scenarios. Experimental results show that the algorithm has an advantage in the sizes of fixed-strength, variable-strength, and constrained covering arrays, and that parallelization speeds it up by a factor of about 2.6 on average.
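For illustration, a bare-bones version of the tabu-search formulation commonly used for covering arrays (fix the number of rows, repair single cells, forbid recently reversed moves); this is a simplified sketch, not the tuned, parallel, constraint-aware algorithm proposed in the paper.

from itertools import combinations
import random

def tabu_covering_array(levels, n_rows, iters=5000, tabu_len=50, seed=0):
    rng = random.Random(seed)
    k = len(levels)
    rows = [[rng.randrange(levels[c]) for c in range(k)] for _ in range(n_rows)]

    def uncovered():
        pairs = {((c1, c2), (v1, v2))
                 for c1, c2 in combinations(range(k), 2)
                 for v1 in range(levels[c1]) for v2 in range(levels[c2])}
        for row in rows:
            for c1, c2 in combinations(range(k), 2):
                pairs.discard(((c1, c2), (row[c1], row[c2])))
        return pairs

    tabu, cost = [], len(uncovered())
    for _ in range(iters):
        if cost == 0:
            return rows                        # every pair covered with n_rows tests
        # Pick a random uncovered pair and try single-cell moves toward it.
        (c1, c2), (v1, v2) = rng.choice(sorted(uncovered()))
        best = None
        for r in range(n_rows):
            for c, v in ((c1, v1), (c2, v2)):
                if (r, c, v) in tabu or rows[r][c] == v:
                    continue
                old = rows[r][c]
                rows[r][c] = v
                new_cost = len(uncovered())    # cost = number of uncovered pairs
                rows[r][c] = old
                if best is None or new_cost < best[0]:
                    best = (new_cost, r, c, v, old)
        if best is None:
            continue
        cost, r, c, v, old = best
        rows[r][c] = v
        tabu.append((r, c, old))               # forbid undoing this move for a while
        tabu = tabu[-tabu_len:]
    return None                                # budget exhausted without success

print(tabu_covering_array([2] * 5, n_rows=6) is not None)  # True if found in budget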

6.
Abstract. The construction of full-text indexes on very large text collections is nowadays a hot problem. The suffix array [32] is one of the most attractive full-text indexing data structures due to its simplicity, space efficiency and the powerful, fast search operations it supports. In this paper we analyze, both theoretically and experimentally, the I/O complexity and the working space of six algorithms for constructing large suffix arrays. Three of them are state-of-the-art, the other three are our new proposals. We perform a set of experiments based on three different data sets (English texts, amino-acid sequences and random texts) and give a precise hierarchy of these algorithms according to their working-space versus construction-time tradeoff. Given the current trends in model design [12], [32] and disk technology [29], [30], we pay particular attention to differentiating between "random" and "contiguous" disk accesses, in order to reasonably explain some practical I/O phenomena related to the experimental behavior of these algorithms that would otherwise be meaningless in the light of other, simpler external-memory models. We also address two other issues. The first concerns the problem of building word indexes; we show that our results can be successfully applied to this case too, without any loss in efficiency and without compromising the simplicity of programming, achieving a uniform, simple and efficient approach to both indexing models. The second is the intriguing and apparently counterintuitive "contradiction" between the effective practical performance of the well-known Baeza-Yates–Gonnet–Snider algorithm [17], verified in our experiments, and its unappealing worst-case behavior. We devise a new external-memory algorithm that follows the basic philosophy underlying that algorithm but in a significantly different manner, resulting in a novel approach that combines good worst-case bounds with efficient practical performance.
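As a small in-memory reminder of the data structure itself (the paper's subject is constructing it at external-memory scale, where this naive sort breaks down):

# The suffix array of a text is the lexicographically sorted list of the
# starting positions of its suffixes.
text = "banana"
sa = sorted(range(len(text)), key=lambda i: text[i:])
print(sa)                        # [5, 3, 1, 0, 4, 2]
print([text[i:] for i in sa])    # ['a', 'ana', 'anana', 'banana', 'na', 'nana']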

7.
Concept learning provides a natural framework in which to place the problems solved by the quantum algorithms of Bernstein-Vazirani and Grover. By combining the tools used in these algorithms—quantum fast transforms and amplitude amplification—with a novel (in this context) tool—a solution method for geometrical optimization problems—we derive a general technique for quantum concept learning. We name this technique "Amplified Impatient Learning" and apply it to construct quantum algorithms solving two new problems: Battleship and Majority, more efficiently than is possible classically.

8.
Systolic processing offers the possibility of solving a large number of standard problems on multicellular computing devices with autonomous cells (processing elements, PEs). The resulting systolic arrays exploit the underlying parallelism of many computationally intensive problems and offer a vital and effective way of handling them. Advances in technology, especially in VLSI and FPGAs, have an ongoing contribution to the evolution of systolic arrays. Herein, an FPGA-based systolic array prototype implementing the factorization stage of the Quadrant Interlocking Factorization (QIF, Butterfly) method is presented, and the corresponding time complexities achieved are discussed.

9.
A Quasi-Physical and Quasi-Human Algorithm for Solving the Orthogonal Array Problem
This work continues that of Fang Kai-Tai. Orthogonal arrays are widely used in experiments in manufacturing and high-technology industries, and the construction of orthogonal arrays is currently an active research area; many existing construction methods are complicated, and the types of arrays they can construct are limited. A simple and effective method for constructing orthogonal arrays, a quasi-physical and quasi-human algorithm, is proposed. Using this algorithm, several mutually non-isomorphic L27(3^13) arrays not previously reported have been obtained independently; it is hoped that, with further development, the algorithm will be able to produce many new orthogonal arrays.
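For reference, the smallest nontrivial member of the same family, the classical L9(3^4), together with a check of the defining balance property (illustrative only; the paper's algorithm targets much larger arrays such as L27(3^13)):

from itertools import combinations

# L9(3^4): 9 runs, 4 three-level factors; in every pair of columns,
# each of the 9 level combinations appears exactly once.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

for c1, c2 in combinations(range(4), 2):
    counts = {}
    for row in L9:
        counts[(row[c1], row[c2])] = counts.get((row[c1], row[c2]), 0) + 1
    assert len(counts) == 9 and set(counts.values()) == {1}
print("L9(3^4) verified: every pair of columns is balanced")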

10.
11.
The wavelet decomposition is a proven tool for constructing concise synopses of large data sets that can be used to obtain fast approximate answers. Existing research studies focus on selecting an optimal set of wavelet coefficients to store so as to minimize some error metric, without, however, seeking to reduce the size of the wavelet coefficients themselves. In many real data sets, large spikes in the data values result in many large coefficient values lying on paths of a conceptual tree structure known as the error tree. To exploit this fact, we introduce in this paper a novel compression scheme for wavelet synopses, termed hierarchically compressed wavelet synopses, which fully exploits hierarchical relationships among coefficients in order to reduce their storage. Our proposed compression scheme allows a larger number of coefficients to be stored for a given space constraint, thus increasing the accuracy of the produced synopsis. We propose optimal, approximate and greedy algorithms for constructing hierarchically compressed wavelet synopses that minimize the sum squared error while not exceeding a given space budget. Extensive experimental results on both synthetic and real-world data sets validate our novel compression scheme and demonstrate the effectiveness of our algorithms against existing synopsis construction algorithms. This work has been funded by the project PENED 2003. The project is co-financed 75% of public expenditure through the EC (European Social Fund), 25% of public expenditure through the Ministry of Development, General Secretariat of Research and Technology, and through the private sector, under measure 8.3 of the Operational Programme "Competitiveness" in the 3rd Community Support Programme.
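The transform underlying such synopses is the one-dimensional Haar decomposition; here is a minimal sketch of it (the hierarchical compression scheme of the paper is not reproduced):

def haar_decompose(data):
    # Repeatedly replace the signal by pairwise averages, collecting the
    # pairwise semi-differences as detail coefficients (coarse to fine).
    coeffs, a = [], list(data)
    while len(a) > 1:
        averages = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a), 2)]
        details  = [(a[i] - a[i + 1]) / 2 for i in range(0, len(a), 2)]
        coeffs = details + coeffs
        a = averages
    return a + coeffs    # [overall average, coarsest detail, ..., finest details]

print(haar_decompose([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]

A synopsis keeps only a budgeted subset of these coefficients; the paper's scheme additionally compresses the kept coefficients along error-tree paths.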

12.
During the last few decades, simulation software based on the Finite Element Method (FEM) has significantly contributed to the design of feasible forming processes. Coupling FEM to mathematical optimization algorithms offers a promising opportunity to design optimal metal forming processes rather than merely feasible ones. In this paper, Sequential Approximate Optimization (SAO) for optimizing forging processes is discussed. The algorithm incorporates time-consuming nonlinear FEM simulations. Three variants of the SAO algorithm, which differ in their sequential improvement strategies, have been investigated and compared to other optimization algorithms by application to two forging processes. The other algorithms taken into account are two iterative algorithms (BFGS and SCPIP) and a Metamodel Assisted Evolutionary Strategy (MAES). It is essential for sequential approximate optimization algorithms to implement an improvement strategy that uses as much information obtained during previous iterations as possible. If such a sequential improvement strategy is used, SAO provides a very efficient algorithm for optimizing forging processes using time-consuming FEM simulations.
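A bare-bones one-dimensional trust-region SAO loop, to fix ideas (an invented toy, not one of the three variants studied in the paper; a real implementation would re-use FEM results accumulated in earlier iterations rather than re-sampling each region):

import numpy as np

def sao_minimize(f, x0, radius=1.0, iters=20, shrink=0.5, grow=2.0, eta=0.1):
    x, fx = float(x0), f(float(x0))
    for _ in range(iters):
        xs = np.linspace(x - radius, x + radius, 5)   # cheap DOE in the region
        ys = np.array([f(xi) for xi in xs])
        c2, c1, c0 = np.polyfit(xs, ys, 2)            # quadratic surrogate
        x_new = -c1 / (2 * c2) if c2 > 0 else (xs[0] if ys[0] < ys[-1] else xs[-1])
        x_new = float(np.clip(x_new, x - radius, x + radius))
        f_new = f(x_new)
        predicted = fx - (c2 * x_new**2 + c1 * x_new + c0)
        rho = (fx - f_new) / predicted if predicted > 0 else 0.0
        if rho > eta:        # surrogate predicted well: accept step, grow region
            x, fx, radius = x_new, f_new, radius * grow
        else:                # poor prediction: reject step, shrink region
            radius *= shrink
    return x, fx

print(sao_minimize(lambda x: (x - 3.0)**2 + 1.0, x0=0.0))  # approaches (3.0, 1.0)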

13.
Available methods of constructing Bayesian networks with the use of scoring functions are analyzed. The Cooper–Herskovits and MDL functions are described in detail and used to compare algorithms for constructing Bayesian networks. Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 81–88, March–April 2008.
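For reference, the standard definitions of the two scoring functions named above. The Cooper–Herskovits (Bayesian) score of a structure $B_S$ given a complete data set $D$ of $N$ cases is

\[
P(B_S, D) \;=\; P(B_S)\,\prod_{i=1}^{n}\prod_{j=1}^{q_i}\frac{(r_i-1)!}{(N_{ij}+r_i-1)!}\,\prod_{k=1}^{r_i}N_{ijk}! ,
\]

where $r_i$ is the number of states of variable $X_i$, $q_i$ the number of configurations of its parents, $N_{ijk}$ the number of cases with $X_i$ in state $k$ under parent configuration $j$, and $N_{ij}=\sum_k N_{ijk}$. The MDL score (up to sign conventions) penalizes the log-likelihood by model complexity:

\[
\mathrm{MDL}(B_S \mid D) \;=\; \sum_{i,j,k} N_{ijk}\,\log\frac{N_{ijk}}{N_{ij}} \;-\; \frac{\log N}{2}\sum_{i=1}^{n} q_i\,(r_i-1).
\]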

14.
VLSI technology has had tremendous success in revolutionizing computer design with processor arrays. Local communication and interconnection are constraints that dictate the design of processor arrays. Shared buses and global access to memory are no longer used, since they lower the speed. Consequently, parallel algorithms must be designed according to these constraints.

One of the problems that must be resolved under the above-mentioned constraints is data broadcast elimination. Algorithms must be transformed into a form that uses data propagation instead of data broadcast.

Here, systems of affine recurrence equations are analyzed, and data broadcast is defined in the context of data dependence and affine recurrence equations. A method for data broadcast elimination, introduced in [1], expands the system of affine recurrence equations into new recurrence equations that define data propagation and eliminate the data dependences where data broadcast occurs.

Parallel algorithms are usually given as a set of similar tasks performed repeatedly on different data, and are most commonly presented in iterative form. Several techniques are introduced to transform such algorithms into a single-assignment form of recurrence equations.

Some improvements of these techniques are presented to make the application of the data broadcast elimination method easier and more straightforward. The presented techniques are classified as the transformation of iterative algorithms to a recurrence form, the transformation of the recurrence form to a single-assignment form, and the completion of the index forms of the algorithms.

A system of affine recurrence equations with the data broadcast property is always obtained by applying these procedures. The method of data broadcast elimination successfully transforms this system of affine recurrence equations into a system of uniform recurrence equations, which can be used for parallel implementation on VLSI processor arrays.
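A minimal worked example of this transformation (illustrative, not taken from the paper): in the affine recurrence for a matrix-vector product,

\[
Y(i,j) \;=\; Y(i,j-1) + a(i,j)\,x(j),
\]

the input $x(j)$ is used by every index $i$, i.e., it is broadcast along the $i$ axis. Introducing a propagated copy $X$,

\[
X(1,j) = x(j), \qquad X(i,j) = X(i-1,j) \;\; (i>1), \qquad
Y(i,j) = Y(i,j-1) + a(i,j)\,X(i,j),
\]

replaces the broadcast by the constant dependence vectors $(0,1)$ for $Y$ and $(1,0)$ for $X$; the system is now uniform and maps directly onto nearest-neighbor links of a processor array.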

15.
In this paper, real-time algorithms for constructing the adjacency graph and the spatial geometric relations between contrast objects of color images are proposed, as well as methods of global image analysis based on them, within the scope of the theory developed by the author. Image analysis is conducted based on the graph of color bunches STG and the bipartite graph LRG of left and right contrast boundary curves (germs of contrast global objects) in STG, introduced by the author. An essential point is that in each layer of this graph a linearly ordered covering composed of "basic" color bunches is selected. Based on this covering, a search lattice for solving global problems of image analysis is constructed. The obtained results are applied to finding complex objects in images, in particular to the analysis of road scenes. The developed methods are implemented as a software system; the results of its operation on video sequences taken from a moving vehicle are presented and discussed. The application of the developed technique to the navigation of autonomous robots is also considered.

16.
The numerical solution of time-dependent ordinary and partial differential equations presents a number of well-known difficulties, including, possibly, severe restrictions on time-step sizes for stability in explicit procedures, as well as the need to solve challenging, generally nonlinear systems of equations in implicit schemes. In this note we introduce a novel class of explicit methods based on the use of one-dimensional Padé approximation. These schemes, which are as simple and inexpensive per time step as other explicit algorithms, possess, in many cases, stability properties similar to those offered by implicit approaches. We demonstrate the character of our schemes through application to notoriously stiff systems of ODEs and PDEs. In a number of important cases, use of these algorithms has resulted in orders-of-magnitude reductions in computing times over those required by leading approaches.
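For background, a standard fact that motivates such schemes (not the paper's specific construction): the diagonal $(1,1)$ Padé approximant of the exponential,

\[
R_{1,1}(z) \;=\; \frac{1+z/2}{1-z/2} \;=\; e^{z} + O(z^{3}), \qquad |R_{1,1}(z)| \le 1 \;\text{ for }\; \operatorname{Re} z \le 0,
\]

does not amplify decaying modes (A-stability); it is the stability function of the implicit trapezoidal rule. Rational approximation of this kind is what allows Padé-based time stepping to combine explicit-style cost with implicit-style stability behavior.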

17.
This paper presents inverse subdivision algorithms, with linear time and space complexity, to detect and reconstruct uniform Loop, Catmull–Clark, and Doo–Sabin subdivision structure in irregular triangular, quadrilateral, and polygonal meshes. We consider two main applications for these algorithms. The first is to enable interactive modeling systems that support uniform subdivision surfaces to use popular interchange file formats which do not preserve the subdivision structure, such as VRML, without loss of information. The second is to improve the compression efficiency of existing lossless connectivity compression schemes by optimally compressing meshes with Loop subdivision connectivity. Our Loop inverse subdivision algorithm is based on global connectivity properties of the covering mesh, a concept motivated by the covering surface from algebraic topology. Although the same approach can be used for other subdivision schemes, such as Catmull–Clark, we present a Catmull–Clark inverse subdivision algorithm based on a much simpler graph-coloring algorithm and a Doo–Sabin inverse subdivision algorithm based on properties of the dual mesh. Straightforward extensions of these approaches to other popular uniform subdivision schemes are also discussed.

18.
Metamodeling using extended radial basis functions: a comparative approach
The process of constructing computationally benign approximations of expensive computer simulation codes, or metamodeling, is a critical component of several large-scale multidisciplinary design optimization (MDO) approaches. Such applications typically involve complex models, such as finite elements, computational fluid dynamics, or chemical processes. The decision regarding the most appropriate metamodeling approach usually depends on the type of application. However, several newly proposed kernel-based metamodeling approaches can provide consistently accurate performance for a wide variety of applications. The authors recently proposed one such novel and effective metamodeling approach—the extended radial basis function (E-RBF) approach—and reported highly promising results. To further understand the advantages and limitations of this new approach, we compare its performance to that of the typical RBF approach, and another closely related method—kriging. Several test functions with varying problem dimensions and degrees of nonlinearity are used to compare the accuracies of the metamodels using these metamodeling approaches. We consider several performance criteria such as metamodel accuracy, effect of sampling technique, effect of sample size, effect of problem dimension, and computational complexity. The results suggest that the E-RBF approach is a potentially powerful metamodeling approach for MDO-based applications, as well as other classes of computationally intensive applications.
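For concreteness, a plain Gaussian RBF interpolant, i.e., the baseline "typical RBF approach" the comparison starts from (a minimal numpy sketch; the extended basis functions of E-RBF are not reproduced here):

import numpy as np

def fit_rbf(X, y, eps=1.0):
    # Solve Phi w = y for the weights of a Gaussian RBF interpolant.
    X = np.atleast_2d(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(np.exp(-(eps * r) ** 2), y)
    return lambda x: np.exp(-(eps * np.linalg.norm(X - np.asarray(x), axis=-1)) ** 2) @ w

# Toy usage: fit 40 samples of a smooth 2-D function, then query a new point.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])
surrogate = fit_rbf(X, y, eps=2.0)
print(surrogate([0.3, -0.2]), np.sin(0.9) * np.cos(-0.6))  # approximately equal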

19.
This paper is concerned with enabling parallel, high-performance computation, in particular the development of scientific software, in the network-aware programming language Java. Traditionally, this kind of computing was done in Fortran. Arguably, Fortran is becoming a marginalized language, with limited economic incentive for vendors to produce modern development environments, optimizing compilers for new hardware, or the other kinds of associated software expected by today's programmers. Hence, Java looks like a very promising alternative for the future. The paper discusses in detail a particular environment called HPJava, an environment for parallel programming, especially data-parallel scientific programming, in Java. HPJava is based on a small set of language extensions designed to support parallel computation with distributed arrays, plus a set of communication libraries. A high-level communication API, Adlib, is developed as an application-level communication library suitable for HPJava. This communication library supports collective operations on distributed arrays. We include Java Object as one of the Adlib communication data types, so we fully support communication of intrinsic Java types, including primitive types, and Java object types.

20.
POP: Patchwork of Parts Models for Object Recognition
We formulate a deformable template model for objects with an efficient mechanism for computation and parameter estimation. The data consist of binary oriented edge features, robust to photometric variation and small local deformations. The template is defined in terms of probability arrays for each edge type. A primary contribution of this paper is the definition of the instantiation of an object in terms of shifts of a moderate number of local submodels (parts), which are subsequently recombined using a patchwork operation to define a coherent statistical model of the data. Object classes are modeled as mixtures of patchwork-of-parts (POP) models that are discovered sequentially as more class data is observed. We define the notion of the support associated with an instantiation, and use this to formulate statistical models for multi-object configurations, including possible occlusions. All decisions on the labeling of the objects in the image are based on comparing likelihoods. The combination of a deformable model with an efficient estimation procedure yields competitive results in a variety of applications with very small training sets, without the need to train decision boundaries: only data from the class being trained is used. Experiments are presented on the MNIST database, reading zipcodes, and face detection.
