Similar Documents
 Found 10 similar documents (search time: 109 ms)
1.
The paper presents a survey of methods for constructing covering arrays, which are used to generate tests for interfaces with a large number of parameters. The application domains of these methods and of the algorithms they employ are analyzed. Specific characteristics of the methods, including time complexity and estimates of the required memory, are given. Algorithms for constructing covering arrays are described, including direct, recursive, optimization-based, genetic, and backtracking approaches. Finally, heuristics that reduce array size without loss of coverage are presented, and their domains of applicability are outlined.
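As a concrete illustration of the problem the surveyed methods solve, the following is a minimal greedy sketch of strength-2 (pairwise) covering array construction; it is not one of the surveyed algorithms, and the function name and exhaustive candidate search are choices made here for clarity, not taken from the paper.

```python
from itertools import combinations, product

def greedy_pairwise(levels):
    """Greedily build a pairwise (strength-2) covering array.

    levels: number of values for each parameter, e.g. [2, 3, 2].
    Returns a list of rows such that every pair of values of every
    pair of parameters occurs in at least one row. Greedy, so the
    result is small but not guaranteed minimal.
    """
    k = len(levels)
    # All (param_i, param_j, value_a, value_b) pairs still to cover.
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(k), 2)
                 for a in range(levels[i])
                 for b in range(levels[j])}
    rows = []
    while uncovered:
        best_row, best_gain = None, -1
        # Exhaustive candidate scan: fine for tiny inputs only;
        # real constructions use heuristics to avoid this blowup.
        for cand in product(*(range(v) for v in levels)):
            gain = sum(1 for (i, j, a, b) in uncovered
                       if cand[i] == a and cand[j] == b)
            if gain > best_gain:
                best_row, best_gain = cand, gain
        rows.append(best_row)
        uncovered -= {(i, j, a, b) for (i, j, a, b) in uncovered
                      if best_row[i] == a and best_row[j] == b}
    return rows
```

Each iteration covers at least one remaining pair, so the loop terminates; the array size is the number of test cases needed instead of the full Cartesian product.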

2.
This paper is concerned with enabling parallel, high-performance computation, in particular the development of scientific software in the network-aware programming language Java. Traditionally, this kind of computing was done in Fortran. Arguably, Fortran is becoming a marginalized language, with limited economic incentive for vendors to produce modern development environments, optimizing compilers for new hardware, or the other associated software expected by today's programmers. Hence, Java looks like a very promising alternative for the future. The paper discusses in detail a particular environment called HPJava, an environment for parallel programming, especially data-parallel scientific programming, in Java. HPJava is based on a small set of language extensions designed to support parallel computation with distributed arrays, plus a set of communication libraries. A high-level communication API, Adlib, is developed as an application-level communication library for HPJava. This library supports collective operations on distributed arrays. We include Java Object as one of the Adlib communication data types, so we fully support communication of intrinsic Java types, including primitive types, as well as Java object types.
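To make the notion of a distributed array concrete, here is a small sketch of block distribution, the index-space partitioning that data-parallel environments of this kind apply to arrays; the function below is an illustration written for this summary, not HPJava's actual API.

```python
def block_ranges(n, p):
    """Index ranges when an array of n elements is block-distributed
    over p processes: each process owns one contiguous slice, with
    any remainder spread over the first few processes.
    """
    base, rem = divmod(n, p)
    ranges, start = [], 0
    for rank in range(p):
        size = base + (1 if rank < rem else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

A collective operation on such an array (e.g. a global sum) then combines each process's contribution over its own slice, which is the kind of service a library like Adlib provides.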

3.
Concept learning provides a natural framework in which to place the problems solved by the quantum algorithms of Bernstein-Vazirani and Grover. By combining the tools used in these algorithms (quantum fast transforms and amplitude amplification) with a tool that is novel in this context (a solution method for geometrical optimization problems), we derive a general technique for quantum concept learning. We name this technique “Amplified Impatient Learning” and apply it to construct quantum algorithms that solve two new problems, Battleship and Majority, more efficiently than is possible classically.
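One of the two ingredients named above, amplitude amplification, can be illustrated with a toy state-vector simulation of Grover search over a single marked item; this is a standard textbook sketch, not the paper's Amplified Impatient Learning technique.

```python
import math

def grover(n_items, marked, iters=None):
    """Classically simulate Grover amplitude amplification over
    n_items basis states with one marked index; return the final
    measurement probabilities."""
    amp = [1 / math.sqrt(n_items)] * n_items      # uniform superposition
    if iters is None:
        # The usual near-optimal iteration count, ~ (pi/4) * sqrt(N).
        iters = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iters):
        amp[marked] = -amp[marked]                # oracle: flip marked phase
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]         # inversion about the mean
    return [a * a for a in amp]
```

After about sqrt(N) iterations the marked item's probability is close to 1, versus the N/2 expected queries of classical search.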

4.
Large-scale, multidisciplinary engineering designs are difficult due to the complexity and dimensionality of the underlying problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time-consuming because of the complexity of the underlying simulation codes. One way of tackling this problem is to construct computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiments (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, which provide a collection of approximation algorithms to build the surrogates, and three DOE techniques (full factorial (FF), Latin hypercube sampling, and central composite design) are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating the database grows rapidly. A data-driven approach is proposed to tackle this situation, in which the expensive simulation is run if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
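The database-reuse trick described above (run the expensive simulation only when no nearby point is already stored) can be sketched as a caching wrapper; the function name, tolerance parameter, and linear-scan lookup are choices made here for illustration, not details from the paper.

```python
import math

def make_cached_sim(expensive_sim, tol=1e-3):
    """Wrap an expensive simulation: if a previously evaluated design
    point lies within Euclidean distance tol of the query, reuse its
    stored response instead of re-running. Linear scan for clarity;
    a spatial index would be used for large databases.
    """
    database = []  # list of (design_point, response) pairs

    def sim(x):
        for xp, yp in database:
            if math.dist(x, xp) <= tol:
                return yp              # nearby point exists: skip the run
        y = expensive_sim(x)           # no nearby point: run and store
        database.append((tuple(x), y))
        return y

    sim.database = database
    return sim
```

As optimizations accumulate, the database grows and an ever larger fraction of queries is answered without invoking the simulation, which is exactly the effect the abstract reports.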

5.
We develop a simple mapping technique to design linear systolic arrays. The basic idea is to map the computations of a certain class of two-dimensional systolic arrays onto one-dimensional arrays. Using this technique, systolic algorithms are derived for problems such as matrix multiplication and transitive closure on linearly connected arrays of PEs with constant I/O bandwidth. Compared to known designs in the literature, our technique leads to modular systolic arrays with constant hardware in each PE, few control lines, lexicographic data input/output, and improved delay time. The unidirectional flow of control and data in our design permits implementation of the linear array under the known fault models of Wafer Scale Integration.

6.
Most existing studies of 2D problems in structural topology optimization are based on a given (limit on the) volume fraction or some equivalent formulation. The present note considers simultaneous optimization with respect to both topology and volume fraction, termed here “extended optimality”. It is shown that the optimal volume fraction in such problems may, in extreme cases, be unity or may tend to zero. The proposed concept is used to explain certain “quasi-2D” solutions, and an extension to 3D systems is also suggested. Finally, the relevance of Voigt’s bound to extended optimality is discussed.

7.
In this paper, we consider the implementation of a product c = Ab, where A is an N1 × N3 band matrix with bandwidth ω and b is a vector of size N3 × 1, on bidirectional and unidirectional linear systolic arrays (BLSA and ULSA, respectively). We distinguish the cases 1 ≤ ω ≤ N3 and N3 < ω ≤ N1 + N3 − 1. A modification of the systolic array synthesis procedure, based on data dependencies and space-time transformations of the data dependency graph, is proposed. The modification yields both a BLSA and a ULSA with an optimal number of processing elements (PEs) regardless of the matrix bandwidth, and the execution time of the synthesized arrays has been minimized. We derive explicit formulas for the synthesis of these arrays. The performance of the designed arrays is discussed and compared to that of arrays obtained by the standard design procedure.
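The computation being mapped onto the arrays is an ordinary band matrix-vector product; a sequential sketch that touches only entries inside the band is given below. The dense storage and the split of the bandwidth into lower and upper half-widths (w_l, w_u, with total bandwidth w_l + w_u + 1) are conventions chosen here for clarity, not the paper's formulation.

```python
def band_matvec(A, b, w_l, w_u):
    """Compute c = A b for a band matrix A (stored dense here for
    clarity): A[i][j] may be nonzero only when i - w_l <= j <= i + w_u.
    Only O(n1 * (w_l + w_u + 1)) multiply-adds are performed, the work
    a systolic array for this problem distributes over its PEs.
    """
    n1, n3 = len(A), len(A[0])
    c = [0.0] * n1
    for i in range(n1):
        for j in range(max(0, i - w_l), min(n3, i + w_u + 1)):
            c[i] += A[i][j] * b[j]
    return c
```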

8.

Systolic algorithms for preconditioned iterative procedures are described and discussed in relation to existing systolic arrays for iterative methods applied to the systems of linear equations arising from the solution of partial differential equations. For boundary value problems it is shown that, when the hardware cost of the preconditioned systolic arrays is compared with that of the (standard) iterative methods, savings in the number of array cells can be made when the system involved is large and sparse (narrow bandwidth), with a significant improvement in convergence rate.
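For orientation, the simplest member of the family of iterative methods in question is Jacobi iteration, which applies a diagonal preconditioner at each step; the sequential sketch below shows the arithmetic the systolic arrays pipeline, and is not the paper's array design.

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration x <- D^{-1} (b - (A - D) x) for A x = b,
    where D is the diagonal of A. Dense storage for clarity;
    converges for diagonally dominant systems such as those from
    simple PDE discretizations.
    """
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i]
             for i in range(n)]
    return x
```

A stronger preconditioner reduces the iteration count, which is the convergence-rate improvement the abstract refers to; the paper's contribution is realizing such preconditioned sweeps with fewer array cells.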

9.
While it has been suggested that patterning activities support early algebra learning, it is widely acknowledged that the shift from perceiving patterns to understanding algebraic functions (and, correspondingly, from reporting empirical patterns to providing explanations) is difficult. This paper reports on the collaborations of grade 4 students (n = 68) from three classrooms in diverse urban settings, connected through a knowledge-building environment (Knowledge Forum), while solving mathematical generalizing problems as part of an early algebra research project. The purpose of this study was to investigate the underlying principles of idea improvement and epistemic agency, and the potential of knowledge building, as supported by Knowledge Forum, to support student work. Our analyses of student-generated collaborative workspaces revealed that students were able to find multiple rules for challenging problems and to revise their own conjectures regarding those rules. Furthermore, the discourse was sustained over 8 weeks, and students were able to find similarities across problem types without the support of teachers or researchers, suggesting that these grade 4 students had developed a disposition toward evidence use and justification that eludes much older students.

10.
Ed Blakey, Natural Computing, 2011, 10(4): 1245-1259
Unconventional computers, which may, for example, exploit chemical, analogue, or quantum phenomena in order to compute rather than electronically implementing discrete logic gates, are widely studied in both theoretical and practical contexts. One particular motivation behind unconventional computation is the desire to solve classically difficult problems efficiently; we recall chemical-computer attempts at solving NP-complete problems such as the Travelling Salesperson Problem. Computational complexity theory offers the criteria for judging this efficiency. However, care must be taken here: conventional (Turing-machine-style) complexity analysis is not always appropriate for unconventional computers, and new, non-standard computational resources, with correspondingly new complexity measures, are often required. Accordingly, we discuss in the present paper various resources beyond merely time and space (and, indeed, various interpretations of the term ‘resource’ itself), advocating the consideration of such resources when analysing the complexity of unconventional computers. We hope that this acts as a useful starting point for practitioners of unconventional computing and computational complexity.
