Similar Literature
20 similar records found (search time: 587 ms)
1.
The paper considers parallel (synchronous) array sorting algorithms obtained by parallelizing the corresponding sequential sorting algorithms. A classification of synchronous sorting algorithms, based on the sequential scheme, is proposed, and a number of new, efficient synchronous array sorting algorithms are developed. Translated from Kibernetika, No. 6, pp. 67–74, November–December, 1989.
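The abstract does not name the individual algorithms, but a classic example of a synchronous sort obtained by parallelizing a sequential scheme (here, bubble sort) is odd-even transposition sort; a minimal illustrative sketch:

```python
def odd_even_transposition_sort(a):
    """Sort list `a` in place with odd-even transposition sort.

    Each phase compares disjoint index pairs, so all comparisons
    within a phase could run simultaneously on a synchronous
    parallel machine; n phases suffice to sort n elements.
    """
    n = len(a)
    for phase in range(n):
        # Even phases compare (0,1),(2,3),...; odd phases (1,2),(3,4),...
        for i in range(phase % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

Since the pairs in a phase are disjoint, n/2 synchronous processors finish a phase in constant time, giving O(n) parallel time overall.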

2.
Clustering is a popular data analysis and data mining technique. A widely used approach is k-means clustering, which partitions the data into K clusters. However, the k-means algorithm is highly sensitive to its initial state and may converge to a local optimum. This paper presents a new hybrid evolutionary algorithm for the nonlinear partitional clustering problem. The proposed algorithm, called FAPSO–ACO–K, combines FAPSO (fuzzy adaptive particle swarm optimization), ACO (ant colony optimization), and k-means, and can find better cluster partitions. Its performance is evaluated on several benchmark data sets. The simulation results show that the proposed algorithm outperforms PSO, ACO, simulated annealing (SA), the combinations PSO–SA, ACO–SA, and PSO–ACO, the genetic algorithm (GA), tabu search (TS), honey bee mating optimization (HBMO), and plain k-means on the partitional clustering problem.
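The initialization sensitivity that motivates the hybrid can be seen already in plain Lloyd's k-means; the 1-D sketch below (my illustration, not the paper's FAPSO–ACO–K) converges to different fixed points from different starting centroids:

```python
def kmeans(points, k, init_centroids, iters=20):
    """Plain Lloyd's k-means on 1-D points; the result depends on
    init_centroids (illustrative sketch, not the paper's hybrid)."""
    centroids = list(init_centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # an empty cluster keeps its old centroid
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)
```

Starting from [0, 20] the iteration settles on centroids [5, 20], while [0, 10] yields [0, 13.33]: two different local optima, which is exactly why the paper layers global heuristics (FAPSO, ACO) on top of k-means.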

3.
The paper presents the results of studies of logical-structural modeling methods, which have led to the formalization of parallel-sequential algorithms for the synthesis of structured images of technical systems. The studies have produced a mathematical model of hyperintelligence for interaction with a team of developers. Translated from Kibernetika, No. 6, pp. 36–42, November–December, 1989.

4.
Learning Nested Differences of Intersection-Closed Concept Classes
This paper introduces a new framework for constructing learning algorithms. Our methods involve master algorithms which use learning algorithms for intersection-closed concept classes as subroutines. For example, we give a master algorithm capable of learning any concept class whose members can be expressed as nested differences (for example, c1 − (c2 − (c3 − (c4 − c5)))) of concepts from an intersection-closed class. We show that our algorithms are optimal or nearly optimal with respect to several different criteria, including the number of examples needed to produce a good hypothesis with high confidence, the worst-case total number of mistakes made, and the expected number of mistakes made in the first t trials.
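For intuition, membership in a nested difference c1 − (c2 − (c3 − …)) depends only on how many leading concepts contain the point, and the subroutine for an intersection-closed class is the "closure" learner. A hypothetical sketch with integer intervals (an intersection-closed class) as base concepts:

```python
def in_nested_difference(concepts, x):
    """x belongs to c1 - (c2 - (c3 - ...)) iff the longest prefix of
    concepts that all contain x has odd length."""
    depth = 0
    for c in concepts:
        if x in c:
            depth += 1
        else:
            break
    return depth % 2 == 1

def interval_closure(positives):
    """Closure learner for the intersection-closed class of integer
    intervals: the smallest interval containing all positive examples
    (the intersection of all consistent concepts)."""
    return range(min(positives), max(positives) + 1)
```

A master algorithm of the kind described would maintain one closure hypothesis per nesting level and classify via the odd-prefix rule above.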

5.
We propose strictly polynomial-time algorithms for a number of network problems: (a) strength and reinforcement of a network; (b) attack and density of a network; (c) packing, covering, and partitioning of a network into spanning trees. The proposed algorithms are either new or have lower complexity than previously published algorithms. Translated from Kibernetika, No. 2, pp. 67–75, March–April, 1991.

6.
The paper proposes a uniform approach to the formalization of image processing, analysis, and recognition problems. The properties of a family of local algorithms are considered. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 3–13, July–August, 1992.
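A "local algorithm" in this sense computes each output value from a bounded neighborhood of the input; a 3×3 mean filter is a standard illustration (my example, not taken from the paper):

```python
def local_mean_filter(img):
    """3x3 mean filter: each output pixel depends only on a fixed
    local neighborhood of the input, the defining property of a
    local image-processing algorithm. Border pixels average over
    the part of the window that lies inside the image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = sum(vals) / len(vals)
    return out
```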

7.
A.E.  T.N. 《Neurocomputing》2009,72(13-15):3000
This article presents some efficient training algorithms based on conjugate gradient optimization methods. In addition to the existing conjugate gradient training algorithms, we introduce Perry's conjugate gradient method as a training algorithm [A. Perry, A modified conjugate gradient algorithm, Operations Research 26 (1978) 26–43]. Perry's method has proven very efficient in the context of unconstrained optimization, but it has never been used in MLP training. Furthermore, a new class of conjugate gradient (CG) methods is proposed, called self-scaled CG methods, which are derived from the principles of the Hestenes–Stiefel, Fletcher–Reeves, Polak–Ribière, and Perry methods. This class is based on the spectral scaling parameter introduced in [J. Barzilai, J.M. Borwein, Two point step size gradient methods, IMA Journal of Numerical Analysis 8 (1988) 141–148]; the spectral scaling parameter captures second-order information without estimating the Hessian matrix. We also incorporate into the CG training algorithms an efficient line search technique based on the Wolfe conditions and on safeguarded cubic interpolation [D.F. Shanno, K.H. Phua, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software 2 (1976) 87–94]. In addition, the initial learning rate parameter fed to the line search technique is automatically adapted at each iteration by a closed formula proposed in [D.F. Shanno, K.H. Phua, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software 2 (1976) 87–94; D.G. Sotiropoulos, A.E. Kostopoulos, T.N. Grapsa, A spectral version of Perry's conjugate gradient method for neural network training, in: D.T. Tsahalis (Ed.), Fourth GRACM Congress on Computational Mechanics, vol. 1, 2002, pp. 172–179]. Finally, an efficient restarting procedure is employed to further improve the effectiveness of the CG training algorithms.
Experimental results show that, in general, the new class of methods performs better, with a much lower computational cost and a higher success rate.
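To make the CG iteration concrete, here is a minimal Fletcher–Reeves sketch with a simple Armijo backtracking line search and periodic restarts; the paper's actual algorithms use Wolfe-condition line searches, safeguarded cubic interpolation, and spectral scaling, none of which is reproduced here:

```python
def fletcher_reeves(f, grad, x, iters=100, restart=10):
    """Nonlinear conjugate gradient (Fletcher-Reeves beta) with an
    Armijo backtracking line search and restarts every `restart`
    iterations (illustrative sketch only)."""
    g = grad(x)
    d = [-gi for gi in g]
    for k in range(iters):
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                  # safeguard: not a descent direction
            d = [-gi for gi in g]
            slope = -sum(gi * gi for gi in g)
        t, fx = 1.0, f(x)
        # backtrack until the Armijo sufficient-decrease condition holds
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        denom = sum(gi * gi for gi in g)
        if (k + 1) % restart == 0 or denom == 0.0:
            beta = 0.0                  # restart with steepest descent
        else:
            beta = sum(gi * gi for gi in g_new) / denom   # Fletcher-Reeves
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

In MLP training, `x` would be the flattened weight vector and `f`/`grad` the training error and its backpropagated gradient.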

8.
The growing need for high-performance embedded processors on reconfigurable computing platforms increases the pressure to develop design methods and tools. One important issue in mapping algorithms into hardware is configuring the algorithms to fit the particular hardware structure, the available area and configuration, together with the timing parameters. This paper presents an overview of a new synthesis method, the Iso-plane method, based on the polytope model of an algorithm, which increases parallelism and facilitates configurability in regular array design via algebraic transformations such as associativity and commutativity. The paper presents a variety of new regular and scalable array solutions with improved performance and better layouts, including motherboards with daughter boards.

9.
We consider the analysis of specifications in the form of functional equations and the synthesis of programs from such equational specifications. A systematic discussion of previously known and new methods for the solution of these analysis and synthesis problems is presented. Translated from Kibernetika, No. 1, pp. 19–29, January–February, 1989.

10.
We present new algorithms for determining optimal strategies for two-player games with probabilistic moves and reachability winning conditions. Such games, known as simple stochastic games, were extensively studied by A. Condon [Anne Condon, The complexity of stochastic games, Information and Computation, 96(2):203–224, 1992; Anne Condon, On algorithms for simple stochastic games, in: Jin-Yi Cai (Ed.), Advances in Computational Complexity Theory, volume 13 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 51–73, AMS, 1993]. Many interesting problems, including parity games and hence also mu-calculus model checking, can be reduced to simple stochastic games. It is an open problem whether simple stochastic games can be solved in polynomial time. Our algorithms determine the optimal expected payoffs in the game. We use a geometric interpretation of the search space as a subset of the hypercube [0,1]^N. The main idea is to divide this set into convex subregions in which linear optimization methods can be used, and we show how to proceed from one subregion to the next so that, eventually, a region containing the optimal payoffs is found. The total number of subregions is exponential in the size of the game but, in practice, the algorithms need to visit only a few of them to find a solution. We believe that our new algorithms could provide new insights into the difficult problem of determining the algorithmic complexity of simple stochastic games and other, equivalent problems.
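A standard baseline against which such algorithms are measured is value iteration on the game's one-step operator; a sketch under the usual conventions (uniform coin-toss "average" vertices, payoff-labelled sinks), not the subregion algorithm of the paper:

```python
def value_iteration(graph, owner, payoff, iters=100):
    """Approximate optimal expected payoffs of a simple stochastic game
    by repeatedly applying the one-step operator (a standard baseline).

    graph[v]  -> list of successor vertices
    owner[v]  -> 'max', 'min', 'avg', or 'sink'
    payoff[v] -> payoff at sink vertices
    """
    val = {v: 0.0 for v in graph}
    for _ in range(iters):
        new = {}
        for v in graph:
            if owner[v] == 'sink':
                new[v] = payoff[v]
            elif owner[v] == 'max':
                new[v] = max(val[w] for w in graph[v])
            elif owner[v] == 'min':
                new[v] = min(val[w] for w in graph[v])
            else:                       # 'avg': uniform random vertex
                new[v] = sum(val[w] for w in graph[v]) / len(graph[v])
        val = new
    return val
```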

11.
Competitive randomized algorithms for nonuniform problems
Competitive analysis is concerned with comparing the performance of on-line algorithms with that of optimal off-line algorithms. In some cases randomization can lead to algorithms with improved performance ratios on worst-case sequences. In this paper we present new randomized on-line algorithms for snoopy caching and the spin-block problem. These algorithms achieve competitive ratios approaching e/(e−1) ≈ 1.58 against an oblivious adversary. These ratios are optimal and are a surprising improvement over the best possible ratio in the deterministic case, which is 2. We also consider the situation when the request sequences for these problems are generated according to an unknown probability distribution. In this case we show that deterministic algorithms that adapt to the observed request statistics also have competitive factors approaching e/(e−1). Finally, we obtain randomized algorithms for the 2-server problem on a class of isosceles triangles. These algorithms are optimal against an oblivious adversary and have competitive ratios that approach e/(e−1), compared with the ratio of 3/2 that can be achieved on an equilateral triangle. Supported in part by the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), an NSF Science and Technology Center funded under NSF Contract STC-88-09648 and supported by the New Jersey Commission on Science and Technology.
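The spin-block problem is essentially continuous ski rental (spin at unit cost per time unit versus block at a fixed cost of 1), and the classical randomized strategy achieving e/(e−1) draws its "buy" time from the density e^T/(e−1) on [0, 1]. A numerical check of that ratio (my sketch of the textbook construction, not the paper's algorithm):

```python
import math

def expected_ratio(t, n=50000):
    """Expected cost of the randomized buy-at-random-time strategy
    against an adversary who stops at time t <= 1, divided by the
    off-line optimum OPT = min(t, 1). Midpoint-rule integration."""
    f = lambda T: math.exp(T) / (math.e - 1)   # density of buy time T
    h = 1.0 / n
    cost = 0.0
    for i in range(n):
        T = (i + 0.5) * h
        # bought before the stop: pay rent T plus purchase 1;
        # otherwise: rented for the whole duration t
        c = (T + 1.0) if T < t else t
        cost += c * f(T) * h
    return cost / min(t, 1.0)
```

Analytically the expected cost is t·e/(e−1) for every stopping time t in (0, 1], so the ratio is the same constant against every adversary, which is what makes the strategy optimal.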

12.
The broad availability of multi-core chips in standard desktop PCs provides strong motivation for developing new algorithms for logic model checkers that can take advantage of the additional processing power. With a steady increase in the number of available processing cores, we would like the performance of a model checker to increase as well, ideally linearly. This new trend implies a shift of focus away from cluster computers towards shared-memory systems. In this paper we discuss the multi-core algorithms in development for the SPIN model checker.

13.
A method of space partition based on an equivalence relation is considered. Based on a lexicographic exhaustive search for equivalence classes, algorithms are developed for the solution of a new class of optimization problems, namely, linear conditional Euclidean problems of lexicographic combinatorial optimization. Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 155–125, September–October, 2004.

14.
The properties of a new type of convergence of sequences of random variables are considered. Normalized convergence occupies an intermediate position between convergence in the mean and convergence in probability. It is relevant for studying the rate of convergence of stochastic iterative optimization algorithms and statistical methods of stochastic programming. Translated from Kibernetika i Sistemnyi Analiz, No. 3, pp. 84–92, May–June, 1992.

15.
Algorithms and programs for the normalization of polynomial Hamiltonians of classical mechanics by the Birkhoff–Gustavson and Deprit–Hori methods, as well as quasi-classical quantization procedures for normal forms, are compared. The algorithms and programs are represented in a universal pseudocode and implemented in the computer algebra systems REDUCE, MAPLE, and MATHEMATICA. Examples that illustrate the operation of these algorithms and programs for polynomial Hamiltonians of atomic systems in external electromagnetic fields are considered.

16.
Reduction Techniques for Instance-Based Learning Algorithms

17.
Conclusion. After the burst of enthusiasm caused by the Khachiyan result [5], ellipsoid methods became the center of attention of many optimizers. After some time, only their theoretical value came to be acknowledged, raising doubts about their practical efficiency; and this is valid to an extent if we speak of the MCGM method and its minor modifications. In this paper the authors have attempted to show that ellipsoid methods are just a very particular case of the family of gradient-type algorithms using a space-stretching operation. Other representatives of this family, such as the r-algorithms, are an effective practical means for solving many complex mathematical programming problems that reduce to nondifferentiable optimization. The theory of the whole class of algorithms with space stretching is still far from complete. It seems to us a sufficiently realistic aim to construct an algorithm that would be no less practically efficient than the r-algorithm and would have as good a foundation as the method of ellipsoids. Translated from Kibernetika, No. 5, pp. 61–69, September–October, 1982.

18.
The problem of efficient use and storage of data in memory is considered. Three compression algorithms for ordered sequences of records are proposed, which are substantially faster than statistical algorithms. The running time of the proposed algorithms is linear in the number of records. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 121–128, November–December, 1991.
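The abstract does not spell out the algorithms, but a representative linear-time scheme for ordered records is front coding, which stores each record as the length of the prefix it shares with its predecessor plus the remaining suffix (my illustration, not necessarily one of the paper's three algorithms):

```python
def front_encode(words):
    """Front coding of a sorted sequence: each record becomes
    (shared-prefix length, suffix). One pass, linear in total length."""
    out, prev = [], ""
    for w in words:
        k = 0
        while k < min(len(prev), len(w)) and prev[k] == w[k]:
            k += 1
        out.append((k, w[k:]))
        prev = w
    return out

def front_decode(encoded):
    """Inverse of front_encode, also a single linear pass."""
    words, prev = [], ""
    for k, suffix in encoded:
        w = prev[:k] + suffix
        words.append(w)
        prev = w
    return words
```

Because the input is ordered, adjacent records share long prefixes, so the scheme compresses well without any statistical modelling.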

19.
The training time of an ANN depends on the size of the ANN (i.e., the number of hidden layers and the number of neurons in each layer), the size of the training data, its normalization range, the type of input-output mapping of the training patterns, the error function, and the learning algorithm. Efforts have been made in the past to reduce ANN training time by selecting an optimal network and modifying the learning algorithms. In this paper, an attempt has been made to develop a new neuron model using a neuro-fuzzy approach, overcoming problems of ANNs by incorporating the features of fuzzy systems at the neuron level. This synergistic approach fuzzifies the neuron structure, incorporating the features of both simple and higher-order neurons.

20.
A Hybrid Big Bang–Big Crunch (HBB–BC) optimization algorithm is employed for the optimal design of truss structures. HBB–BC is compared to the Big Bang–Big Crunch (BB–BC) method and to other optimization methods, including the genetic algorithm, ant colony optimization, particle swarm optimization, and harmony search. Numerical results demonstrate the efficiency and robustness of the HBB–BC method compared to other heuristic algorithms.
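In outline, a Big Bang–Big Crunch iteration scatters candidate designs around the current centre with shrinking spread (big bang) and then contracts to a fitness-weighted centre of mass (big crunch). A minimal continuous-variable sketch with hypothetical parameters; it assumes a non-negative objective and omits the PSO-style hybridization and the truss constraints of the paper:

```python
import random

def big_bang_big_crunch(f, dim, bounds, iters=60, pop=40, seed=1):
    """Minimal BB-BC sketch minimizing f over a box (illustrative)."""
    rng = random.Random(seed)
    lo, hi = bounds
    centre = [rng.uniform(lo, hi) for _ in range(dim)]
    best, fbest = centre, f(centre)
    for k in range(1, iters + 1):
        spread = (hi - lo) / k              # shrink the bang over time
        cands = [[min(hi, max(lo, c + rng.gauss(0, spread))) for c in centre]
                 for _ in range(pop)]
        # big crunch: centre of mass weighted by 1/fitness
        # (lower objective value -> more weight; assumes f >= 0)
        weights = [1.0 / (1e-12 + f(c)) for c in cands]
        total = sum(weights)
        centre = [sum(w * c[i] for w, c in zip(weights, cands)) / total
                  for i in range(dim)]
        for c in cands:
            if f(c) < fbest:
                best, fbest = c, f(c)
    return best, fbest
```

In the truss setting the decision variables would be member cross-sectional areas and f a penalized structural weight; the hybrid HBB–BC of the paper adds further search operators on top of this basic loop.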


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)    京ICP备09084417号-23
