Similar Documents
 20 similar documents found (search time: 46 ms)
1.
Research on a Non-Comparison Segment Sorting Algorithm   Total citations: 4 (self: 2, by others: 4)
The Non-Comparison Segment Sort (NCSS) algorithm is a non-comparison sorting algorithm built on imitating the way humans think. Both algorithm analysis and experimental results show that the time complexity of NCSS is O(N) and independent of the distribution of the data to be sorted, while the additional storage required is minimal. Its sorting speed is clearly superior to QuickSort, Proportion Split Sort, segmented quicksort, and similar algorithms. NCSS is particularly well suited to settings with very large amounts of data.
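The abstract does not describe how NCSS forms its segments, so the following is not the paper's algorithm; it is a minimal sketch of a standard non-comparison sort (byte-wise LSD radix sort) that exhibits the properties claimed for NCSS: O(N) time independent of the key distribution, for fixed-width keys.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// LSD radix sort on 32-bit unsigned keys, one byte per pass (4 passes).
// Each pass is O(N) regardless of how the keys are distributed.
void radix_sort(std::vector<std::uint32_t>& a) {
    std::vector<std::uint32_t> buf(a.size());
    for (int shift = 0; shift < 32; shift += 8) {
        std::array<std::size_t, 257> count{};                   // bucket sizes
        for (std::uint32_t x : a) ++count[((x >> shift) & 0xFF) + 1];
        for (int b = 0; b < 256; ++b) count[b + 1] += count[b]; // prefix sums -> offsets
        for (std::uint32_t x : a) buf[count[(x >> shift) & 0xFF]++] = x;
        a.swap(buf);                                            // stable pass done
    }
}
```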

2.
An algorithm, which asymptotically halves the number of comparisons made by common Heapsort, is presented and analysed in the worst case. The number of comparisons is shown to be (n+1)(log(n+1) + log log(n+1) + 1.82) + O(log n) in the worst case to sort n elements, without using any extra space. Quicksort, which is usually referred to as the fastest in-place sorting method, uses 1.38n log n − O(n) comparisons in the average case (see Gonnet (1984)).
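The abstract states the bound but not the mechanism. The standard way to halve Heapsort's comparisons is the bottom-up sift-down: descend to a leaf along the path of larger children at one comparison per level (classical sift-down spends two), then climb back up to the sifted element's final position. The sketch below illustrates that general technique; it is not claimed to be the paper's exact algorithm.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Bottom-up sift-down: one comparison per level on the way down, plus a
// usually short climb back up (the root element came from the heap's bottom).
static void siftdown_bottom_up(std::vector<int>& a, std::size_t root, std::size_t n) {
    // Phase 1: descend to a leaf, always following the larger child.
    std::size_t i = root;
    while (2 * i + 1 < n) {
        std::size_t c = 2 * i + 1;
        if (c + 1 < n && a[c + 1] > a[c]) ++c;
        i = c;
    }
    // Phase 2: climb back up to the first position whose value is not
    // smaller than the element being sifted (still sitting at a[root]).
    while (a[i] < a[root]) i = (i - 1) / 2;
    // Phase 3: place a[root] at position i, shifting the elements on the
    // path between root and i one level up.
    int v = a[root];
    while (i > root) { std::swap(v, a[i]); i = (i - 1) / 2; }
    a[root] = v;
}

void bottom_up_heapsort(std::vector<int>& a) {
    std::size_t n = a.size();
    if (n < 2) return;
    for (std::size_t i = n / 2; i-- > 0; )          // build the max-heap
        siftdown_bottom_up(a, i, n);
    for (std::size_t end = n - 1; end > 0; --end) { // repeatedly extract the max
        std::swap(a[0], a[end]);
        siftdown_bottom_up(a, 0, end);
    }
}
```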

3.
The last decade has seen a surge of research activity on multiobjective optimization using evolutionary computation, and a number of well-performing algorithms have been published. The majority of these algorithms use fitness assignment based on Pareto domination: nondominated sorting, dominance counting, or identification of the nondominated solutions. The success of these algorithms indicates that this type of fitness is suitable for multiobjective problems, but so far the use of Pareto-based fitness has led to program run times in O(GMN²), where G is the number of generations, M is the number of objectives, and N is the population size. The N² factor should be reduced if possible, since it leads to long processing times for large population sizes. This paper presents a new and efficient algorithm for nondominated sorting, which can speed up the processing time of some multiobjective evolutionary algorithms (MOEAs) substantially. The new algorithm is incorporated into the nondominated sorting genetic algorithm II (NSGA-II) and reduces the overall run-time complexity of this algorithm to O(GN log^(M-1) N), much faster than the O(GMN²) complexity published by Deb et al. (2002). Experiments demonstrate that the improved version of the algorithm is indeed much faster than the previous one. The paper also points out that multiobjective EAs using fitness based on dominance counting and identification of nondominated solutions can be improved significantly in terms of running time by using efficient algorithms known from computer science instead of inefficient O(MN²) algorithms.
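For reference, the primitive behind all of these Pareto-based fitness assignments is an O(M) dominance test per pair of solutions; the naive O(MN²) algorithms apply it to all N² pairs. A minimal sketch, assuming all objectives are to be minimized:

```cpp
#include <cstddef>
#include <vector>

// p dominates q iff p is no worse in every objective and strictly
// better in at least one (minimization convention).
bool dominates(const std::vector<double>& p, const std::vector<double>& q) {
    bool strictly_better = false;
    for (std::size_t m = 0; m < p.size(); ++m) {
        if (p[m] > q[m]) return false;       // worse in objective m
        if (p[m] < q[m]) strictly_better = true;
    }
    return strictly_better;
}
```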

4.
This paper deals with the Jordan sorting problem: given the n intersection points of a Jordan curve with the x-axis in the order in which they occur along the curve, sort these points into the order in which they occur along the x-axis. The worst-case time complexity of this problem is Θ(n). Unfortunately, the known O(n)-time algorithms are complicated, which makes them difficult to implement and slow for inputs of the sizes that are of practical interest. In this paper, two algorithms for Jordan sorting are presented. The first algorithm is extremely simple. Although its worst-case time complexity is O(n log n), the worst case is reached only for special inputs, and better performance can be expected for most inputs. Furthermore, an improved O(n log log n) worst-case time algorithm is presented. For input sequences of sizes from 4 to 10⁵, the algorithms are compared with Quicksort, with an algorithm based on splay trees, and with the O(n)-time algorithm proposed by Fung et al. The results show that our algorithms are faster. The relevant implementation details are given.

5.
David R. Musser, Software, 1997, 27(8): 983-993
Quicksort is the preferred in-place sorting algorithm in many contexts, since its average computing time on uniformly distributed inputs is Θ(N log N), and it is in fact faster than most other sorting algorithms on most inputs. Its drawback is that its worst-case time bound is Θ(N²). Previous attempts to protect against the worst case by improving the way quicksort chooses pivot elements for partitioning have increased the average computing time too much – one might as well use heapsort, which has a Θ(N log N) worst-case time bound but is on average 2–5 times slower than quicksort. A similar dilemma exists with selection algorithms (for finding the i-th largest element) based on partitioning. This paper describes a simple solution to this dilemma: limit the depth of partitioning, and for subproblems that exceed the limit switch to another algorithm with a better worst-case bound. Using heapsort as the 'stopper' yields a sorting algorithm that is just as fast as quicksort in the average case, but also has a Θ(N log N) worst-case time bound. For selection, a hybrid of Hoare's FIND algorithm, which is linear on average but quadratic in the worst case, and the Blum–Floyd–Pratt–Rivest–Tarjan algorithm is as fast as Hoare's algorithm in practice, yet has a linear worst-case time bound. Also discussed are issues of implementing the new algorithms as generic algorithms, and accurately measuring their performance in the framework of the C++ Standard Template Library. ©1997 by John Wiley & Sons, Ltd.
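A minimal sketch of the scheme described above: median-of-3 quicksort with a depth budget of about 2 log₂ N, heapsort as the stopper, and a final insertion-sort pass over the nearly sorted array. The small-range cutoff of 16 is an illustrative choice, not the paper's.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

static void introsort_loop(std::vector<int>& a, int lo, int hi, int depth) {
    while (hi - lo > 16) {                   // leave small ranges to insertion sort
        if (depth-- == 0) {                  // depth budget exhausted: use the stopper
            std::make_heap(a.begin() + lo, a.begin() + hi);
            std::sort_heap(a.begin() + lo, a.begin() + hi);
            return;
        }
        int mid = lo + (hi - lo) / 2;        // median-of-3 pivot selection
        if (a[mid] < a[lo]) std::swap(a[mid], a[lo]);
        if (a[hi - 1] < a[lo]) std::swap(a[hi - 1], a[lo]);
        if (a[hi - 1] < a[mid]) std::swap(a[hi - 1], a[mid]);
        int pivot = a[mid];
        int i = lo, j = hi - 1;              // Hoare-style partition
        while (true) {
            while (a[i] < pivot) ++i;
            while (pivot < a[j]) --j;
            if (i >= j) break;
            std::swap(a[i], a[j]);
            ++i; --j;
        }
        introsort_loop(a, j + 1, hi, depth); // recurse on one side,
        hi = j + 1;                          // iterate on the other
    }
}

void introsort(std::vector<int>& a) {
    int n = static_cast<int>(a.size());
    if (n < 2) return;
    introsort_loop(a, 0, n, 2 * static_cast<int>(std::log2(n)));
    for (int i = 1; i < n; ++i) {            // final insertion-sort pass
        int v = a[i], j = i - 1;
        while (j >= 0 && a[j] > v) { a[j + 1] = a[j]; --j; }
        a[j + 1] = v;
    }
}
```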

6.
R. Geoff Dromey, Software, 1984, 14(6): 509-518
The widely known Quicksort algorithm does not attempt to actively take advantage of partial order in the data being sorted. A simple change can be made to the Quicksort strategy to give a best-case performance of O(n) for ordered data, with a smooth transition to O(n log n) for random data. This algorithm (Transort) matches the performance of Sedgewick's claimed best implementation of Quicksort for random data.
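The abstract does not spell out the mechanism behind Transort, so the sketch below is only one simple way to obtain the same asymptotic behavior, offered as an illustration: test each subrange for existing order before partitioning it. Already-sorted input is then recognized in one O(n) scan, while random input keeps the usual O(n log n) behavior (the extra is_sorted scans cost only a constant factor).

```cpp
#include <algorithm>
#include <vector>

// Quicksort over [lo, hi) that skips any subrange already in order.
void order_aware_quicksort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo < 2 || std::is_sorted(a.begin() + lo, a.begin() + hi))
        return;                              // best case: one linear scan
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo, j = hi - 1;
    while (i <= j) {                         // simple two-way partition
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) std::swap(a[i++], a[j--]);
    }
    order_aware_quicksort(a, lo, j + 1);
    order_aware_quicksort(a, i, hi);
}
```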

7.
This paper is concerned with an external sorting algorithm that uses no additional disk space. The proposed algorithm is a hybrid that uses Quicksort and a special merging process in two distinct phases. The algorithm excels at sorting a huge file, many times larger than the available memory of the computer, and creates no extra backup file for manipulating the records; it thereby saves the large amount of disk space that would otherwise be needed to hold a copy of the file. Switching to the special merging process after the first, Quicksort-based phase reduces the time complexity and makes the algorithm faster.

8.
Adaptivity in sorting algorithms is sometimes gained at the expense of practicality. We give experimental results showing that Splaysort — sorting by repeated insertion into a splay tree — is a surprisingly efficient method for in-memory sorting. Splaysort appears to be adaptive with respect to all accepted measures of presortedness, and it outperforms Quicksort for sequences with modest amounts of existing order. Although Splaysort has a linear space overhead, there are many applications for which this is reasonable. In these situations Splaysort is an attractive alternative to traditional comparison-based sorting algorithms such as Heapsort, Mergesort, and Quicksort.

9.
External sorting: run formation revisited   Total citations: 1 (self: 0, by others: 1)
External mergesort begins with a run formation phase creating the initial sorted runs. Run formation can be done by a load-sort-store algorithm or by replacement selection. A load-sort-store algorithm repeatedly fills available memory with input records, sorts them, and writes the result to a run file. Replacement selection produces longer runs than load-sort-store algorithms and completely overlaps sorting and I/O, but it has poor locality of reference, resulting in frequent cache misses, and the classical algorithm works only for fixed-length records. This paper introduces batched replacement selection: a cache-conscious version of replacement selection that also works for variable-length records. The new algorithm resembles AlphaSort in the sense that it creates small in-memory runs and merges them to form the output runs. Its performance is experimentally compared with three other run formation algorithms: classical replacement selection, Quicksort, and AlphaSort. The experiments show that batched replacement selection is considerably faster than classical replacement selection. For small records (average 100 bytes), CPU time was reduced by about 50 percent and elapsed time by 47-63 percent. It was also consistently faster than Quicksort, but it did not always outperform AlphaSort. Replacement selection produces fewer runs than Quicksort and AlphaSort. The experiments confirmed that this reduces the merge time, whereas the effect on the overall sort time depends on the number of disks available.
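For reference, a minimal sketch of classical replacement selection, the baseline the paper improves on: keep mem records in a min-heap, each tagged with a run number; a record smaller than the key just written cannot join the current run and is tagged for the next one. The in-memory vectors stand in for run files here, an illustrative simplification.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

std::vector<std::vector<int>> replacement_selection(const std::vector<int>& input,
                                                    std::size_t mem) {
    using Tagged = std::pair<int, int>;  // (run id, key): ordered by run, then key
    std::priority_queue<Tagged, std::vector<Tagged>, std::greater<Tagged>> heap;
    std::vector<std::vector<int>> runs;
    std::size_t next = 0;
    while (next < input.size() && heap.size() < mem)   // fill memory
        heap.push({0, input[next++]});
    while (!heap.empty()) {
        auto [run, key] = heap.top();
        heap.pop();
        if (static_cast<std::size_t>(run) == runs.size()) runs.emplace_back();
        runs[run].push_back(key);                      // "write" to the current run
        if (next < input.size()) {
            int x = input[next++];                     // replace with next input record
            heap.push({x >= key ? run : run + 1, x});  // too small: defer to next run
        }
    }
    return runs;  // on random input the expected run length is about 2*mem
}
```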

10.
With the popularity of parallel database machines based on the shared-nothing architecture, it has become important to find external sorting algorithms that lead to a load-balanced computation, i.e., balanced execution, communication, and output. If each processor is equally loaded during the course of the sorting algorithm, parallelism is fully exploited; similarly, balanced communication will not congest the network. Since sorting can be used to support a number of other relational operations (joins, duplicate elimination, index building, etc.), data skew produced by sorting can further lead to execution skew at later stages of these operations. In this paper we present a load-balanced parallel sorting algorithm for shared-nothing architectures. It is a multiple-input multiple-output algorithm with four stages, based on a generalization of Batcher's odd-even merge. At each stage the n keys are evenly distributed among the p processors (i.e., there is no final sequential merge phase), and the distribution of keys between stages guards against network congestion. No assumption is made about the key distribution, and the algorithm performs equally well in the presence of duplicate keys. Hence our approach always guarantees its performance as long as n is greater than p³, which is the case of interest for sorting large relations. In addition, processors can be added incrementally. Recommended by: Patrick Valduriez
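For reference, Batcher's odd-even merge, which the algorithm generalizes, can be written as a compact sequential compare-exchange procedure. This sketch follows Sedgewick's formulation and assumes n is a power of two; (lo, n, r) denotes the subsequence a[lo], a[lo+r], a[lo+2r], ...

```cpp
#include <utility>
#include <vector>

static void cmp_exch(std::vector<int>& a, int i, int j) {
    if (a[j] < a[i]) std::swap(a[i], a[j]);
}

// Merges the two sorted halves of a[lo .. lo+n-1] (n a power of two).
void odd_even_merge(std::vector<int>& a, int lo, int n, int r) {
    int m = r * 2;
    if (m < n) {
        odd_even_merge(a, lo, n, m);       // merge the even subsequence
        odd_even_merge(a, lo + r, n, m);   // merge the odd subsequence
        for (int i = lo + r; i + r < lo + n; i += m)
            cmp_exch(a, i, i + r);         // fix up adjacent odd/even pairs
    } else {
        cmp_exch(a, lo, lo + r);
    }
}
// usage: sort both halves, then call odd_even_merge(a, 0, (int)a.size(), 1)
```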

11.
In this paper we propose a new efficient sorting algorithm based on the insertion sort concept, called Bidirectional Conditional Insertion Sort (BCIS). It is an in-place sorting algorithm with remarkably efficient average-case time complexity compared with classical insertion sort (IS). Compared with Quicksort, BCIS shows faster average-case times for relatively small arrays of up to 1500 elements. Furthermore, BCIS was observed to be faster than Quicksort when the rate of duplicated elements is high, even for large arrays.

12.
To address the problem that existing sorting algorithms lose much of their performance when managing data whose volume grows dynamically, a 16-bit Trie tree sorting algorithm is proposed. Sorting is completed with the help of linked-node pointers stored on neighboring nodes, so the structure can be sorted while it is being built, and the introduction of dynamic arrays improves the algorithm's space efficiency. Simulation results show that the traditional Trie tree supports dynamic data updates, but completing the sort by traversing the Trie tree is time-consuming, and quicksort…

13.
A Quicksort Algorithm for Singly Linked Lists   Total citations: 2 (self: 0, by others: 2)
Singly linked lists are widely used in dynamic storage structures, but current sorting algorithms for singly linked lists are generally inefficient, and Quicksort, the algorithm with the highest average efficiency, does not apply directly to singly linked lists. Based on the divide-and-conquer strategy and using recursion, this paper proposes a Quicksort algorithm for singly linked lists that works by relinking the list nodes. Its average time complexity is O(n log₂ n), it needs no auxiliary space beyond the recursion stack, and its average recursion-stack space complexity is O(log₂ n). Algorithm analysis and experimental tests show that its efficiency is considerably higher than that of other sorting algorithms for singly linked lists, and also somewhat higher than that of the traditional Quicksort on linear arrays. These results address the currently low efficiency of sorting singly linked lists.
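A minimal sketch of the relinking approach described above: partition the nodes into <, =, > sublists around the head's key, sort the outer sublists recursively, and splice the three lists together. Returning the tail alongside the head keeps each splice O(1). This illustrates the technique; it is not the paper's exact code.

```cpp
struct Node { int key; Node* next; };

// Sorts the list at `head`; returns the new head and writes the new tail
// through `outTail` so the caller can concatenate in O(1).
static Node* qsort_list(Node* head, Node** outTail) {
    if (!head || !head->next) { *outTail = head; return head; }
    int pivot = head->key;
    Node *lt = nullptr, *eq = nullptr, *gt = nullptr, *eqTail = nullptr;
    for (Node* p = head; p; ) {                  // partition by relinking
        Node* nxt = p->next;
        if (p->key < pivot)      { p->next = lt; lt = p; }
        else if (p->key > pivot) { p->next = gt; gt = p; }
        else { p->next = eq; if (!eq) eqTail = p; eq = p; }
        p = nxt;
    }
    Node *ltTail = nullptr, *gtTail = nullptr;
    lt = qsort_list(lt, &ltTail);
    gt = qsort_list(gt, &gtTail);
    eqTail->next = gt;                           // eq holds at least the old head
    *outTail = gt ? gtTail : eqTail;
    if (!lt) return eq;
    ltTail->next = eq;                           // splice: lt ++ eq ++ gt
    return lt;
}

Node* quicksort_list(Node* head) {
    Node* tail = nullptr;
    return qsort_list(head, &tail);
}
```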

14.
The self-organizing map (SOM) introduced by Kohonen implements two important operations: vector quantization (VQ) and a topology-preserving mapping. In this paper, an online self-organizing topological tree (SOTT) with faster learning is proposed. A new learning rule delivers efficiency and topology preservation superior to other SOM structures. The computational complexity of the proposed SOTT is O(log N) rather than O(N) as for the basic SOM. The experimental results demonstrate that the reconstruction performance of SOTT is comparable to the full-search SOM, while its computation time is much shorter than that of the full-search SOM and other vector quantizers. In addition, SOTT delivers a hierarchical mapping of codevectors together with a progressive transmission and decoding property, which are rarely supported by other vector quantizers at the same time. To circumvent the shortcomings in clustering performance of classical partition clustering algorithms, a hybrid clustering algorithm is devised that fully exploits the online learning and multiresolution characteristics of SOTT. A new linkage metric is proposed that can be updated online to accelerate the time-consuming agglomerative hierarchical clustering stage. Besides the enhanced clustering performance, the online learning capability makes the memory requirement of the proposed SOTT hybrid clustering algorithm independent of the size of the data set, making it attractive for large databases.

15.
A Multithreaded Quicksort Algorithm Using Merging   Total citations: 1 (self: 0, by others: 1)
Based on the Java platform, this paper proposes an improvement to the classical Quicksort: a multithreaded optimization of Quicksort that uses the idea of merging. Quicksort under single-threaded and multithreaded execution is then tested and analyzed comparatively. The results show that, with the multithreaded optimization, sorting 50 million random integers on a dual-core machine is 1.6 times as fast as the single-threaded version, demonstrating the effectiveness of the optimization. The method is intuitive and easy to understand, and is well suited as a teaching case for multicore technology.
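The paper works on the Java platform; the following C++ sketch shows the same idea under stated assumptions (two hardware threads, std::sort standing in for the quicksort of each half): sort the two halves concurrently, then merge them into one sorted array.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Two-way multithreaded sort-then-merge: each half is sorted on its own
// thread (std::sort is typically introsort, i.e., quicksort-based), and a
// single in-place merge produces the final order.
void parallel_sort_2way(std::vector<int>& a) {
    auto mid = a.begin() + a.size() / 2;
    std::thread left([&a, mid] { std::sort(a.begin(), mid); });
    std::sort(mid, a.end());                 // other half on the calling thread
    left.join();
    std::inplace_merge(a.begin(), mid, a.end());
}
```

Generalizing to k threads means splitting into k blocks and merging the sorted blocks pairwise, which matches the merging idea described above.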

16.
We present four polylog-time parallel algorithms for matching parentheses on an exclusive-read exclusive-write (EREW) parallel random-access machine (PRAM) model. These algorithms provide new insights into the parentheses-matching problem. The first algorithm has a time complexity of O(log² n) employing O(n/log n) processors for an input string containing n parentheses. Although this algorithm is not cost-optimal, it is extremely simple to implement. The remaining three algorithms, which are based on a different approach, achieve O(log n) time complexity in each case, and represent successive improvements. The second algorithm requires O(n) processors and working space, and it is comparable to the first algorithm in its ease of implementation. The third algorithm uses O(n/log n) processors and O(n log n) space; thus it is cost-optimal, but uses extra space compared to the standard stack-based sequential algorithm. The last algorithm reduces the space complexity to O(n) while maintaining the same processor and time complexities. Compared to other existing time-optimal algorithms for the parentheses-matching problem, which either employ extensive pipelining or use linked lists and comparable data structures and employ sorting or a linked-list ranking algorithm as subroutines, the last two algorithms have two distinct advantages: first, they employ arrays as their basic data structures, and second, they do not use any pipelining, sorting, or linked-list ranking algorithms.
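For context, the standard stack-based sequential algorithm referred to above runs in linear time and space; a minimal sketch:

```cpp
#include <stack>
#include <string>
#include <vector>

// match[i] = index of the parenthesis paired with position i (-1 if unmatched).
std::vector<int> match_parens(const std::string& s) {
    std::vector<int> match(s.size(), -1);
    std::stack<int> open;                        // indices of unmatched '('
    for (std::size_t i = 0; i < s.size(); ++i) {
        if (s[i] == '(') {
            open.push(static_cast<int>(i));
        } else if (s[i] == ')' && !open.empty()) {
            match[i] = open.top();
            match[open.top()] = static_cast<int>(i);
            open.pop();
        }
    }
    return match;
}
```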

17.
Many routing problems in parallel processing, such as concentration and permutation problems, can be cast as sorting problems. In this paper, we consider the problem of sorting on a new model, called an adaptive sorting network. We show that any sequence of n bits can be sorted on this model in O(lg² n) bit-level delay using O(n) constant-fanin gates. This improves the cost complexity of K.E. Batcher's binary sorters (1968) by a factor of O(lg² n) while matching their sorting time. The only other network that can sort binary sequences at O(n) cost is the network version of the columnsort algorithm, but this requires excessive pipelining. In addition, using binary sorters, we construct permutation networks with O(n lg n) bit-level cost and O(lg³ n) bit-level delay. These results provide the asymptotically least-cost practical concentrators and permutation networks to date. We note, of course, that the well-known AKS sorting network has O(lg n) sorting time and O(n lg n) cost, but the constants hidden in these complexities are so large that our complexities outperform those of the AKS sorting network until n becomes extremely large.

18.
Sorting algorithms are among the most widely used problem-solving methods in computer programming, and their study has important theoretical significance and broad practical value. This paper discusses several commonly used internal sorting algorithms and compares them with respect to time complexity, space complexity, and stability. It also raises points of debate concerning two versions of the bubble sort algorithm that appear in the literature, as a reference for choosing a suitable sorting algorithm under different conditions. C++ source code implementing each of the algorithms is provided.
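The abstract does not say which two bubble sort versions are under discussion; the pair most often contrasted in textbooks is sketched below for reference: the fixed-pass form, which always performs Θ(n²) comparisons, and the early-exit form, which detects a swap-free pass and therefore runs in O(n) on already-sorted input.

```cpp
#include <utility>
#include <vector>

// Variant A: fixed number of passes; Theta(n^2) comparisons in all cases.
void bubble_fixed(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i)
        for (std::size_t j = 0; j + i < a.size(); ++j)
            if (a[j] > a[j + 1]) std::swap(a[j], a[j + 1]);
}

// Variant B: stop as soon as a whole pass makes no swap; O(n) best case.
// Both variants are stable and in-place; only their best cases differ.
void bubble_flagged(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        bool swapped = false;
        for (std::size_t j = 0; j + i < a.size(); ++j)
            if (a[j] > a[j + 1]) { std::swap(a[j], a[j + 1]); swapped = true; }
        if (!swapped) return;
    }
}
```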

19.
We consider the problem of merging two sorted sequences on constant-degree networks performing compare-exchange operations only. The classical solution to this problem is given by the networks based on Batcher's Odd-Even Merge and Bitonic Merge, running in log(2n) time. Due to the obvious log n lower bound for the runtime, this is time-optimal. We present a new family of merging networks that work in a completely different way than the previously known algorithms. They have the novel property of being periodic: for some (small) constant k, each processing unit of the network performs the same operations at steps t and t+k (as long as t+k does not exceed the runtime). The only operations executed are compare-exchange operations, just as in Batcher's networks. The architecture of the networks is very simple and easy to lay out. We show that even for period 3 there is a network in our family that merges two n-element sorted sequences in time O(log n). Since no network of period 2 can merge such sequences that fast, 3 is the smallest period for which a fast runtime can be achieved. In order to improve the constants in front of log n, we increase the period and tune the construction using additional techniques: we achieve runtime 9 log_3 n ≈ 5.7 log n for a network of period 4, and 2.25 (k+3)/(k-1+log 3) · log n, which tends to 2.25 log n, for a network of period k+3. Due to the periodic design, our networks have small area complexity. For instance, if each processing unit requires O(1) area and a comparator uses a single wire of width O(1) connecting the processing elements, then our networks require only O(n) area, which compares well with Batcher's networks. Received February 1997; in revised form September 1997; in final form February 1998.

20.
Sorting huge datasets has become essential in many computer applications, such as search engines, databases, and web-based applications, in order to improve search performance. Moreover, given the prevalence of commercial Simultaneous Multithreading (SMT) architectures, parallel programming with multithreading has become necessary for efficiently using all available hardware resources within one application. In this paper, Quicksort, one of the most efficient sorting algorithms, is implemented as a parallel multithreaded algorithm on an SMT architecture, where virtual parallelization is achieved using the POSIX threads (Pthreads) library. The proposed algorithm is evaluated and compared with its sequential counterpart. The analytical and experimental results reveal that multithreading is a viable technique for implementing a parallel Quicksort efficiently on SMT architectures: it is shown both analytically and experimentally that the parallel multithreaded Quicksort outperforms the sequential Quicksort in terms of various performance metrics, including time complexity and speedup.
