Similar Documents
20 similar documents found.
1.
The Euclidean distance transform (EDT) converts a binary image of black and white pixels into a representation in which each pixel holds the Euclidean distance to the nearest black pixel. The EDT has many applications in computer vision and image processing. In this paper, we present a constant-time algorithm for computing the EDT of an N×N image on a reconfigurable mesh. Our algorithm has two variants. (i) If the image is initially given in an N×N mesh, one pixel per processor, our algorithm requires an N×N×N mesh for computing the EDT. (ii) If the image is given in an N×N² mesh, with each row of the image in the first row of a separate N×N mesh, we can compute the EDT in the same N×N² mesh. The AT² bounds for these two variants are O(N⁴) and O(N³), respectively. The best previously known algorithm (Y. Pan and K. Li, Inform. Sci. 120 (1999), 209–221) for this problem assumes input similar to the second variant of our algorithm and runs in constant time on an N²×N² reconfigurable mesh with an AT² bound of O(N⁴). Hence both variants of our algorithm improve upon the processor complexity of Pan and Li (1999) by a factor of N, and the second variant improves upon the AT² complexity by a factor of N.
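The reconfigurable-mesh algorithm itself is hardware-specific, but the transform it computes can be checked against a direct sequential baseline. Below is a minimal brute-force EDT sketch (the function name and conventions are ours, not from the paper); it takes O(N⁴) time on an N×N image, exactly the cost that mesh- and scan-based algorithms are designed to beat.

```python
import math

def euclidean_dt(image):
    """Brute-force Euclidean distance transform of a binary image.

    image: 2D list where 1 marks a black (feature) pixel, 0 a white pixel;
    at least one black pixel is assumed.  Returns a 2D list giving each
    pixel's Euclidean distance to the nearest black pixel."""
    rows, cols = len(image), len(image[0])
    black = [(i, j) for i in range(rows) for j in range(cols) if image[i][j] == 1]
    dt = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # examine every black pixel: O(N^2) work per pixel
            dt[i][j] = min(math.hypot(i - bi, j - bj) for bi, bj in black)
    return dt
```

A useful property for testing: feature pixels get distance 0 and diagonal neighbours get √2.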

2.
An efficient algorithm for determining the linear complexity and the minimal polynomial of a binary sequence with period 2ⁿpᵐ is proposed and proved, where 2 is a primitive root modulo p². The new algorithm generalizes both the algorithm for computing the linear complexity of a binary sequence with period 2ⁿ and the algorithm for computing the linear complexity of a binary sequence with period pⁿ, where 2 is a primitive root modulo p².
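The algorithm above exploits the special period 2ⁿpᵐ; for an arbitrary binary sequence the standard tool for the same quantity is the Berlekamp-Massey algorithm. A minimal GF(2) version is sketched below (our own reference implementation, not the specialized algorithm of the paper).

```python
def linear_complexity(s):
    """Berlekamp-Massey over GF(2): returns the linear complexity of the
    binary sequence s (a list of 0/1), i.e. the length of the shortest
    LFSR that generates it."""
    n = len(s)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1         # current LFSR length, last length-change index
    for i in range(n):
        # discrepancy d = s[i] + sum_{j=1..L} c[j]*s[i-j]  (mod 2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 1:
            t = c[:]
            p = i - m
            for j in range(n - p):
                c[j + p] ^= b[j]      # c(x) += x^p * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

For example, the alternating sequence 1,0,1,0,... has linear complexity 2, while 0,...,0,1 of length n has linear complexity n.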

3.
Legendre orthogonal moments have been widely used in the field of image analysis. Because their direct computation is very expensive, recent efforts have been devoted to reducing the computational complexity. Nevertheless, the existing algorithms are mainly focused on binary images. We propose here a new fast method for computing the Legendre moments that is suitable not only for binary images but also for grey-level images. We first establish a recurrence formula for one-dimensional (1D) Legendre moments using the recursive property of Legendre polynomials. As a result, the 1D Legendre moment of order p, L_p = L_p(0), can be expressed as a linear combination of L_{p-1}(1) and L_{p-2}(0). Based on this relationship, the 1D Legendre moments L_p(0) can thus be obtained from the arrays L_1(a) and L_0(a), where a is an integer less than p. To further decrease the computational complexity, an algorithm requiring no multiplications is used to compute these quantities. The method is then extended to the calculation of the two-dimensional Legendre moments L_pq. We show that the proposed method is more efficient than the direct method.
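As a rough illustration of the recursive property being exploited, the three-term recurrence (p+1)P_{p+1}(x) = (2p+1)x P_p(x) - p P_{p-1}(x) lets each polynomial be evaluated from the previous two. The sketch below (our own naming and sampling convention) computes 1D Legendre moments of a sampled signal by plain direct summation; it uses the recurrence but is not the paper's multiplication-free scheme.

```python
def legendre_moments(signal, max_order):
    """1D Legendre moments of a signal sampled uniformly on [-1, 1],
    L_p = (2p+1)/2 * integral of P_p(x) f(x) dx, with P_p evaluated by the
    three-term recurrence p*P_p = (2p-1)x*P_{p-1} - (p-1)*P_{p-2}."""
    n = len(signal)
    xs = [-1.0 + (2.0 * i + 1.0) / n for i in range(n)]   # pixel centres
    dx = 2.0 / n
    p0 = [1.0] * n     # P_0(x) at every sample
    p1 = xs[:]         # P_1(x) at every sample
    moments = []
    for p in range(max_order + 1):
        if p == 0:
            poly = p0
        elif p == 1:
            poly = p1
        else:
            poly = [((2 * p - 1) * x * a - (p - 1) * b) / p
                    for x, a, b in zip(xs, p1, p0)]
            p0, p1 = p1, poly
        norm = (2 * p + 1) / 2.0
        moments.append(norm * dx * sum(v * f for v, f in zip(poly, signal)))
    return moments
```

A quick sanity check: for a constant signal, L_0 is 1 and L_1 vanishes by symmetry.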

4.
This paper considers the histogramming problem on the hypercube. An N-PE hypercube is used to process an N^(1/2) × N^(1/2) digitized image in which each pixel has a grey-level value between 0 and M − 1. In general, M, the range of grey-level values, is much smaller than N, the number of pixels being processed. Our algorithm generates the histogram of the image in O(log M · log N) time using radix sort and efficient data-movement operations. The technique can also be implemented on butterfly, shuffle-exchange, and fat pyramid organizations.
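The hypercube algorithm combines per-processor counts into a global histogram. That merging step can be mimicked sequentially, as in this sketch (the chunking and names are ours, and the O(log M · log N) routing is of course not modeled):

```python
def histogram(pixels, m):
    """Histogram of gray levels 0..m-1, computed divide-and-conquer style:
    partial histograms over chunks of the pixel list are built first, then
    merged, mirroring how per-processor counts would be combined."""
    chunk = max(1, len(pixels) // 4)          # pretend we have 4 "processors"
    parts = []
    for start in range(0, len(pixels), chunk):
        h = [0] * m
        for v in pixels[start:start + chunk]:
            h[v] += 1
        parts.append(h)
    total = [0] * m
    for h in parts:                           # merge partial histograms
        total = [a + b for a, b in zip(total, h)]
    return total
```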

5.
Purpose: Most current digital image encryption algorithms use a single chaotic system, and their scrambling methods are largely limited to a few categories: row/column pixel swapping, the Arnold transform, the Baker transform, and substitution tables built by sorting chaotic sequences. We propose a new image encryption algorithm based on a dynamic self-feedback chaotic system combined with neural-network scrambling. Method: The algorithm constructs a dynamic self-feedback chaotic system from the 1D Logistic map, the Chebyshev map, and a custom m(x) operation. The randomness of the system is verified by frequency tests, sequence distribution plots, balance analysis, correlation analysis, and Lyapunov exponents; the sequences are then uniformized, and the uniformization is validated by a uniformity proof, distribution plots, and the sequence mean and variance. Input values and parameters randomly drawn from the chaotic sequences are fed into a neural network; each group of network outputs builds a scrambling matrix for an initial global permutation, followed by a second, bit-level permutation. Two plaintext-dependent key sequences then perform pixel-value substitution and diffusion, so that the plaintext passes through intermediate ciphertexts on the way to the final ciphertext, strengthening the security of the algorithm. Results: Computer simulations and performance analysis show good cryptographic properties. Security is verified in terms of key space, key sensitivity, statistical analysis, information entropy, differential analysis, and adjacent-pixel correlation: the key space reaches 2²¹⁶, the information entropy is 7.9983, the horizontal, vertical, and diagonal adjacent-pixel correlation coefficients are -0.000381, 0.000607, and -0.000309 respectively, NPCR values lie in (0.9958, 0.9966), and UACI values lie in (0.333, 0.338). Conclusion: The algorithm achieves good encryption results and, in the reported comparisons, outperforms hyperchaotic image encryption and dual pixel-position/bit-level scrambling schemes. It can be widely applied to grey-level image encryption, extended to colour images, and used to protect image privacy during network transmission and storage.
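For readers unfamiliar with chaos-based ciphers, the toy sketch below shows the basic diffusion idea of a keystream XOR driven by the 1D Logistic map x → μx(1−x). It is an illustration only: all names and parameter choices are ours, and it deliberately omits the Chebyshev map, the neural-network scrambling, and the self-feedback construction described above.

```python
def logistic_keystream(x0, mu, nbytes, burn_in=200):
    """Derive a byte keystream from the logistic map x -> mu*x*(1-x),
    with (x0, mu) acting as the secret key.  A transient is discarded so
    the orbit settles into the chaotic regime."""
    x = x0
    for _ in range(burn_in):
        x = mu * x * (1.0 - x)
    out = []
    for _ in range(nbytes):
        x = mu * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # quantize orbit value to a byte
    return out

def xor_cipher(data, key):
    """XOR each pixel byte with the keystream; applying it twice decrypts."""
    return [d ^ k for d, k in zip(data, key)]
```

Encryption and decryption are the same operation, so a round trip must restore the plaintext.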

6.
Recently, S. Müller developed a generalized Atkin algorithm for computing square roots, which requires two exponentiations in finite fields GF(q). In this paper, we present a simple improvement: the improved algorithm requires only one exponentiation for half of the squares in GF(q). Furthermore, in finite fields GF(pᵐ) with m odd, we reduce the complexity of the algorithm from O(m³log³p) to O(m²log²p(log m + log p)) using the Frobenius map and a normal basis representation.

7.
The distance transform (DT) is an image-computation tool that extracts information about the shape and position of the foreground pixels relative to each other. It converts a binary image into a grey-level image in which each pixel holds the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends strongly on the distance metric: the more exact the distance transform, the longer the execution time. Nowadays, thousands of images must often be processed within a limited time, and a sequential computer can hardly perform such distance transform computations in real time. To provide efficient distance transform computation, it is therefore desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N²)-time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, i.e., a DT under the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N²/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N²/log log N) processors, and in O(log N) time on the hypercube computer using O(N²/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also efficiently derived, achieving the same bounds on each of the three models. The proposed parallel algorithms are composed of a set of prefix operations; in each prefix-operation phase only increment (add-one) and minimum operations are employed, so the algorithms are especially efficient in practical applications.
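A sequential O(N²) chessboard distance transform can also be realized with the classic two-pass chamfer scan shown below. This is a standard technique given here for reference (our own sketch); the paper's sequential algorithm uses diagonal propagation, and the parallel prefix-operation versions are not modeled.

```python
def chessboard_dt(image):
    """Two-pass sequential chessboard distance transform.
    image: 2D 0/1 list, 1 = foreground.  Returns each pixel's chessboard
    distance to the nearest foreground pixel in O(N^2) total work."""
    rows, cols = len(image), len(image[0])
    INF = rows + cols            # larger than any possible distance
    d = [[0 if image[i][j] else INF for j in range(cols)] for i in range(rows)]
    # forward pass: propagate distances from top-left neighbours
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    d[i][j] = min(d[i][j], d[ni][nj] + 1)
    # backward pass: propagate distances from bottom-right neighbours
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            for di, dj in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    d[i][j] = min(d[i][j], d[ni][nj] + 1)
    return d
```

Because all eight neighbour steps have weight 1, the two passes together yield the exact chessboard metric.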

8.
We introduce a new algorithm for computing an approximately optimal binary search tree when the access probabilities (weights) of the items are known. The algorithm is simple to implement and makes two contributions. First, a randomized variant produces a binary search tree whose expected performance improves the previous theoretical guarantees (the performance depends on the value of the input random variable). More precisely, if p is the probability of accessing an item, then in expectation the item is found after searching lg(1/p) + 0.087 + lg²(1 + p_max) nodes, where p_max is the maximal probability among the items; the previous best bound was lg(1/p) + 1, albeit deterministic. For the optimal tree, our upper bound implies a non-constructive performance bound of H + 0.087 + lg²(1 + p_max), where H is the entropy of the item distribution; the previous bound was H + 1. Second, the algorithm has low cost in I/O models such as the cache-oblivious model, while simultaneously attaining the above bound for the produced tree.
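A simple member of this family of approximation schemes picks the root that best balances the probability mass on each side and recurses; its expected search depth is close to the entropy bound. The sketch below is our own illustration of that classic weight-balancing idea, not the paper's randomized algorithm.

```python
def build_bst(keys, probs):
    """Weight-balanced BST: choose as root the key minimizing the
    imbalance between left and right probability mass, then recurse.
    keys must be sorted; probs are the access probabilities."""
    if not keys:
        return None
    total = sum(probs)
    acc, best, best_diff = 0.0, 0, float("inf")
    for i in range(len(keys)):
        diff = abs((total - acc - probs[i]) - acc)   # right mass - left mass
        if diff < best_diff:
            best, best_diff = i, diff
        acc += probs[i]
    return (keys[best],
            build_bst(keys[:best], probs[:best]),
            build_bst(keys[best + 1:], probs[best + 1:]))

def expected_depth(tree, probs_by_key, depth=1):
    """Expected number of nodes inspected for a successful search."""
    if tree is None:
        return 0.0
    key, left, right = tree
    return (probs_by_key[key] * depth
            + expected_depth(left, probs_by_key, depth + 1)
            + expected_depth(right, probs_by_key, depth + 1))
```

With probabilities (0.2, 0.6, 0.2) the heavy middle key becomes the root and the expected depth is 1.4.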

9.
Osteoporotic fractures are a major concern in the health care of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, which impedes its introduction into the imaging machines found in common clinical settings. This paper investigates a fast implementation of the semivariogram algorithm, which has been proven effective in modeling bone strength, and should be of interest to readers in computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov random fields. Semivariogram analysis is computationally intensive and has typically seen applications in the geosciences and remote sensing; recently, applications in medical imaging have been investigated, creating the need for efficient real-time implementations. The semivariance γ(h) is defined as half the expected squared difference of pixel values between any two data locations at a lag distance of h. Because each pair of pixels in the image or sub-image must be examined, the base algorithm complexity for an image window with n pixels is O(n²). Field-programmable gate arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability, even though they operate at relatively modest clock rates of a few hundred megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture is chosen to allow replication of processing units, giving high throughput through concurrent processing of pixel pairs. The current implementation is restricted to isotropic semivariogram computations. The design is implemented in VHDL and benchmarked on a Xilinx XUPV5-LX110T development kit, which uses a Virtex-5 FPGA, with medical image data from DXA scans. Implementation results show that the architectures attain a significant speed advantage over an implementation on a personal computer with an Intel i7 multi-core processor.
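In code, the empirical semivariance at a given lag is essentially a one-liner. This reference sketch over a 1D transect (our own naming) computes the per-lag quantity that the FPGA pipelines parallelize across pixel pairs.

```python
def semivariogram(values, h):
    """Empirical semivariance at lag h for a 1D transect of pixel values:
    gamma(h) = mean((z[i] - z[i+h])^2) / 2 over all pairs at lag h.
    Computing it for every lag reproduces the O(n^2) all-pairs cost."""
    pairs = [(values[i] - values[i + h]) ** 2 for i in range(len(values) - h)]
    return sum(pairs) / (2.0 * len(pairs))
```

On an alternating transect the lag-1 semivariance is maximal while the lag-2 semivariance vanishes, matching the intuition that γ(h) measures dissimilarity at distance h.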

10.
A binary tanglegram is a drawing of a pair of rooted binary trees whose leaf sets are in one-to-one correspondence; matching leaves are connected by inter-tree edges. For applications, for example in phylogenetics, it is essential that both trees are drawn without edge crossings and that the inter-tree edges have as few crossings as possible. It is known that finding a tanglegram with the minimum number of crossings is NP-hard and that the problem is fixed-parameter tractable with respect to that number. We prove that under the Unique Games Conjecture there is no constant-factor approximation for binary trees. We show that the problem is NP-hard even if both trees are complete binary trees. For this case we give an O(n³)-time 2-approximation and a new, simple fixed-parameter algorithm. We show that the maximization version of the dual problem for binary trees can be reduced to a version of MaxCut for which the algorithm of Goemans and Williamson yields a 0.878-approximation.
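The quantity being minimized is easy to state in code: once both leaf orders are fixed, two inter-tree edges cross exactly when their endpoints appear in opposite relative order on the two sides, i.e. when the induced permutation has an inversion. A quadratic counting sketch (our own helper, not part of the paper's algorithms):

```python
def tanglegram_crossings(order1, order2):
    """Number of crossings among the inter-tree edges of a tanglegram,
    given the left and right leaf orders of matched leaves.  Crossings
    are inversions of the induced permutation; O(n^2) for clarity."""
    pos = {leaf: i for i, leaf in enumerate(order2)}
    perm = [pos[leaf] for leaf in order1]
    return sum(1
               for i in range(len(perm))
               for j in range(i + 1, len(perm))
               if perm[i] > perm[j])
```

Reversing one side of n matched leaves yields the maximum n(n-1)/2 crossings; identical orders yield zero.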

11.
Removing noise from a given binary image is a common operation. A generalization of this operation is to erase an arbitrarily specified component by reversing the pixel values in the component. This paper shows that the operation can be done without any data structure such as a stack or queue; more exactly, it uses only constant extra memory (a constant number of words of O(log n) bits for an image of n pixels) and runs in O(m log m) time for a component consisting of m pixels. This is an in-place algorithm, but the image matrix cannot be used as work space since it holds just one bit per pixel. Whenever we flip a pixel value in a target component, the component shape is deformed, which causes some difficulty. The main idea behind our constant-work-space algorithm is to deform the component so that its connectivity is preserved.

12.
We introduce a GPU-based parallel vertex substitution (pVS) algorithm for the p-median problem using NVIDIA's CUDA architecture. pVS is based on the best-profit search algorithm, an implementation of vertex substitution (VS) that is known to produce reliable solutions for p-median problems. In our approach, each candidate solution in the entire search space is allocated to a separate thread, rather than dividing the search space into parallel subsets. This strategy maximizes the usage of the GPU's parallel architecture and results in a significant speedup and robust solution quality. Computationally, pVS reduces the worst-case complexity from the sequential VS's O(p · n²) to O(p · (n − p)) per thread by parallelizing the computational tasks on the GPU. We tested the performance of pVS on two sets of test cases (including 40 network instances from OR-Lib) and compared the results against a CPU-based sequential VS implementation. Our results show that pVS achieved a speed gain ranging from 10 to 57 times over the traditional VS on all test network instances.
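The underlying vertex substitution move is a swap between a facility and a non-facility vertex. The sequential sketch below (our own simplification of best-improvement swap search, not the paper's implementation) shows the structure that pVS parallelizes by giving each candidate swap to its own thread.

```python
def p_median_vs(dist, p):
    """Vertex substitution local search for the p-median problem:
    repeatedly apply the facility/non-facility swap that reduces total
    assignment cost the most, until no swap improves.
    dist: full symmetric distance matrix over the n vertices."""
    n = len(dist)
    facilities = set(range(p))                 # arbitrary initial medians

    def cost(fs):
        # each vertex is served by its nearest open facility
        return sum(min(dist[i][f] for f in fs) for i in range(n))

    improved = True
    while improved:
        improved = False
        best_c, best_swap = cost(facilities), None
        for f in facilities:                   # candidate facility to close
            for v in set(range(n)) - facilities:   # candidate to open
                c = cost(facilities - {f} | {v})
                if c < best_c:
                    best_c, best_swap = c, (f, v)
        if best_swap:
            f, v = best_swap
            facilities = facilities - {f} | {v}
            improved = True
    return facilities, cost(facilities)
```

On two well-separated clusters on a line, the search moves one median into each cluster.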

13.
An L(2,1)-labeling of a graph G is an assignment f from the vertex set V(G) to the set of nonnegative integers such that |f(x) − f(y)| ≥ 2 if x and y are adjacent and |f(x) − f(y)| ≥ 1 if x and y are at distance 2, for all x and y in V(G). A k-L(2,1)-labeling is an L(2,1)-labeling f: V(G) → {0, …, k}, and the L(2,1)-labeling problem asks for the minimum such k, denoted λ(G), over all possible assignments. The problem is known to be NP-hard even for graphs of treewidth 2, and trees form one of the very few classes for which it is polynomially solvable. The running time of the best known algorithm for trees stood at O(Δ^4.5 n) for more than a decade, and an O(min{n^1.75, Δ^1.5 n})-time algorithm appeared recently, where Δ and n are the maximum degree and the number of vertices of the input tree; however, it remained open whether the problem is solvable in linear time. In this paper, we finally settle this question by establishing a linear-time algorithm for L(2,1)-labeling of trees. Furthermore, we show that the algorithm can be extended to a linear-time algorithm for L(p,1)-labeling with constant p.
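For intuition (and for cross-checking an implementation on tiny inputs), λ(G) can be computed by exhaustive search directly from the definition. This brute-force sketch is ours and is exponential; the paper's algorithm, by contrast, is linear-time on trees.

```python
from itertools import product

def lambda_l21(n, edges):
    """Smallest k admitting an L(2,1)-labeling f: V -> {0..k}:
    |f(x)-f(y)| >= 2 for adjacent x, y and >= 1 at distance 2.
    Exhaustive search over all labelings; tiny instances only."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # vertices at distance exactly 2 from u
    dist2 = [{w for v in adj[u] for w in adj[v]} - adj[u] - {u}
             for u in range(n)]
    for k in range(2 * n + 2):          # lambda is small for tiny graphs
        for f in product(range(k + 1), repeat=n):
            ok = (all(abs(f[u] - f[v]) >= 2
                      for u in range(n) for v in adj[u])
                  and all(abs(f[u] - f[w]) >= 1
                          for u in range(n) for w in dist2[u]))
            if ok:
                return k
    return None
```

For example, a single edge needs labels {0, 2}, so λ(K₂) = 2, and the 3-vertex path has λ = 3 (e.g. labels 0, 3, 1).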

14.
This paper introduces the kissing problem: given a rectangular room with n people in it, what is the most efficient way for each pair of people to kiss each other goodbye? The room is viewed as a set of pixels that form a subset of the integer grid. At most one person can stand on a pixel at a time, and people move horizontally or vertically. In order to move into a pixel in time step t, the pixel must be empty in time step t − 1. The paper gives one algorithm for kissing everyone goodbye.
  1. The algorithm is a 4+o(1)-approximation in a crowded room (i.e., only one unoccupied pixel).
  2. It is a 45+o(1)-approximation for kissing in a comfortable room (i.e., at most half the pixels are empty).
  3. It is a 25+o(1)-approximation for kissing in a sparse room (more than half the pixels are empty) with two people abutting the far walls of the room.
The paper also gives optimal solutions for small cases, found using a heuristic state-space search (IDA*).

15.
Given a conjunctive predicate φ over a distributed execution, this paper gives an algorithm to detect all interval sets, each containing one interval per process, in which the local values satisfy the Definitely(φ) modality. The time complexity of the algorithm is O(n³p), where n is the number of processes and p is a bound on the number of times a local predicate becomes true at any process. The paper also proves that, unlike the Possibly(φ) modality, which admits O(pⁿ) solution interval sets, the Definitely(φ) modality admits O(nᵖ) solution interval sets. The paper further gives an on-line test to determine whether all solution interval sets can be detected in polynomial time under arbitrary fine-grained causality-based modality specifications.

16.
This paper addresses a topic in image processing where the accuracy of images, even in fine detail, is important, and adopts a methodology that treats discrete problems by continuous mathematics and numerical approximation. The key idea is that a pixel of an image at different grey levels can be quantified by a greyness value, which can be regarded as the mean of an integral of continuous functions over a certain region and evaluated approximately by numerical integration. In contrast to traditional integration, however, the integrand has different smoothness in different subregions due to piecewise interpolation and approximation; new treatments of approximate integration and new discrete algorithms for images have therefore been developed. The cycle conversion T⁻¹T of image transformations refers to an image being distorted by a transformation T and then restored to itself by the inverse transformation T⁻¹. The combined algorithms of the discrete techniques proposed in [1–3] achieve only the convergence rates O(1/N) and O(1/N²) of sequential greyness errors, where N is the division number by which a pixel is split into N² subpixels. This paper reports new combined algorithms using spline functions that achieve the high convergence rates O(1/N³) and O(1/N⁴) for digital image transformations under the cycle conversion. Both error analysis and numerical experiments are provided to verify these rates. High convergence rates of discrete algorithms are important for saving CPU time, particularly for multi-greyness images. Moreover, computations for real 256 × 256 images with 256 greyness levels, for which N = 2 is good enough for practical requirements, demonstrate the validity and effectiveness of the new algorithms.

17.
This paper formulates pixel labelling as a series of two-category classifications. Unlike existing techniques, which assign a determinate label to each pixel, we assign a label set to each pixel and shrink the set step by step; determinate labelling is achieved within log₂ n steps, where n is the size of the label set. In each step, we bisect the label set into two subsets and discard the one with the higher cost of assignment to the pixel. Simultaneous labelling of the whole image is carried out by minimizing an energy function via the graph cut algorithm. Based on this bisection approach, we propose a bitwise algorithm for pixel labelling, which sets one bit of each pixel's label in each step. We apply the proposed algorithm to stereo matching and image restoration. Experimental results demonstrate that both good performance and high efficiency are achieved.

18.
We present a randomized parallel list ranking algorithm for distributed memory multiprocessors, using a BSP-type model. We first describe a simple version which requires, with high probability, log(3p) + log ln(n) = Õ(log p + log log n) communication rounds (h-relations with h = Õ(n/p)) and Õ(n/p) local computation. We then outline an improved version that requires, with high probability, only r ≤ (4k+6) log((2/3)p) + 8 = Õ(k log p) communication rounds, where k = min{i ≥ 0 : ln^(i+1) n ≤ ((2/3)p)^(2^(i+1))}. Note that k(n) grows extremely slowly; for practical n and p ≥ 4, the value of k is at most 2. Hence, for a given number of processors p, the number of communication rounds required is, for all practical purposes, independent of n. For n ≤ 1,500,000 and 4 ≤ p ≤ 2048, the number of communication rounds in our algorithm is bounded, with high probability, by 78, but the actual number observed so far is 25 in the worst case. For n ≤ 10^100 and 4 ≤ p ≤ 2048, the number of communication rounds is bounded, with high probability, by 118, and we conjecture that the actual number required will not exceed 50. Our algorithm has a considerably smaller number of communication rounds than the list ranking algorithm used in Reid-Miller's empirical study of parallel list ranking on the Cray C-90 (1), which was, to our knowledge, the fastest list ranking implementation so far. Therefore, we expect that our result will have considerable practical relevance.
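The classic deterministic baseline that such randomized BSP algorithms improve upon is Wyllie-style pointer jumping, which also finishes in O(log n) rounds but performs n work per round. A sequential simulation of the round structure (our own sketch, not the paper's algorithm):

```python
def list_rank(succ):
    """Rank every node of a linked list by pointer jumping.
    succ[i] is the successor of node i; succ[i] == i marks the tail.
    Returns each node's distance to the tail.  Each while-iteration
    simulates one parallel round; O(log n) rounds suffice."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = succ[:]
    changed = True
    while changed:
        changed = False
        new_rank, new_nxt = rank[:], nxt[:]
        for i in range(n):                 # all nodes update "in parallel"
            if nxt[i] != i:
                # rank[i] counts the links from i to nxt[i];
                # jumping doubles the span each round
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]
                if new_nxt[i] != nxt[i]:
                    changed = True
        rank, nxt = new_rank, new_nxt
    return rank
```

The invariant is that rank[i] always equals the number of links between i and its current pointer target, so once every pointer reaches the tail, rank[i] is the list distance.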

19.
A k-out-of-n visual secret sharing scheme (VSSS) solves the visual variant of the k-out-of-n secret sharing problem, in which any k or more of the n participants can reveal the secret by the human visual system alone, without any cryptographic computation. The best pixel expansion of the general k-out-of-n VSSS for c-colored images was c×m, by Yang and Laih [New colored visual secret sharing schemes, Des. Codes Cryptogr. 24 (2000) 325–335], where m is the pixel expansion of an existing binary k-out-of-n VSSS. For the c-colored n-out-of-n scheme, the best pixel expansion was (c−1)2^(n−1) − c + 2 when n is odd and c(c−1)2^(n−2) − c when n is even, by Blundo et al. [Improved schemes for visual cryptography, Des. Codes Cryptogr. 24 (2001) 255–278]. In this paper, we propose a new c-colored k-out-of-n VSSS whose pixel expansion is smaller than those of previous schemes.
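The notion of pixel expansion is easiest to see in the basic binary 2-out-of-2 scheme, where each secret pixel becomes m = 2 subpixels on each share. The sketch below shows the standard construction (not the paper's c-colored scheme): stacking the shares, modeled as bitwise OR, reveals the secret, while either share alone is a uniformly random subpixel pair.

```python
import random

def share_pixel(bit, rng):
    """2-out-of-2 visual secret sharing of one binary pixel with pixel
    expansion 2.  A white (0) pixel gets identical subpixel pairs on the
    two shares, a black (1) pixel gets complementary pairs."""
    pair = rng.choice([(0, 1), (1, 0)])        # random half-black pattern
    share1 = pair
    share2 = pair if bit == 0 else (1 - pair[0], 1 - pair[1])
    return share1, share2

def stack(s1, s2):
    """Physically overlaying transparencies: a subpixel is black (1) if
    it is black on either share."""
    return tuple(a | b for a, b in zip(s1, s2))
```

A white pixel reconstructs as half black (one of two subpixels), a black pixel as fully black, which is exactly the contrast gap the human eye exploits.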

20.
A list decoding algorithm is designed for the first-order binary Reed-Muller codes of length n that reconstructs all codewords located within a ball of radius (n/2)(1 − ε) about the received vector, with a complexity of O(n ln²(min{ε⁻², n})) binary operations.
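A tiny reference decoder makes the setting concrete: RM(1, m) consists of the 2^(m+1) affine Boolean functions on m variables, and for small m one can decode by exhaustive minimum-distance search. This sketch is ours; the paper's list decoder is vastly faster and handles the much larger radius (n/2)(1 − ε).

```python
from itertools import product

def rm1_codewords(m):
    """All codewords of the first-order Reed-Muller code RM(1, m):
    affine functions a0 + a1*x1 + ... + am*xm over GF(2), evaluated at
    every point of {0,1}^m.  Yields (coefficients, codeword) pairs."""
    points = list(product([0, 1], repeat=m))
    for coeffs in product([0, 1], repeat=m + 1):
        a0, rest = coeffs[0], coeffs[1:]
        yield coeffs, tuple((a0 + sum(a * x for a, x in zip(rest, p))) % 2
                            for p in points)

def decode_rm1(received, m):
    """Minimum-distance decoding of RM(1, m) by exhaustive search:
    return the coefficient vector of the codeword nearest to `received`.
    RM(1, m) has minimum distance 2^(m-1), so up to 2^(m-2)-1 bit errors
    are corrected uniquely."""
    best = min(rm1_codewords(m),
               key=lambda cw: sum(r != c for r, c in zip(received, cw[1])))
    return best[0]
```

Flipping a single bit of any RM(1, 3) codeword (length 8, minimum distance 4) is corrected back to the original coefficients.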
