Similar documents
1.
Detecting edges in images from a finite sampling of Fourier data is important in a variety of applications. For example, internal edge information can be used to identify tissue boundaries of the brain in a magnetic resonance imaging (MRI) scan, which is an essential part of clinical diagnosis. Likewise, it can also be used to identify targets from synthetic aperture radar data. Edge information is also critical in determining regions of smoothness so that high resolution reconstruction algorithms, i.e. those that do not “smear over” the internal boundaries of an image, can be applied. In some applications, such as MRI, the sampling patterns may be designed to oversample the low frequency modes while sampling the high frequency modes more sparsely. This type of non-uniform sampling creates additional difficulties in processing the image. In particular, there is no fast reconstruction algorithm, since the FFT is not applicable. However, interpolating such highly non-uniform Fourier data to uniform coefficients (so that the FFT can be employed) may introduce large errors in the high frequency modes, which is especially problematic for edge detection. Convolutional gridding, also referred to as the non-uniform FFT, is a forward method that uses a convolution process to obtain uniform Fourier data so that the FFT can be directly applied to recover the underlying image. Carefully chosen parameters ensure that the algorithm retains accuracy in the high frequency coefficients. Similarly, the convolutional gridding edge detection algorithm developed in this paper provides an efficient and robust way to calculate edges. We demonstrate our technique in one and two dimensional examples.
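The gridding idea described in this abstract can be illustrated with a minimal type-1 NUFFT sketch: nonuniform data are smeared onto an oversampled uniform grid with a Gaussian kernel, transformed with the FFT, and the kernel is then deconvolved. The Gaussian kernel, oversampling ratio, and width parameter below are common textbook choices, not the paper's specific parameters, and production codes would truncate the kernel rather than evaluate it everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonuniform sample locations and strengths.
J, M = 200, 64                      # sources, desired Fourier modes k = -M/2..M/2-1
x = rng.uniform(0, 2 * np.pi, J)    # nonuniform points
c = rng.standard_normal(J) + 1j * rng.standard_normal(J)

# --- Direct (slow) nonuniform DFT: F_k = sum_j c_j e^{-i k x_j} ---
ks = np.arange(-M // 2, M // 2)
F_direct = np.exp(-1j * np.outer(ks, x)) @ c

# --- Convolutional gridding (type-1 NUFFT with a Gaussian kernel) ---
R = 2                               # oversampling ratio
Mr = R * M                          # fine-grid size
tau = 4 * np.pi / M**2              # Gaussian width (a standard choice for R = 2)
xm = 2 * np.pi * np.arange(Mr) / Mr

# Spread each source onto the fine grid with a periodized Gaussian.
# (Production codes truncate the kernel to a few nearest grid points;
#  here we evaluate it everywhere for clarity.)
ftau = np.zeros(Mr, dtype=complex)
for shift in (-2 * np.pi, 0.0, 2 * np.pi):
    ftau += np.exp(-((xm[:, None] - x[None, :] + shift) ** 2) / (4 * tau)) @ c

# Uniform FFT of the smeared data, then deconvolve the kernel:
# the Gaussian's Fourier transform is 2*sqrt(pi*tau)*exp(-k^2*tau).
A = np.fft.fft(ftau)
F_grid = (2 * np.pi / Mr) * A[ks % Mr] * np.exp(ks**2 * tau) / (2 * np.sqrt(np.pi * tau))

err = np.linalg.norm(F_grid - F_direct) / np.linalg.norm(F_direct)
print(f"relative error: {err:.2e}")
```

With these parameters the gridded coefficients agree with the direct nonuniform sum to high accuracy, including in the highest retained modes.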

2.
In several applications, data are collected in the frequency (Fourier) domain non-uniformly, either by design or as a consequence of inexact measurements. The two major bottlenecks for image reconstruction from non-uniform Fourier data are (i) there is no obvious way to perform the numerical approximation, as the non-uniform Fourier data is not amenable to fast transform techniques and resampling the data first to uniform spacing is often neither accurate nor robust; and (ii) the Gibbs phenomenon is apparent when the underlying function (image) is piecewise smooth, an occurrence in nearly every application. Recent investigations suggest that it may be useful to view the non-uniform Fourier samples as Fourier frame coefficients when designing reconstruction algorithms that attempt to mitigate either of these fundamental problems. The inverse polynomial reconstruction method (IPRM) was developed to resolve the Gibbs phenomenon in the reconstruction of piecewise analytic functions from spectral data, notably Fourier data. This paper demonstrates that the IPRM is also suitable for approximating the finite inverse Fourier frame operator as a projection onto the weighted \(L_2\) space of orthogonal polynomials. Moreover, the IPRM can also be used to remove the Gibbs phenomenon from the Fourier frame approximation when the underlying function is piecewise smooth. The one-dimensional numerical results presented here demonstrate that using the IPRM in this way yields a robust, stable, and accurate approximation from non-uniform Fourier data.
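The core step of the IPRM, expanding the unknown in orthogonal polynomials and matching the given Fourier data in a least-squares sense, can be sketched as follows for a globally smooth function. The degree, number of modes, and quadrature order are illustrative choices (not the paper's), and the piecewise case would apply the same idea on each smooth piece.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Target: a smooth (here globally analytic) function on [-1, 1].
f = lambda x: np.exp(x) * np.sin(3 * x)

# "Given" Fourier data: fhat_k = (1/2) * int_{-1}^{1} f(x) e^{-i pi k x} dx,
# computed here by high-order Gauss-Legendre quadrature.
nodes, wts = leggauss(120)
ks = np.arange(-16, 17)
E = np.exp(-1j * np.pi * np.outer(ks, nodes))   # e^{-i pi k x} at the nodes
fhat = 0.5 * E @ (wts * f(nodes))

# IPRM idea: expand f ~ sum_j c_j P_j in Legendre polynomials and match the
# Fourier data: fhat_k = sum_j c_j W_{kj}, W_{kj} = (1/2) int P_j e^{-i pi k x}.
deg = 14
P = np.polynomial.legendre.legvander(nodes, deg)   # column j holds P_j(nodes)
W = 0.5 * E @ (wts[:, None] * P)
c, *_ = np.linalg.lstsq(W, fhat, rcond=None)

# Evaluate the polynomial reconstruction and compare with the truth.
xs = np.linspace(-1, 1, 201)
rec = legval(xs, c.real)
err = np.max(np.abs(rec - f(xs)))
print(f"max reconstruction error: {err:.2e}")
```

Because the function is smooth, the polynomial expansion converges spectrally and there is no Gibbs oscillation in the reconstruction.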

3.
The concentration method of edge detection was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The accuracy and characteristic features of the resulting jump approximation depend on Fourier space “filter” factors known as concentration factors. In this paper, we provide a flexible, iterative framework for the design of these factors. Previously devised concentration factors are shown to be the solutions of specific problem formulations within this new framework. We also provide sample formulations of the procedure applicable to the design of concentration factors for data with missing spectral bands. Several illustrative examples are used to demonstrate the capabilities of the method.
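A minimal sketch of the concentration method itself, using the first-order polynomial concentration factor \(\sigma(s)=\pi s\) (one of the previously devised factors the framework recovers) and a sawtooth with a known jump of height \(\pi\) at \(x=0\):

```python
import numpy as np

# Sawtooth f(x) = (pi - x)/2 on (0, 2*pi): a single jump of height pi at x = 0.
# Its exact Fourier coefficients are fhat_k = 1/(2ik) for k != 0.
N = 64
k = np.concatenate([np.arange(-N, 0), np.arange(1, N + 1)])
fhat = 1.0 / (2j * k)

# Concentration sum: S_N(x) = i * sum_k sgn(k) * sigma(|k|/N) * fhat_k * e^{ikx},
# with the first-order polynomial concentration factor sigma(s) = pi*s.
sigma = np.pi * np.abs(k) / N
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
S = (1j * np.sign(k) * sigma * fhat) @ np.exp(1j * np.outer(k, x))

jump = S.real
print(f"value at the jump (x=0): {jump[0]:.4f}")   # ~ pi
```

The sum concentrates at the discontinuity: it recovers the jump height \(\pi\) at \(x=0\) and is small away from it, apart from the oscillatory sidelobes near the jump that motivate the design framework in the paper.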

4.
We present a new method for estimating the edges in a piecewise smooth function from blurred and noisy Fourier data. The proposed method is constructed by combining the so-called concentration factor edge detection method, which uses a finite number of Fourier coefficients to approximate the jump function of a piecewise smooth function, with compressed sensing ideas. Due to the global nature of the concentration factor method, Gibbs oscillations feature prominently near the jump discontinuities. This can cause the misidentification of edges when simple thresholding techniques are used. In fact, the true jump function is sparse, i.e. zero almost everywhere with non-zero values only at the edge locations. Hence we adopt an idea from compressed sensing and propose a method that uses a regularized deconvolution to remove the artifacts. Our new method is fast, in the sense that it only needs the solution of a single \(l^1\) minimization. Numerical examples demonstrate the accuracy and robustness of the method in the presence of noise and blur.
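The regularized-deconvolution step can be sketched with ISTA (iterative soft thresholding), a standard solver for such \(l^1\) problems; the blur kernel, penalty weight, and iteration count below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Ground truth: a sparse "jump function" -- zero except at a few edge locations.
x_true = np.zeros(n)
x_true[[20, 50, 80]] = [1.0, -0.7, 0.5]

# Known blur (circulant convolution) plus a little noise.
h = np.zeros(n)
h[[0, 1, -1]] = [0.6, 0.2, 0.2]
A = np.array([np.roll(h, i) for i in range(n)]).T   # column i = kernel shifted by i
b = A @ x_true + 0.01 * rng.standard_normal(n)

# ISTA: alternate a gradient step on ||Ax-b||^2/2 with the l1 proximal map
# (soft thresholding), solving min_x ||Ax-b||^2/2 + lam*||x||_1.
lam, t = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
x = np.zeros(n)
for _ in range(500):
    x = soft(x - t * A.T @ (A @ x - b), t * lam)

support = set(np.argsort(np.abs(x))[-3:])
print("recovered edge locations:", sorted(support))
```

The \(l^1\) penalty suppresses the small blurred/noisy values while the deconvolution restores the isolated spikes, so simple inspection of the largest entries recovers the edge locations.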

5.
倪博溢  萧德云 《自动化学报》2009,35(12):1520-1527
Existing identification methods for non-uniformly sampled systems typically handle the non-uniform data through resampling or numerical integration, mostly employ continuous-time rational transfer function models, and, in recursive form, are often restricted to the "missing data" case. This paper studies the more general setting of asynchronously, non-uniformly sampled multivariable systems, described by a continuous-time state-space model. Recursive relations among the model parameters, the parameter gradients, and the system states are derived, yielding a recursive identification algorithm with variable iteration intervals that, at each output sampling instant, updates only those model parameters affected by the current sample. The method applies to arbitrary non-uniformly sampled systems, and multirate systems are covered as a special case. Simulation results show that the proposed method is feasible and effective.

6.
Data of piecewise smooth images are sometimes acquired as Fourier samples. Standard reconstruction techniques yield the Gibbs phenomenon, causing spurious oscillations at jump discontinuities and an overall reduced rate of convergence to first order away from the jumps. Filtering is an inexpensive way to improve the rate of convergence away from the discontinuities, but it has the adverse side effect of blurring the approximation at the jump locations. Conversely, high resolution post processing algorithms are often computationally cost prohibitive and also require explicit knowledge of all jump locations. Recent convex optimization algorithms using \(l^1\) regularization exploit the expected sparsity of some features of the image. Wavelets or finite differences are often used to generate the corresponding sparsifying transform and work well for piecewise constant images. They are less useful when there is more variation in the image, however. In this paper we develop a convex optimization algorithm that exploits the sparsity in the edges of the underlying image. We use the polynomial annihilation edge detection method to generate the corresponding sparsifying transform. Our method successfully reduces the Gibbs phenomenon with only minimal blurring at the discontinuities while retaining a high rate of convergence in smooth regions.
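The key property of the polynomial annihilation transform, namely that it maps every polynomial up to the chosen degree to zero, so a piecewise-polynomial image becomes sparse with support only at the edges, can be seen with the second-order stencil on a uniform grid (a simplified special case of the general method):

```python
import numpy as np

# Piecewise-linear test function on [0,1): two linear pieces, a jump at x = 0.5.
n = 64
xg = np.arange(n) / n
f = np.where(xg < 0.5, 1.0 + 2.0 * xg, -1.0 + 0.5 * xg)

# Second-order polynomial annihilation on a uniform grid reduces to the
# stencil [1, -2, 1]: it annihilates every polynomial of degree <= 1, so the
# transform is exactly zero inside each linear piece and nonzero only in the
# (here two-point) neighbourhood of the edge.
Lf = f[2:] - 2 * f[1:-1] + f[:-2]

edges = np.nonzero(np.abs(Lf) > 1e-12)[0] + 1   # indices on the original grid
print("nonzero rows of the transform:", edges)
```

Unlike plain first differences, higher-order annihilation keeps the transform sparse even when the smooth pieces are not constant, which is exactly the property the convex optimization exploits.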

7.
Generalized predictive control for non-uniformly sampled systems
In this paper, we study digital control systems with non-uniform updating and sampling patterns, which include multirate sampled-data systems as special cases. We derive lifted models in the state-space domain. The main obstacle for generalized predictive control (GPC) design using the lifted models is the so-called causality constraint. Taking into account this design constraint, we propose a new GPC algorithm, which results in optimal causal control laws for the non-uniformly sampled systems. The solution applies immediately to multirate sampled-data systems where rates are integer multiples of some base period.

8.
Spectral reprojection techniques make possible the recovery of exponential accuracy from the partial Fourier sum of a piecewise-analytic function, essentially conquering the Gibbs phenomenon for this class of functions. This paper extends this result to non-harmonic partial sums, proving that spectral reprojection can reduce the Gibbs phenomenon in non-harmonic reconstruction as well as remove reconstruction artifacts due to erratic sampling. We are particularly interested in the case where the Fourier samples form a frame. These techniques are motivated by a desire to improve the quality of images reconstructed from non-uniform Fourier data, such as magnetic resonance (MR) images.

9.
\(L_1\) regularization is widely used in various applications together with a sparsifying transform. In Wasserman et al. (J Sci Comput 65(2):533–552, 2015) the reconstruction of Fourier data with \(L_1\) minimization using the sparsity of edges was proposed (the sparse PA method). With the sparse PA method, the given Fourier data are reconstructed on a uniform grid through convex optimization based on \(L_1\) regularization of the jump function. In this paper, building on the method of Wasserman et al., we propose to use a domain decomposition method to further enhance the quality of the sparse PA method. The main motivation is to minimize the global effect of strong edges in \(L_1\) regularization, which prevents the reconstructed function near weak edges from benefiting fully from the sparse PA method. To this end, we split the given domain into several subdomains and apply \(L_1\) regularization in each subdomain separately. The split function is not necessarily periodic, so we adopt the Fourier continuation method in each subdomain to find Fourier coefficients defined on the subdomain that are consistent with the given global Fourier data. The numerical results show that the proposed domain decomposition method yields sharp reconstructions near both strong and weak edges. The proposed method is particularly suitable when the reconstruction is required only locally.

10.
A mathematical model, for which rigorous methods of statistical inference are available, is described and techniques for image enhancement and linear discriminant analysis of groups are developed. Since the gray values of neighboring pixels in tomographically produced medical images are spatially correlated, the calculations are carried out in the Fourier domain to ensure statistical independence of the variables. Furthermore, to increase the power of statistical tests the known spatial covariance was used to specify constraints in the spectral domain. These methods were compared to statistical procedures carried out in the spatial domain. Positron emission tomography (PET) images of alcoholics with organic brain disorders were compared by these techniques to age-matched normal volunteers. Although these techniques are employed to analyze group characteristics of functional images, they provide a comprehensive set of mathematical and statistical procedures in the spectral domain that can also be applied to images of other modalities, such as computed tomography (CT) or magnetic resonance imaging (MRI).

11.
Hypercomplex Fourier transforms are increasingly used in signal processing for the analysis of higher-dimensional signals such as color images. A main stumbling block for further applications, in particular concerning filter design in the Fourier domain, is the lack of a proper convolution theorem. The present paper develops and studies two conceptually new ways to define convolution products for such transforms. As a by-product, convolution theorems are obtained that will enable the development and fast implementation of new filters for quaternionic signals and systems, as well as for their higher dimensional counterparts.

12.
Compressed sensing (CS) is a technique that exploits a priori knowledge of the sparsity of data in a certain domain to minimize the necessary number of measurements. Based on this idea, this paper proposes a novel synthetic aperture radar (SAR) imaging approach that exploits the sparseness of echo data in the fractional Fourier domain. The effectiveness and robustness of the approach are assessed by numerical experiments under various noise conditions and different measurement matrices. Experimental results show that the images obtained with the CS technique depend on the measurement matrix and have a higher output signal-to-noise ratio than those produced by the traditional pulse compression technique. Finally, simulated and real data are processed, and the results show that the proposed approach is capable of reconstructing target images and effectively suppressing noise.
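A generic sketch of the CS recovery loop, with a random Gaussian measurement matrix standing in for the fractional-Fourier-domain measurements and orthogonal matching pursuit standing in for whichever solver the paper uses:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 150, 3

# Random measurement matrix with unit-norm columns (a stand-in for the
# paper's fractional-Fourier-domain measurement operator).
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

# A k-sparse scene and its noiseless compressed measurements.
x_true = np.zeros(n)
x_true[[7, 33, 71]] = [1.5, -2.0, 1.0]
y = A @ x_true

# Orthogonal matching pursuit: repeatedly pick the unchosen column most
# correlated with the residual, then least-squares refit on the support.
# We take a few more steps than the sparsity; with noiseless data, a
# superset of the true support still yields exact recovery.
support, r = [], y.copy()
for _ in range(2 * k):
    corr = np.abs(A.T @ r)
    corr[support] = 0.0
    support.append(int(np.argmax(corr)))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("largest recovered entries at:", sorted(np.argsort(np.abs(x_hat))[-3:]))
```

Far fewer measurements than unknowns suffice because the scene is sparse; the choice of measurement matrix controls the recovery quality, mirroring the dependence reported in the abstract.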

13.
Signal-processing modules working directly on encrypted data provide an elegant solution to application scenarios where valuable signals must be protected from a malicious processing device. In this paper, we investigate the implementation of the discrete Fourier transform (DFT) in the encrypted domain by using the homomorphic properties of the underlying cryptosystem. Several important issues are considered for the direct DFT as well as the radix-2 and radix-4 fast Fourier algorithms, including the error analysis and the maximum size of the sequence that can be transformed. We also provide computational complexity analyses and comparisons. The results show that the radix-4 fast Fourier transform is best suited for an encrypted domain implementation in the proposed scenarios.
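The homomorphic properties such an encrypted-domain DFT relies on can be demonstrated with a toy textbook Paillier cryptosystem (illustrative, insecure parameters; the paper's scheme and fixed-point encoding are not reproduced here): multiplying ciphertexts adds the plaintexts, and raising a ciphertext to a known constant scales its plaintext, which are exactly the operations needed to evaluate a linear transform such as the DFT on encrypted inputs.

```python
import math
import random

# Textbook Paillier cryptosystem (toy parameters -- NOT secure key sizes).
p, q = 10007, 10009
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)                 # valid decryption helper because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

m1, m2, a = 123, 4567, 11
# Additive homomorphism: E(m1)*E(m2) decrypts to m1 + m2.
print(decrypt((encrypt(m1) * encrypt(m2)) % n2))   # 4690 = m1 + m2
# Scalar homomorphism: E(m)^a decrypts to a*m (mod n).
print(decrypt(pow(encrypt(m1), a, n2)))            # 1353 = a * m1
```

In an encrypted-domain DFT, each output bin is a fixed linear combination of the inputs with quantized twiddle-factor weights, so it can be assembled entirely from these two ciphertext operations; the quantization is what drives the error and sequence-length analysis mentioned in the abstract.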

14.
The distinct feature of data acquisition for magnetic resonance imaging (MRI) is that the data are sampled in the frequency domain instead of the spatial domain. Therefore, the acquired data must be inverse Fourier transformed to generate images. To apply the fast Fourier transform (FFT), the data are usually acquired on rectilinear grids. However, acquiring data on rectilinear grids is not very efficient in MRI. A spiral trajectory, which starts at the origin of the frequency domain and spins out to higher spatial frequency, is more efficient and faster than the conventional method. Since spiral trajectories do not sample on rectilinear grids, the raw data must be re-interpolated onto rectilinear grids prior to the inverse FFT. This re-gridding process is done using a reconstruction program. As the number of platforms on which the program must run grows, the effort required to maintain it becomes prohibitive. This problem can be solved through the platform-independent Java programming language. In this paper, we report on our attempt to implement the spiral MRI reconstruction program in Java. We show that the performance is not significantly affected and that it is practical to use platform-independent reconstruction software.

15.
Conventional trajectory-tracking detection systems for multi-rotor unmanned aerial vehicles (UAVs) suffer from highly dynamic, highly variable tracking data, which degrades overall detection performance. To address this, a multi-rotor UAV trajectory-tracking detection system based on the discrete Fourier transform is designed. The system consists of detection hardware and executive software. The hardware comprises data sensors, a data acquisition module, a data logic rheostat, and a signal component controller module, which together implement the modulated output of the signal component controller. The software uses a discrete algorithm to build a discrete processing routine for the tracking data, applies the discrete Fourier transform to perform a closed-loop conversion of the loop within the tracking-data transmission channel, and implements a Fourier-transform output routine for the tracking data, completing the system design. Comparative experiments show that the proposed system improves the recognition accuracy of multi-rotor UAV tracking data, effectively raises overall tracking efficiency, and is practically applicable.

16.
《Parallel Computing》1988,6(2):225-233
Fast Fourier Transforms are a widely-used and powerful tool for the analysis and solution of many problems. They have been used in such diverse areas as medicine, acoustics, image processing, system design and many other fields. By transforming the data the problem may be simpler, more tractable or more efficiently solved and for many applications (e.g. speech processing) the data may be much more easily understood in the transform domain. Therefore fast algorithms for implementing transforms are vital for any powerful computer. This paper describes the implementation of a Fast Fourier Transform on a 64-node INTEL hypercube and shows how the hypercube architecture may be efficiently used. Usually the FFT is only a part of the solution process and the data on the hypercube have to be arranged in a certain manner for the efficient solution of the whole problem. A common way is for the data to be distributed according to a Gray code, so that neighbouring points in the domain are in neighbouring processors. We present a number of results on Gray codes which characterise a certain family of Gray codes, and show that Fast Fourier Transforms on data distributed among processors according to a Gray code can also be efficiently implemented on the hypercube.
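The Gray-code property the data mapping relies on is easy to check directly: consecutive binary-reflected Gray codewords differ in exactly one bit, so consecutive data points sit on neighbouring hypercube nodes (a sketch of the standard construction, not the specific family characterised in the paper):

```python
def gray(i):
    """Binary-reflected Gray code of i."""
    return i ^ (i >> 1)

d = 6                      # hypercube dimension (64 nodes, as in the paper's machine)
codes = [gray(i) for i in range(2 ** d)]

# Consecutive codewords differ in exactly one bit, so consecutive data
# points land on neighbouring hypercube processors.
hops = [bin(codes[i] ^ codes[i + 1]).count("1") for i in range(len(codes) - 1)]
print("all single-bit transitions:", all(h == 1 for h in hops))
# The sequence is a Hamiltonian cycle: the last code is one hop from the first.
print("cyclic:", bin(codes[-1] ^ codes[0]).count("1") == 1)
```

Since the codes also form a permutation of the node labels, the mapping places every data point on a distinct processor while preserving domain adjacency.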

17.
A new approach for objectively analyzing the aggregation of acetylcholine receptors (AChRs) through power spectrum analysis derived from the fast Fourier transform (FFT) of images has been developed. Presently, detection of AChR aggregates at neuromuscular junctions is not easily accomplished. Though the formation of AChR clusters results in periodic gray-level variations that differ with time, no study has reported correlating these variations with frequency information in the Fourier domain to detect aggregates. To this end, we processed time-lapse images of AChR aggregates' formation on murine myotubes to extract peak values of power spectra. To validate interpretation of the Fourier spectra analysis, a computer routine was developed to semi-automatically count AChR aggregates. We found: (1) logarithmic maxima of Fourier spectra correlated significantly with experimentation time; (2) cluster count correlated significantly with time only after clusters were discernable from images, signifying that this method heavily depended on definitive growth data and thresholding values; (3) exponents of Fourier maxima versus time and cluster count versus time profiles during this phase compared favorably, indicating that both methods were analyzing identical cluster growth rates. Our observations suggest that analysis via FFT power spectrum is sensitive and robust enough to automatically quantify AChR aggregates.
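Why a periodic cluster pattern produces a usable power-spectrum peak can be seen in a one-dimensional toy (the study itself analyzes two-dimensional images): a pattern with a 16-pixel period plus noise yields a dominant bin at frequency n/period.

```python
import numpy as np

# Synthetic 1-D "image line": a periodic pattern (period 16 pixels) plus
# background noise, loosely mimicking regularly spaced receptor clusters.
rng = np.random.default_rng(2)
n, period = 512, 16
x = np.arange(n)
signal = np.cos(2 * np.pi * x / period) + 0.2 * rng.standard_normal(n)

# FFT power spectrum; the pattern shows up as a peak at frequency n/period.
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
peak = int(np.argmax(power[1:])) + 1          # skip the DC bin
print("dominant frequency bin:", peak)         # n // period = 32
```

The peak height tracks the strength of the periodic component, which is why the logarithmic spectral maxima in the study can serve as a proxy for aggregate growth over time.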

18.
Edge detection from Fourier spectral data is important in many applications including image processing and the post-processing of solutions to numerical partial differential equations. The concentration method, introduced by Gelb and Tadmor in 1999, locates jump discontinuities in piecewise smooth functions from their Fourier spectral data. However, as is true for all global techniques, the method yields strong oscillations near the jump discontinuities, which makes it difficult to distinguish true discontinuities from artificial oscillations. This paper introduces refinements to the concentration method to reduce the oscillations. These refinements also improve the results in noisy environments. One technique adds filtering to the concentration method. Another uses convolution to determine the strongest correlations between the waveform produced by the concentration method and the one produced by the jump function approximation of an indicator function. A zero crossing based concentration factor, which creates a more localized formulation of the jump function approximation, is also introduced. Finally, the effects of zero-mean white Gaussian noise on the refined concentration method are analyzed. The investigation confirms that by applying the refined techniques, the variance of the concentration method is significantly reduced in the presence of noise. This work was partially supported by NSF grants CNS 0324957, DMS 0510813, DMS 0652833, and NIH grant EB 025533-01 (AG).

19.
Edge detection is an essential task in image processing. In some applications, such as Magnetic Resonance Imaging, the information about an image is available only through its frequency (Fourier) data. In this case, edge detection is particularly challenging, as it requires extracting local information from global data. The problem is exacerbated when the data are noisy. This paper proposes a new edge detection algorithm which combines the concentration edge detection method (Gelb and Tadmor in Appl. Comput. Harmon. Anal. 7:101–135, 1999) with statistical hypothesis testing. The result is a method that achieves a high probability of detection while maintaining a low probability of false detection.
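The combination of a concentration-type jump approximation with a noise-calibrated threshold can be sketched as follows, using a sawtooth with a known jump of height \(\pi\) at \(x=0\); the test statistic, noise level, and 6-sigma threshold are illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sawtooth with a single jump of height pi at x = 0; exact Fourier data
# fhat_k = 1/(2ik), contaminated with complex Gaussian noise of std s.
N, s = 64, 0.005
k = np.concatenate([np.arange(-N, 0), np.arange(1, N + 1)])
fhat = 1.0 / (2j * k)
fhat += s * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N)) / np.sqrt(2)

# Concentration jump approximation with the factor sigma(s) = pi*s.
sigma = np.pi * np.abs(k) / N
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
S = ((1j * np.sign(k) * sigma * fhat) @ np.exp(1j * np.outer(k, x))).real

# Hypothesis test: under the null (no edge at x), the statistic is Gaussian
# with standard deviation s*sqrt(sum(sigma^2)/2); declare an edge at 6 sigma.
tau = 6 * s * np.sqrt((sigma**2).sum() / 2)
detected = x[np.abs(S) > tau]
print("detections near x = 0:", detected)
```

Thresholding against the null distribution rather than a fixed constant keeps the false-detection probability controlled while the true jump, whose statistic is near \(\pi\), is still flagged.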

20.
Fourier polygons     
Polygons are everywhere, but one place the author didn't expect to see them is in the Fourier transform; he found them there as well. The Fourier transform is an indispensable tool in signal processing. In computer graphics, it helps us understand and cure problems as diverse as jaggies on the edge of polygons, blocky looking textures, and animated objects that appear to jump erratically as they move across the screen. His friend and colleague Alvy Ray Smith recently wrote a memo that demonstrated a surprising interpretation of the Fourier transform. He showed how in some circumstances the Fourier transform looks like nothing more than operations on regular polygons. The article is about that fascinating insight. He starts off by using complex numbers to do geometry and then moves on to the Fourier series, building up to a discussion of the new interpretation.
