Similar Documents (20 results)
1.
《The Imaging Science Journal》2013,61(7):592-600
Abstract

Segmentation is one of the most complicated procedures in image processing and plays an important role in image analysis. In this paper, an improved pixon-based method for image segmentation is proposed. In the proposed algorithm, complex partial differential equations (PDEs) are used as a kernel function to produce the pixonal image. Using this kernel function reduces image noise and prevents over-segmentation when the pixon-based method is applied. Utilising the PDE-based method eliminates unnecessary details and results in fewer pixons, faster performance, and greater robustness against unwanted environmental noise. In the next step, the appropriate pixons are extracted and, finally, the image is segmented using a Markov random field. The experimental results indicate that the proposed pixon-based approach has a reduced computational load and better accuracy compared with other existing pixon-based segmentation techniques. To evaluate the proposed algorithm and compare it with the best existing algorithms, many experiments on standard images were performed. The results indicate that the proposed algorithm is faster than other methods while achieving the highest segmentation accuracy.
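The abstract does not spell out its PDE kernel, but the smoothing stage it describes can be illustrated with a standard Perona-Malik anisotropic diffusion; the sketch below is a generic stand-in, and the iteration count, kappa, and step size are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def pde_smooth(img, n_iter=20, kappa=30.0, dt=0.2):
    """Edge-preserving PDE smoothing (Perona-Malik anisotropic diffusion),
    shown here only as a generic stand-in for the kernel stage that
    precedes pixon extraction."""
    u = img.astype(float).copy()
    g = lambda grad: np.exp(-(grad / kappa) ** 2)   # diffusivity: small across strong edges
    for _ in range(n_iter):
        # differences toward the four neighbours (periodic boundaries for brevity)
        d1 = np.roll(u, -1, axis=0) - u
        d2 = np.roll(u,  1, axis=0) - u
        d3 = np.roll(u, -1, axis=1) - u
        d4 = np.roll(u,  1, axis=1) - u
        u += dt * (g(d1) * d1 + g(d2) * d2 + g(d3) * d3 + g(d4) * d4)
    return u
```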

2.
Abstract

This paper discusses the effect of an image's non-zero local mean value on the mean square estimation error (MSEE) and the steady-state weights of the two-dimensional least mean square (TDLMS) algorithm [1] when used in image processing. It shows that the local mean causes an increase in the MSEE which is proportional to the square of the image's local mean value. This causes the filter to converge to non-optimum weights and the shift from the optimum values is proportional to the square of the image's local mean value. It is shown that this effect can be reduced by normalizing the filter's weights to unity.
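A minimal sketch of a two-dimensional LMS filter with the weight normalization suggested above; the window size and step size are illustrative assumptions, and "unity" is interpreted here as a unit weight sum.

```python
import numpy as np

def tdlms_filter(noisy, desired, win=5, mu=1e-6, normalize=True):
    """Two-dimensional LMS adaptive filter sketch.  With normalize=True the
    weights are rescaled to unit sum after each update, the remedy for the
    local-mean bias discussed in the abstract (window/step are assumptions)."""
    pad = win // 2
    x = np.pad(noisy.astype(float), pad, mode='edge')
    w = np.full((win, win), 1.0 / win**2)        # initial weights, unit sum
    out = np.zeros_like(noisy, dtype=float)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            patch = x[i:i + win, j:j + win]
            y = float(np.sum(w * patch))         # filter output
            e = desired[i, j] - y                # estimation error
            w = w + 2 * mu * e * patch           # LMS weight update
            if normalize:
                w = w / w.sum()                  # constrain weights to unit sum
            out[i, j] = y
    return out, w
```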

3.
邵雪  曾台英  汪祖辉 《包装工程》2016,37(15):40-45
Purpose: Image quality depends not only on distortion but also on the quality of the luminance image, yet no-reference image quality assessment does not account for the contribution of the luminance image to the overall quality score; a luminance threshold effect is therefore introduced to quantify the quality of the luminance image. Methods: The BRISQUE algorithm was improved. Taking fast-fading distortion as an example, experiments were carried out on a library of 50 images obtained after luminance adjustment. Each distorted image was decomposed into an illumination component and a reflectance component; a luminance threshold algorithm was applied to the illumination component (the luminance image) and BRISQUE to the reflectance component, yielding a new no-reference image quality assessment method. Results: The proposed algorithm achieves a Pearson correlation coefficient (PCC) of 0.9982 and a Spearman rank-order correlation coefficient (SROCC) of 0.9741. Conclusion: The experimental data show that the proposed algorithm is more consistent with subjective human visual evaluation than BRISQUE and conforms to human visual perception.
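The reported PCC and SROCC can be computed for any pair of predicted and subjective score vectors with scipy; this small helper is a generic sketch, not the authors' evaluation code.

```python
import numpy as np
from scipy import stats

def correlation_with_subjective(predicted, mos):
    """Pearson (PCC) and Spearman (SROCC) correlation between predicted
    quality scores and subjective opinion scores."""
    predicted = np.asarray(predicted, float)
    mos = np.asarray(mos, float)
    pcc, _ = stats.pearsonr(predicted, mos)
    srocc, _ = stats.spearmanr(predicted, mos)
    return pcc, srocc
```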

4.
The Transformation of Graphic and Image Language in Graphic Design in the New Media Context
Purpose: To study the transition of graphic and image language in graphic design from its traditional static state to motion graphics in the new media context. Methods: By analysing the shift of graphics from static to dynamic, the paper argues that this transition reflects the times and is an inevitable trend in the development of graphic design. With reference to the concept of motion graphics, dynamic logos, dynamic posters, and brand apps are taken as research objects to summarize the characteristics of motion graphics in graphic design practice. Conclusion: In the new media context, the expression of graphic language in graphic design has undergone a revolutionary change from "static" to "dynamic", a change that marks the entry of traditional graphic design into a new era.

5.
For the purpose of ultrasonic nondestructive testing of materials, holography in connection with digital reconstruction algorithms has been proposed as a modern tool to extract crack sizes from ultrasonic scattering data. Defining the typical holographic reconstruction algorithm as the application of the scalar Kirchhoff diffraction theory to backward wave propagation, we demonstrate its general incapability of reconstructing equivalent sources, and hence, geometries of scattering bodies. Only the special case of a planar measurement recording surface, that is to say, a hologram plane, and a planar crack with perfectly rigid boundary conditions parallel to the hologram plane and perpendicular to the incident field yields a nearly perfect correlation between crack size and reconstructed image; the reconstruction algorithm is then referred to as the Rayleigh-Sommerfeld formula; it therefore represents the optimal case matched to that special geometrical situation and, hence, may be interpreted as a quasi-matched spatial filter. Using integral equation theory and physical optics, we compute synthetic holographic data for a linear cracklike scatterer for both plane and spherical wave incidence, the latter case simulating a synthetic aperture impulse echo situation, thus illustrating how the Rayleigh-Sommerfeld algorithm or its Fresnel approximation increasingly fail for cracks inclined to the hologram plane and excited nonperpendicularly. Furthermore, we point out how the physical data recording process may additionally influence the reconstruction accuracy, and, finally, guidelines for a careful and serious application of these holographic reconstruction algorithms are given. The theoretical results are supported by measurements.
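As an illustration of backward wave propagation for the planar hologram geometry discussed above, the following sketch applies the angular-spectrum method to a 1-D recorded field; it is a minimal numerical stand-in rather than the paper's algorithm, and the sampling interval, wavelength, and reconstruction depth are caller-supplied assumptions.

```python
import numpy as np

def backpropagate_line_hologram(field, dx, wavelength, depth):
    """Angular-spectrum backward propagation of a 1-D complex field recorded
    in the hologram plane back to a parallel plane at distance `depth`."""
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)
    # evanescent components (|kx| > k) are suppressed rather than amplified
    kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0))
    spectrum = np.fft.fft(field)
    return np.fft.ifft(spectrum * np.exp(-1j * kz * depth))
```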

6.
The generalized spectral decomposition (GSD) theorem is introduced, and the generalized fundamental stimulus and metameric black are analyzed to show how they convey the valuable features in terms of color information. The proposal can be considered a generalization of Cohen and Kappauf's matrix R theory and its later application in parameric correction by Fairman. The GSD theorem provides a modular model whose arguments can be elaborately set up for high-performance spectral recovery. It is also shown that the methods for spectral decomposition and/or spectral reconstruction proposed by different researchers can be considered special cases of GSD.
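Cohen and Kappauf's matrix R decomposition, which the GSD theorem generalizes, is compact enough to sketch directly; here `cmf` is assumed to be a wavelength-by-3 matrix of colour-matching functions and `spectrum` a sampled reflectance or stimulus vector.

```python
import numpy as np

def fundamental_and_black(cmf, spectrum):
    """Cohen & Kappauf decomposition: R = A (A^T A)^{-1} A^T projects a
    spectrum onto the space spanned by the colour-matching functions;
    fundamental = R s, metameric black = s - fundamental."""
    A = np.asarray(cmf, float)                     # shape (n_wavelengths, 3)
    s = np.asarray(spectrum, float)
    R = A @ np.linalg.inv(A.T @ A) @ A.T           # orthogonal projector (matrix R)
    fundamental = R @ s                            # visually effective component
    black = s - fundamental                        # invisible (metameric black) residual
    return fundamental, black
```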

7.
An SVM-based image reconstruction algorithm for ECT
何世钧  王化祥  周勋 《计量学报》2007,28(2):137-140
Electrical capacitance tomography (ECT) is a process tomography technique based on capacitance sensing. Image reconstruction in ECT is a typical finite-sample nonlinear mapping problem. The support vector machine (SVM), as a small-sample method with strong generalization ability, is regarded as one of the best available theories for small-sample classification problems. An image reconstruction algorithm based on an SVM-driven four-layer neural network is proposed. Simulation results show that the algorithm provides good spatial resolution and generalization ability for three-phase flow image reconstruction.
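As a rough stand-in for the idea of learning the capacitance-to-image mapping with support vector machines, the sketch below uses scikit-learn's SVR; it is not the paper's four-layer network, and the electrode count, image size, kernel settings, and synthetic data are placeholders.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Hypothetical training pairs: normalized capacitance vectors -> pixel grey levels
# (66 independent measurements would correspond to a 12-electrode sensor; 16x16 image).
rng = np.random.default_rng(0)
C_train = rng.random((200, 66))
G_train = rng.random((200, 16 * 16))

# One SVR per output pixel, wrapped for multi-output regression
model = MultiOutputRegressor(SVR(kernel='rbf', C=10.0, epsilon=0.01))
model.fit(C_train, G_train)

image = model.predict(rng.random((1, 66))).reshape(16, 16)   # reconstructed image
```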

8.
Tong (1975) has proposed a procedure for estimating the order of a Markov chain based on Akaike's information criterion (AIC). In this paper, the asymptotic distribution of the AIC estimator is derived and it is shown that the estimator is inconsistent. As an alternative to the AIC procedure, the Bayesian information criterion (BIC) proposed by Schwarz (1978) is shown to be consistent. These two procedures yield different estimated orders when applied to specific samples of meteorological observations. For parameters based on these meteorological examples, the AIC and BIC procedures are compared by means of simulation for finite samples. The results obtained have practical implications concerning whether, in the routine fitting of precipitation data, it is necessary to consider higher than first-order Markov chains.
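A small sketch of the order-selection procedure: fit Markov chains of increasing order to a discrete sequence (e.g. a binary wet/dry precipitation record) and compare AIC and BIC; the state count and maximum order are assumptions, and the order minimizing the chosen criterion is selected.

```python
import numpy as np
from collections import Counter

def markov_order_ic(seq, max_order=3, n_states=2):
    """AIC and BIC for Markov chains of order 0..max_order fitted by
    maximum likelihood to a discrete sequence over `n_states` symbols."""
    seq = list(seq)
    results = {}
    for k in range(max_order + 1):
        trans = Counter(tuple(seq[i:i + k + 1]) for i in range(len(seq) - k))
        ctx = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k))
        loglik = sum(n * np.log(n / ctx[key[:-1]]) for key, n in trans.items())
        n_params = (n_states ** k) * (n_states - 1)   # free transition probabilities
        n_obs = len(seq) - k
        results[k] = {'AIC': -2 * loglik + 2 * n_params,
                      'BIC': -2 * loglik + n_params * np.log(n_obs)}
    return results
```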

9.
At present, people are inclined to use one saliency detection method to cover all the pixels in an image. However, every method has its own limitations, and a single method may not perform well across all image scenes. In this article, we propose a new adaptive framework to detect salient objects. For each pixel in an image, it adaptively selects an appropriate method according to the pixel's contextual relationships. In our framework, an image is characterized by a set of binary maps, which are generated by randomly thresholding the image's initial saliency map. We then utilize the surroundedness cue, which is obtained by a series of operations on the binary maps, to classify all the pixels in an image. Based on these classes, we choose methods to detect salient objects. Extensive experimental results on three benchmark datasets demonstrate that our method performs favorably against 11 state-of-the-art methods.
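The random-thresholding step that produces the set of binary maps can be sketched as follows; the number of maps is an arbitrary choice for illustration, not a value from the article.

```python
import numpy as np

def random_binary_maps(saliency, n_maps=32, seed=None):
    """Characterize an initial saliency map by a set of binary maps, each
    obtained by thresholding at a randomly drawn level."""
    rng = np.random.default_rng(seed)
    lo, hi = float(saliency.min()), float(saliency.max())
    thresholds = rng.uniform(lo, hi, size=n_maps)
    return [(saliency >= t).astype(np.uint8) for t in thresholds]
```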

10.
Contextual compression is an essential part of any medical image compression scheme, since it ensures that no diagnostic information is lost. Although many techniques are available for contextual image compression, there is still a need for an efficient, optimized technique that produces good quality images at lower bit rates. This article presents an efficient contextual compression algorithm using wavelet and contourlet transforms to capture the fine details of the image, along with directional information, to produce good quality at a high Compression Ratio (CR). The 2D discrete wavelet transform, using the simplest Daubechies wavelet (db1, the Haar wavelet), is chosen to obtain the subband coefficients. The approximate coefficients of the higher subbands undergo a contourlet transform employing length-N ladder filters to capture the directional information of the subbands at different scales and orientations. An optimized approach is used for predicting the quantized and normalized subband coefficients, resulting in improved compression performance. The proposed contextual compression approach was evaluated in terms of CR, Peak Signal to Noise Ratio, Feature SIMilarity index, Structure SIMilarity index, and Universal quality (Q) after reconstruction. The results confirm the efficiency of the proposed method over other compression techniques.
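A minimal sketch of the first stage (a one-level 2-D Haar/db1 DWT) together with the PSNR metric mentioned above, using PyWavelets; it omits the contourlet stage and the prediction/quantization optimization described in the abstract.

```python
import numpy as np
import pywt

def haar_subbands(img):
    """One level of the 2-D DWT with the haar (db1) wavelet: approximation
    plus horizontal, vertical and diagonal detail subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    return cA, cH, cV, cD

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio between an original and a reconstruction."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```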

11.
A new assay method for the nondestructive determination of pharmaceutical samples with different concentrations on the basis of near-infrared (NIR) spectral data is presented in this paper. By the proposed method, powerful radial basis function (RBF) networks can be produced based on a genetic algorithm (GA), which is applied to auto-configure the structure of the networks and obtain the optimal network parameters. Akaike's information criterion (AIC) is used to evaluate the fitness of individual networks. The genetic algorithm-radial basis function (GA-RBF) networks therefore have better generalization performance and a simpler network structure. Four different GA-RBF network models based on pretreated spectra (multiplicative scatter correction MSC, standard normal variate SNV, first-derivative and second-derivative spectra) have been established and compared. The obtained GA-RBF networks give robust and satisfactory predictions, and the optimal GA-RBF network, obtained after SNV treatment, is found to provide the best results. It is demonstrated that the proposed GA-RBF method based on NIR spectral data is a valuable tool for quantitative analysis.
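The SNV pretreatment named above is simple enough to show directly; this generic sketch assumes each row of `spectra` is one NIR spectrum.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate pretreatment: each spectrum (row) is centred
    and scaled by its own mean and standard deviation."""
    spectra = np.asarray(spectra, float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```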

12.
《Photographies》2013,6(1):9-28
Twenty‐two years since the arrival of the first consumer digital camera, Western culture is now characterized by ubiquitous photography. The disappearance of the camera inside the mobile phone has ensured that even the most banal moments of the day can become a point of photographic reverie, potentially shared instantly. Supported by the increased affordability of computers, digital storage and access to broadband, consumers are provided with new opportunities for the capture and transmission of images, particularly online where snapshot photography is being transformed from an individual to a communal activity. As the digital image proliferates online and becomes increasingly delivered via networks, numerous practices emerge surrounding the image's transmission, encoding, ordering and reception. Informing these practices is a growing cultural shift towards a conception of the Internet as a platform for sharing and collaboration, supported by a mosaic of technologies termed Web 2.0. In this article we attempt to delineate the field of snapshot photography as this practice shifts from primarily being a print‐oriented to a transmission‐oriented, screen‐based experience. We observe how the alignment of the snapshot with the Internet results in the emergence of new photographies in which the photographic image interacts with established and experimental media forms – raising questions about the ways in which digital photography is framed institutionally and theoretically.

13.

Methods of noisy image filtration using wavelet transforms with real and complex basis sets have been compared. It is shown that the use of a complex wavelet transform provides more effective filtration and admits automatic optimization of the filter parameters. Optimized choice of the threshold level during filtration based on a complex wavelet transform significantly decreases the error of image reconstruction as compared to that achieved with a standard method of discrete wavelet transform employing basis sets of the Daubechies wavelet family.

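For reference, a real-valued DWT denoising baseline of the kind the complex-wavelet method is compared against can be sketched with PyWavelets; the wavelet, decomposition level, and universal-threshold rule are conventional choices, not those of the paper.

```python
import numpy as np
import pywt

def dwt_denoise(img, wavelet='db4', level=2):
    """Real-valued discrete wavelet denoising: soft-threshold the detail
    coefficients with a universal threshold estimated from the finest
    diagonal subband."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(img.size))          # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode='soft') for c in details)
        for details in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```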

14.
Taking the Letpadaung copper mine in Myanmar as the study object, a method for measuring open-pit blasting dust was investigated. A digital camera was used to photograph the blasting dust, and grey-level data were obtained from dust images at different concentrations. By measuring actual dust concentrations, a mathematical relationship between dust concentration and image grey value was established; analysis of the grey-level features of the dust images yielded the spatial distribution of dust concentration, and the total amount of airborne blasting dust over the blast area at a given moment after detonation was calculated. The results show that the average dust concentration within 96 s after detonation, measured 50 m to the side of the blast area, was 1602 mg/m³; the proportionality coefficient k between dust concentration and image grey value is 9.117 mg/m³; at 30 s after detonation the concentration of the dust cloud decreases gradually from the inside outward, drops sharply at the edges, and is approximately uniform across the width; and the total airborne blasting dust at 30 s after detonation was 864.91 kg.
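The linear grey-level-to-concentration model can be sketched as below; k = 9.117 mg/m³ per grey level is taken from the abstract, while `cell_volume` (the air volume represented by each pixel) is a hypothetical calibration parameter that depends on the imaging geometry.

```python
import numpy as np

def dust_total_mass(gray, k=9.117, cell_volume=1.0):
    """Convert a dust-cloud grey-level image to a concentration field via
    c = k * g (mg/m^3 per grey level) and integrate over the volume
    represented by each pixel to estimate total airborne dust mass (kg)."""
    concentration = k * gray.astype(float)            # mg/m^3 per pixel
    total_mg = np.sum(concentration * cell_volume)    # cell_volume in m^3 per pixel
    return total_mg / 1e6                             # mg -> kg
```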

15.
The W-transform is constructed by relaxing the orthogonality condition of the orthodox wavelet transform, and this technique is applied to image compression. The implemented algorithm handles images with a wide range of characteristics effectively.

16.
简献忠  张雨墨  王如志 《包装工程》2020,41(11):239-245
Purpose: To address the long reconstruction time and low reconstructed image quality of traditional compressed-sensing image reconstruction methods, a compressed-sensing image reconstruction method based on a generative adversarial network (GAN) is proposed. Methods: Following the GAN idea, a deep-learning model is designed consisting of a discriminator that performs sparse sampling and a generator that performs image reconstruction; a new loss function composed of an adversarial loss and a reconstruction loss is used to optimize the network parameters and complete the compression and reconstruction process. Results: Experiments show that at a low sampling rate of 12.5% the proposed method requires 0.009 s for reconstruction, and its peak signal-to-noise ratio (PSNR) is 10-12 dB higher than that of the commonly used OMP, CoSaMP, SP, and IRLS algorithms. Conclusion: The proposed method reconstructs images quickly and still achieves high-quality reconstruction at low sampling rates.
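A hedged sketch of a combined generator objective of the kind described (reconstruction loss plus adversarial loss), written in PyTorch; the MSE/BCE loss forms and the weighting factor are assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def generator_loss(x_rec, x_true, d_logits_fake, lambda_adv=1e-3):
    """Generator objective combining a pixel-wise reconstruction loss with an
    adversarial loss that pushes the discriminator's score on the
    reconstruction toward 'real'."""
    rec_loss = F.mse_loss(x_rec, x_true)
    adv_loss = F.binary_cross_entropy_with_logits(
        d_logits_fake, torch.ones_like(d_logits_fake))
    return rec_loss + lambda_adv * adv_loss
```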

17.
A three-dimensional tomographic reconstruction algorithm for an absorptive perturbation in tissue is derived. The input consists of multiple two-dimensional projected views of tissue that is backilluminated with diffuse photon density waves. The algorithm is based on a generalization of the projection-slice theorem and consists of depth estimation, image deconvolution, filtering, and backprojection. The formalism provides estimates of the number of views necessary to achieve a given spatial resolution in the reconstruction. The algorithm is demonstrated with data simulated to mimic the absorption of a contrast agent in human tissue. The effects of noise and uncertainties in the depth estimate are explored.

18.
This paper describes the use of Computational Fluid Dynamics (CFD) and mathematical optimization techniques to minimize pollution due to industrial sources like stacks. The optimum placement of a new pollutant source (e.g. a new power plant with its stacks) depends on many parameters. These include stack height, stack distance from surrounding populated areas, barriers, local meteorological conditions, etc. As an experimental approach is both time‐consuming and costly, use is made of numerical techniques. Using CFD without optimization on a trial‐and‐error basis, however, does not guarantee optimal solutions. A better approach, that until recently has been too expensive, is to combine CFD with mathematical optimization techniques, thereby incorporating the influence of the variables automatically. The current study investigates a simplified two‐dimensional case of the minimisation of pollutant stack distance to a street canyon with or without barrier for a given maximum ground‐level concentration of pollutants in a street canyon. Two to five design variables are considered. The CFD simulation uses the STAR‐CD code with the RNG k-ε turbulence model. Making use of initial field restarts drastically reduces CFD solution time. The optimization is carried out by means of Snyman's DYNAMIC‐Q method, which is specifically designed to handle constrained problems where the objective or constraint functions are expensive to evaluate. The paper illustrates how the parameters considered influence the stack placement and how these techniques can be used by the environmental engineer to perform impact studies of new pollutant sources. Copyright © 1999 John Wiley & Sons, Ltd.

19.
This paper presents a model for calculating the optimal cutting feed rate, spindle speed, and periodic control interval for a standalone cutting machine. The optimal cutting conditions are determined for the maximum expected 'profit rate' criterion, under the assumption that the Normal distribution function represents the tool-life distribution. When the cutting operation and the load/unload process are performed automatically without any supervisor, it is not necessary to employ a full-time operator for the machine. Thus, we used the Periodic Control Strategy, under which the operator attends to the cutting machine only at predefined calendar times. We have developed an approximate model that was studied using simulations. The model appears to be very efficient.

20.
常敏  陈果  韩帅 《包装工程》2020,41(15):239-244
Purpose: To study image compression and reconstruction using deep learning assisted by a Laplacian pyramid. Methods: A convolutional neural network is used to extract the main features of the image, bicubic interpolation is used to reduce the feature size, and a Laplacian pyramid is used to build a hierarchical structure, so that the image size is progressively reduced to achieve compression. On the reconstruction side, convolution operations and an upsampling process restore the image and produce the reconstructed result. Results: Validation was carried out on the Set5 and Set14 datasets from Bell Labs in France, using a two-level pyramid, i.e. at a high compression ratio of 16x. The results show that, in subjective evaluation, the deep-learning method outperforms PCA, DCT, and SVD in clarity and fidelity, while in objective evaluation the proposed method achieves the best standard deviation (52.73) and information entropy (7.44), higher than PCA's 49.70 and 7.38; the SVD and DCT transforms reach only 48.69 and 49.02 in standard deviation, far below the proposed method, and their image entropies of 7.34 and 7.35 are below the proposed 7.44. Conclusion: Designing a convolutional neural network around a Laplacian pyramid structure for image compression and reconstruction achieves good results.
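A minimal sketch of two building blocks mentioned above, a Laplacian pyramid and the information-entropy metric, using OpenCV and NumPy; the CNN feature extractor and the learned reconstruction path are omitted, and image dimensions are assumed divisible by 2 to the number of levels.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=2):
    """Build a `levels`-level Laplacian pyramid: band-pass residuals plus a
    small low-resolution top image, which together allow reconstruction."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)     # band-pass residual at this level
        current = down
    pyramid.append(current)              # low-resolution top of the pyramid
    return pyramid

def entropy(img):
    """Shannon information entropy of an 8-bit image, one of the objective
    metrics reported above."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```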
