Similar Documents
20 similar documents retrieved (search time: 421 ms)
1.
Super-resolution (SR) software-based techniques aim at generating a final image by combining several noisy, lower-resolution frames of the same scene. A comparative study on high-resolution high-angle annular dark field images of InAs/GaAs QDs has been carried out in order to evaluate the performance of the SR technique. The resulting SR images present enhanced resolution and higher signal-to-noise ratio (SNR) and sharpness relative to the experimental images. In addition, SR is also applied in the field of strain analysis using digital image processing applications such as geometrical phase analysis and peak pairs analysis. The precision of the strain mappings can be improved when SR methodologies are applied to experimental images.
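The frame-combining idea behind such SR techniques can be illustrated with a minimal sketch: plain averaging of pre-registered noisy frames (the synthetic scene, frame count and noise level are illustrative assumptions; real SR methods additionally perform sub-pixel registration and upsampling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth scene and 16 noisy, pre-registered frames of it.
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
frames = [scene + rng.normal(0.0, 0.2, scene.shape) for _ in range(16)]

# Simplest combination step: average the registered frames.
combined = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - scene)
noise_combined = np.std(combined - scene)
# Averaging n frames suppresses uncorrelated noise by roughly sqrt(n).
gain = noise_single / noise_combined
print(gain)
```

With 16 frames the noise standard deviation drops by about a factor of 4, which is the SNR improvement the combination step alone contributes.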

2.
A method to measure the degree of co-localization of objects in confocal dual-colour images has been developed. The analysis produces two coefficients that represent the fraction of co-localizing objects in each component of a dual-channel image. Generating test objects with a Gaussian intensity distribution at well-defined positions in both components of dual-channel images allowed an accurate investigation of the reliability of the procedure. To this end, the co-localization coefficients were first determined before the image was degraded with background, cross-talk and Poisson noise. These synthetic degradations mimic the sources of deterioration encountered in practical confocal imaging: dark current, non-specific binding and cross-reactivity of fluorescent probes, optical cross-talk and photon noise. The degraded images were then restored by filtering and cross-talk correction. The co-localization coefficients of the restored images were not significantly different from those of the original undegraded images. Finally, we tested the procedure on images of real biological specimens. The results of these tests correspond with data found in the literature. We conclude that the co-localization coefficients can provide relevant quantitative information about the positional relation between biological objects or processes.
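Coefficients of this kind can be sketched in the style of the Manders co-localization coefficients (the synthetic two-channel image and the 0.5 thresholds below are illustrative assumptions, not the paper's exact definition):

```python
import numpy as np

# Two synthetic channels: one co-localizing object plus one
# channel-specific object in each channel.
ch1 = np.zeros((100, 100))
ch2 = np.zeros((100, 100))
ch1[10:30, 10:30] = 1.0          # co-localizing object (both channels)
ch2[10:30, 10:30] = 1.0
ch1[50:70, 50:70] = 1.0          # channel-1-only object
ch2[50:70, 10:30] = 1.0          # channel-2-only object

# Manders-style coefficients: fraction of each channel's intensity that
# falls where the *other* channel is above its threshold.
t1, t2 = 0.5, 0.5
M1 = ch1[ch2 > t2].sum() / ch1.sum()
M2 = ch2[ch1 > t1].sum() / ch2.sum()
print(M1, M2)   # each channel has one of its two equal objects co-localizing
```

Because exactly one of the two equal-sized objects in each channel overlaps the other channel, both coefficients come out at 0.5.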

3.
A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed, based on an autocorrelation Levinson–Durbin recursion (ACLDR) model. To test the performance of this technique, an SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are computed, and a signal spectrum is formed from the image autocorrelation function. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model achieves higher accuracy in SNR estimation.
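The Levinson–Durbin recursion at the heart of such a model can be sketched as follows (a textbook implementation applied to a synthetic AR(1) autocorrelation sequence, not the authors' ACLDR code):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for AR prediction coefficients
    from an autocorrelation sequence r[0..order].  Returns (coeffs,
    residual power); the residual power is what noise/SNR estimates
    are built on."""
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - np.dot(a[:i], r[i:0:-1])
        k = acc / err                    # reflection coefficient
        prev = a[:i].copy()
        a[:i] = prev - k * prev[::-1]    # update earlier coefficients
        a[i] = k
        err *= (1.0 - k * k)             # shrink the prediction error
    return a, err

# Autocorrelation of a unit-power AR(1) process with coefficient 0.8.
r = 0.8 ** np.arange(5)
coeffs, resid = levinson_durbin(r, 2)
print(coeffs, resid)   # recovers [0.8, 0.0] with residual power 0.36
```

For an AR(1) process the recursion recovers the generating coefficient exactly and reports the residual (innovation) power 1 − 0.8² = 0.36.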

4.
To ensure the relative radiometric quality of aerial remote-sensing image products, a method is proposed for correcting the non-uniformity of CMOS (complementary metal-oxide-semiconductor) images read out column-wise through two channels. Taking the Fairchild CMOS detector CIS2521F as an example, laboratory integrating-sphere observations were used to study the image non-uniformity caused by dark-current noise, mean grey level, two-channel output and other factors. Based on the integrating-sphere data, a two-point linear method was applied to correct the column-stripe noise introduced by the column-wise amplified readout, and the radiometric difference caused by the inconsistent response of the two channels was then corrected by optimizing the grey-level difference statistics near the stitching line. Experiments show that single-channel non-uniformity correction reduced the mean non-uniformity of the integrating-sphere images from 4.4 to 2.4, and two-channel correction eliminated the visible difference between the two channels. After non-uniformity correction, the raw aerial remote-sensing images have uniform grey levels and meet the requirements of remote-sensing image interpretation.
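The two-point linear correction for column striping can be sketched with a toy sensor model (the per-column gains and offsets and the radiance levels below are hypothetical stand-ins for the integrating-sphere calibration described above):

```python
import numpy as np

rng = np.random.default_rng(2)
rows, cols = 64, 8

# Toy sensor: each column amplifier applies its own gain and offset,
# which is what produces column-stripe non-uniformity.
gain = rng.uniform(0.8, 1.2, cols)
offset = rng.uniform(5.0, 15.0, cols)

def acquire(radiance):
    """Sensor model: per-column gain and offset applied to the scene."""
    return radiance * gain + offset

# Two calibration points, as from integrating-sphere observations:
dark = acquire(np.full((rows, cols), 0.0))     # dark frame (radiance 0)
flat = acquire(np.full((rows, cols), 100.0))   # uniform bright frame

# Two-point linear correction: recover per-column gain and offset.
g = (flat - dark).mean(axis=0) / 100.0
o = dark.mean(axis=0)

raw = acquire(np.full((rows, cols), 60.0))     # a uniform test scene
corrected = (raw - o) / g
print(corrected.std())   # column striping is removed (std near 0)
```

With a noiseless model the two calibration frames determine the per-column response exactly, so a uniform scene comes out perfectly flat; with real data the residual striping is limited by noise in the calibration frames.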

5.
Background and noise impair image quality by affecting resolution and obscuring image detail in the low-intensity range. Because background levels in unprocessed confocal images are frequently at about 30% of maximum intensity, colocalization analysis, a typical segmentation process, is limited to high-intensity signal and prone to noise-induced, false-positive events. This makes suppression or removal of background crucial for this kind of image analysis. This paper examines the effects of median filtering and deconvolution, two image-processing techniques enhancing the signal-to-noise ratio (SNR), on the results of colocalization analysis in confocal data sets of biological specimens. The data show that median filtering can improve the SNR by a factor of 2. The technique successfully eliminates noise-induced colocalization events. However, because filtering recovers voxel values from the local neighbourhood, false-negative ('dissipation' of signal intensity below the threshold value) as well as false-positive ('fusion' of noise with low-intensity signal, resulting in above-threshold intensities) results can be generated. In addition, filtering involves the convolution of an image with a kernel, a procedure that inherently impairs resolution. Image restoration by deconvolution avoids both of these disadvantages. Such routines calculate a model of the object, considering various parameters that impair image formation, and are able to suppress background down to very low levels (< 10% of maximum intensity, yielding an SNR improved by a factor of 3 compared with raw images). This makes additional objects in the low-intensity but high-frequency range available to analysis. In addition, removal of noise and of distortions induced by the optical system results in improved resolution, which is of critical importance for objects of near-resolution size. The technique is, however, sensitive to overestimation of the background level.
In conclusion, colocalization analysis is improved more by deconvolution than by filtering. This applies especially to specimens characterized by small object size and/or low intensities.
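How median filtering raises the SNR can be illustrated with a minimal 2-D sketch (the synthetic image and the impulse-noise model are assumptions for illustration; the paper's data are confocal volumes):

```python
import numpy as np

rng = np.random.default_rng(3)

# Smooth synthetic image plus sparse salt-and-pepper noise.
y, x = np.mgrid[0:64, 0:64]
clean = np.sin(x / 10.0) * np.cos(y / 10.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([-2.0, 2.0], mask.sum())

def median3x3(img):
    """3x3 median filter built by stacking the 9 shifted neighbourhoods."""
    p = np.pad(img, 1, mode='edge')
    stack = [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(stack, axis=0)

filtered = median3x3(noisy)
snr = lambda img: clean.var() / np.mean((img - clean) ** 2)
print(snr(noisy), snr(filtered))   # filtering raises the SNR markedly
```

Isolated impulses never reach the window median, so they are removed almost completely, while the smooth signal passes through with only small neighbourhood-induced bias — the same mechanism that produces the 'dissipation'/'fusion' artefacts the abstract describes at intensity thresholds.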

6.
New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000 × 21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy; (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and to several imaging modalities; (3) can process large data sets in a timely manner; (4) has a low memory footprint; and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meets all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not need re-adjustment over time (requirement 5). We present a novel, empirically derived image gradient threshold selection method for separating foreground and background pixels that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 megapixels to 850 megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescence.
Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and differential interference contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17 479 images. The method is implemented as an open-source ImageJ plugin as well as a standalone executable that can be downloaded from https://isg.nist.gov/ .
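The gradient-thresholding idea can be sketched as follows. This is a simplified stand-in for EGT: a fixed percentile rule replaces EGT's empirically derived threshold selection, and the image is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic image: a bright disk ("colony") on a dim, noisy background.
y, x = np.mgrid[0:128, 0:128]
image = 10.0 + 40.0 * ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2)
image += rng.normal(0.0, 1.0, image.shape)

# Gradient-magnitude thresholding (hypothetical 95th-percentile rule;
# the real EGT method picks the threshold from the gradient histogram).
gy, gx = np.gradient(image)
grad = np.hypot(gx, gy)
threshold = np.percentile(grad, 95)
edges = grad > threshold

# High-gradient pixels concentrate on the disk boundary, not the
# flat foreground/background regions.
dist = np.hypot(x - 64, y - 64)
on_boundary = np.abs(dist - 30) < 2
print(edges[on_boundary].mean(), edges[~on_boundary].mean())
```

The thresholded gradient picks out object boundaries; a full segmentation pipeline would then fill the enclosed regions and remove small noise components.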

7.
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. SNR estimation based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood with first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method produces better estimation accuracy than the other three existing methods; according to the SNR results obtained from the experiment, the NLLSR method yields an SNR estimation error of less than about 1% compared with the other three existing methods.

8.
The effect of shot noise and of emission noise due to materials with different emission properties was simulated. Local variations in emission properties affect the overall signal-to-noise ratio (SNR) of the scanning electron microscope image. When emission noise is assumed to be absent, the image SNRs for silicon and gold on a black background are identical, because only shot noise in the primary beam affects the SNR, irrespective of the assumed noiseless secondary electron or backscattered electron emission processes. Adding secondary emission noise degrades the SNR. Materials with a higher secondary electron yield (δ) or backscattered electron yield (η) give rise to a higher SNR. For images formed from two types of material, the contrast of the image is lower; the reduction in image signal reduces the overall image SNR. As expected, large differences in δ or η give rise to higher-SNR images.
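The shot-noise part of this dependence can be reproduced with a minimal Poisson simulation (the dose and yield values are hypothetical; the point is only the √(dose × yield) scaling of SNR):

```python
import numpy as np

rng = np.random.default_rng(5)

# Poisson shot-noise model: detected electrons per pixel fluctuate
# around (primary electron dose) x (emission yield).
dose = 100                        # mean primary electrons per pixel
snrs = []
for yield_ in (0.1, 0.5, 1.0):    # illustrative values of delta (or eta)
    detected = rng.poisson(dose * yield_, size=100_000)
    snrs.append(detected.mean() / detected.std())
print(snrs)   # SNR grows as sqrt(dose * yield): roughly 3.2, 7.1, 10
```

For a Poisson process mean/std = √mean, so doubling the emission yield raises the SNR by √2 — the same trend as the simulated higher-yield materials in the abstract.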

9.
Vignetting of microscopic images affects both the visual impression of the images and any image analysis applied to them. High-throughput screening in particular places high demands on automated image analysis. In our work we focused on fluorescent samples and found that two profiles (background and foreground) must be estimated for each imaging channel to achieve a sufficiently flat image after correction. We have developed a method that runs completely unsupervised on a wide range of assays. By adding a reliable internal quality control we mitigate the risk of introducing artefacts into sample images through correction. The method requires hundreds of images to estimate the foreground profile, thus limiting its application to high-throughput screening, where this requirement is fulfilled in routine operation.
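Why estimating the foreground profile needs many images can be sketched as follows (a toy multiplicative-vignetting model with averaging as the profile estimator; the real method's profile estimation and quality control are more involved):

```python
import numpy as np

rng = np.random.default_rng(6)

# Static multiplicative vignetting profile (bright centre, dark corners).
y, x = np.mgrid[0:64, 0:64]
profile = 1.0 - 0.5 * ((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 32 ** 2)

# Hundreds of images of varying brightness, as in routine screening:
# averaging them cancels per-image content and leaves the shading profile.
images = [profile * rng.uniform(50, 150) + rng.normal(0.0, 1.0, (64, 64))
          for _ in range(500)]
estimate = np.mean(images, axis=0)
estimate /= estimate.max()

corrected = images[0] / estimate
flatness = lambda img: img.std() / img.mean()
print(flatness(images[0]), flatness(corrected))  # corrected is much flatter
```

With only a handful of images the average would still carry sample structure; hundreds of frames are needed before the static profile dominates, which is the limitation to high-throughput settings noted above.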

10.
K. S. Sim, M. E. Nia, C. P. Tso, Scanning, 2013, 35(3): 205–212
A number of techniques have been proposed during the last three decades for noise variance and signal-to-noise ratio (SNR) estimation in digital images. While some methods have shown reliability and accuracy in SNR and noise variance estimation, other methods depend on the nature of the images and perform well only on a limited number of image types. In this article, we demonstrate the accuracy and efficiency of the image noise cross-correlation estimation model, compared with other existing estimators, when applied to different types of scanning electron microscope images.

11.
Automated microscopy system for mosaic acquisition and processing
An automatic mosaic acquisition and processing system for a multiphoton microscope is described for imaging large expanses of biological specimens at or near the resolution limit of light microscopy. In a mosaic, a larger image is created from a series of smaller images acquired systematically across a specimen. Mosaics allow wide-field views of biological specimens to be acquired without sacrificing resolution, providing detailed views of specimens within their context. The system is composed of a fast-scanning multiphoton confocal microscope fitted with a motorized, high-precision stage and custom-developed software for automatic image acquisition, image normalization, alignment and stitching. Our current capabilities allow us to acquire data sets comprising thousands to tens of thousands of individual images per mosaic. The large number of individual images involved in creating a single mosaic necessitated software development to automate both the mosaic acquisition and processing steps. In this report, we describe the methods and challenges involved in the routine creation of very large scale mosaics from brain tissue labelled with multiple fluorescent probes.
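The alignment-and-stitch step at the core of mosaic assembly can be sketched in 1-D (a correlation search for the tile offset on a synthetic signal; real stitching works on 2-D tiles with normalized cross-correlation and blending):

```python
import numpy as np

rng = np.random.default_rng(11)

# A 1-D "specimen" imaged as two overlapping tiles (50-sample overlap).
scene = rng.standard_normal(300)
tile_a, tile_b = scene[:180], scene[130:]

# Alignment: slide the start of tile_b along tile_a and keep the offset
# with the highest correlation score.
probe = tile_b[:50]
scores = [np.dot(tile_a[s:s + 50], probe) for s in range(tile_a.size - 49)]
offset = int(np.argmax(scores))

# Stitch: keep tile_a up to the seam, then append tile_b.
mosaic = np.concatenate([tile_a[:offset], tile_b])
print(offset, np.allclose(mosaic, scene))
```

Because the overlap region matches exactly, the correlation peaks at the true offset (130) and the stitched signal reproduces the original scene.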

12.
Second-harmonic generation (SHG) microscopy has gained popularity because of its ability to perform submicron, label-free imaging of noncentrosymmetric biological structures, such as fibrillar collagen in the extracellular matrix of various organs, with high contrast and specificity. Because SHG is a two-photon coherent scattering process, it is difficult to define a point spread function (PSF) for this modality. Hence, compared with incoherent two-photon processes like two-photon fluorescence, it is challenging to apply the various PSF-engineering methods to bring the spatial resolution close to the diffraction limit. Using a synthetic PSF and an advanced maximum likelihood estimation (AdvMLE) deconvolution algorithm, we demonstrate restoration of the spatial resolution in SHG images to closer to the theoretical diffraction limit. The AdvMLE algorithm adaptively and iteratively develops a PSF for the supplied image and succeeds in improving the signal-to-noise ratio (SNR) for images in which the SHG signals derive from various sources, such as collagen in tendon and myosin in heart sarcomere. Approximately 3.5 times improvement in SNR is observed for tissue images at depths of up to ~480 nm, which helps reveal the underlying helical structures in collagen fibres with an ~26% improvement in the amplitude contrast of a fibre pitch. Our approach could be adapted to noisy, low-resolution modalities such as micro/nano CT and MRI, improving the precision of diagnosis and treatment of human diseases.
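The flavour of MLE deconvolution can be conveyed with the classic Richardson–Lucy iteration on a 1-D toy problem with a known Gaussian PSF (the paper's AdvMLE algorithm additionally estimates the PSF adaptively; peak positions and PSF width here are arbitrary):

```python
import numpy as np

# 1-D toy problem: two close peaks blurred by a Gaussian PSF.
n = 64
truth = np.zeros(n)
truth[28] = truth[36] = 100.0
xs = np.arange(-8, 9)
psf = np.exp(-xs ** 2 / (2 * 3.0 ** 2))
psf /= psf.sum()
blurred = np.convolve(truth, psf, mode='same')

def richardson_lucy(data, psf, iters):
    """Classic Richardson-Lucy MLE deconvolution (1-D, known PSF)."""
    est = np.full_like(data, data.mean())
    flipped = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')
        ratio = data / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, flipped, mode='same')
    return est

restored = richardson_lucy(blurred, psf, 500)
# The two peaks, merged into a broad bump in the blurred data,
# separate again after deconvolution.
print(blurred[32] / blurred[28], restored[32] / restored[28])
```

In the blurred data the dip between the peaks is shallow; after the iterations the valley between the restored peaks deepens markedly, the resolution-recovery effect the abstract reports for fibre substructure.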

13.
The presence of systematic noise in images from high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate its performance at two levels: (a) the Z′-factor, to evaluate the effect of the correction on a univariate readout representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments.

14.
A new noise-reduction technique for scanning electron microscope (SEM) images is developed, based on cubic spline interpolation with Savitzky–Golay smoothing and a weighted least-squares error filter. A diversity of sample images was captured, and the performance is found to be better than that of the moving-average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all data required for processing obtained from a single image. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and an estimate of the corresponding original autocorrelation. In test cases involving different images, the efficiency of the developed noise-reduction filter proves significantly better than that of the other methods. With an appropriate choice of scan rate, noise can be reduced efficiently in real-time SEM images without introducing corruption or increasing scanning time.
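The Savitzky–Golay smoothing component can be sketched from first principles: a least-squares polynomial fit in each window, which reduces to a fixed convolution kernel (a standard construction, not the authors' combined spline/weighted filter; the window length, polynomial order and test signal are illustrative):

```python
import numpy as np

def savgol_coeffs(window, order):
    """Savitzky-Golay smoothing weights: fit a polynomial of the given
    order in each window by least squares and evaluate it at the
    window centre."""
    half = window // 2
    A = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    # Row 0 of the pseudo-inverse gives the fitted polynomial's value
    # at x = 0, i.e. the smoothed centre sample.
    return np.linalg.pinv(A)[0]

rng = np.random.default_rng(8)
x = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(x)
noisy = clean + rng.normal(0.0, 0.2, x.size)

w = savgol_coeffs(11, 3)
smoothed = np.convolve(noisy, w[::-1], mode='same')

inner = slice(10, -10)   # ignore edge effects
rmse = lambda a: np.sqrt(np.mean((a[inner] - clean[inner]) ** 2))
print(rmse(noisy), rmse(smoothed))
```

Unlike a plain moving average, the polynomial fit preserves local curvature (peaks are not flattened), which is why Savitzky–Golay smoothing is attractive for signal-preserving noise reduction.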

15.
H. Lei, X. Hu, P. Zhu, X. Chang, Y. Zeng, C. Hu, H. Li, X. Hu, Journal of Microscopy, 2015, 260(1): 100–106
Three-dimensional particle tracking in biological systems is a quickly growing field, and many techniques providing tracking capabilities have been developed. Digital in-line holographic microscopy is a valuable technique for particle tracking; however, speckle noise, out-of-focus signal and the twin image impair tracking performance. Here, an adaptive noise-reduction method based on bidimensional ensemble empirical mode decomposition is introduced into digital in-line holographic microscopy. It adaptively eliminates the speckle noise and background of the hologram. Combined with a three-dimensional deconvolution approach in the reconstruction, particle features can be identified effectively. Tracking beads fixed on a cover glass and moved with a piezoelectric stage through multiple holographic images demonstrates a tracking resolution approaching 2 nm in the axial direction and 1 nm in the transverse direction. This should facilitate developments and applications in biology, such as live-cell and single-molecule studies.

16.
We aim to improve image reconstruction, in terms of RMS contrast, spatial resolution and signal-to-noise ratio (SNR), for the animal positron emission tomograph IRI-microPET (IRI: Islamic Republic of Iran), designed and built at the gamma-scan laboratory of the Nuclear Science and Technology Research Institute. The quality of images acquired with this system depends on the image reconstruction algorithm in addition to the system's design and construction. In this paper, the system features and the tomography method are considered first. Then, the MLEM, SART and FBP image reconstruction algorithms are applied to the sinograms, and the quality of the reconstructed images is compared in terms of RMS contrast, spatial resolution and SNR; the reconstruction time and processing speed of the three algorithms are also considered. According to the results, the RMS contrast, spatial resolution and SNR of images reconstructed with the MLEM algorithm show its superiority over the SART and FBP algorithms, but its computation time is high. Thus, the SART algorithm can be a suitable replacement for the MLEM algorithm.
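The MLEM update at the centre of this comparison can be sketched on a tiny linear problem (a random stand-in system matrix and noiseless data; real PET data would be Poisson counts around A·x and the matrix would encode scanner geometry):

```python
import numpy as np

rng = np.random.default_rng(9)

# Tiny tomography-like problem: a system matrix A maps a 4-pixel "image"
# to 8 detector bins (values are random stand-ins for real geometry).
A = rng.uniform(0.1, 1.0, (8, 4))
x_true = np.array([5.0, 0.5, 3.0, 1.0])
y = A @ x_true          # noiseless data for this sketch

def mlem(A, y, iters):
    """MLEM update: x <- x * A^T(y / Ax) / A^T 1.
    The multiplicative form keeps the estimate non-negative."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity image A^T 1
    for _ in range(iters):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
    return x

x_hat = mlem(A, y, 5000)
print(x_hat)    # converges towards x_true = [5.0, 0.5, 3.0, 1.0]
```

The many multiplicative passes over the full system matrix are exactly why MLEM is slower than FBP, the trade-off the abstract reports.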

17.
To enhance unclear mineral microscopy images, an algorithm based on a toggle operator using opening and closing is proposed in this paper. First, the specific toggle operator using opening and closing is analysed through the design of its selection rules. Second, after introducing multiscale theory into this toggle operator, useful mineral image features, especially mineral details, are extracted using the multiscale toggle operator. Finally, the mineral image is enhanced by enlarging the contrast between the extracted bright and dark image features. Experimental results on different types of mineral images verify that the proposed algorithm effectively enhances mineral images and performs better than several other algorithms. The enhanced mineral image is clear and contains rich mineral detail, whereas the grey-scale distribution of the original mineral image is appropriately maintained. This is useful for further mineral analysis, and the proposed algorithm could therefore be widely used in image-based mineral applications.
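The basic toggle contrast operator underlying such methods can be sketched with plain 3×3 grey-scale erosion and dilation (the paper's operator adds opening/closing-based selection rules and a multiscale decomposition on top of this simple form):

```python
import numpy as np

def erode(img):    # 3x3 grey-scale erosion (local minimum)
    p = np.pad(img, 1, mode='edge')
    return np.min([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(3) for dx in range(3)], axis=0)

def dilate(img):   # 3x3 grey-scale dilation (local maximum)
    p = np.pad(img, 1, mode='edge')
    return np.max([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(3) for dx in range(3)], axis=0)

def toggle(img):
    """Toggle contrast operator: snap each pixel to the nearer of its
    local dilation and erosion, sharpening blurred transitions."""
    d, e = dilate(img), erode(img)
    return np.where(d - img < img - e, d, e)

# A blurred step edge, replicated down the rows.
edge = np.tile(np.array([0, 0, 0, 0.1, 0.5, 0.9, 1, 1, 1]), (5, 1))
sharpened = toggle(toggle(edge))
print(sharpened[0])   # the ramp collapses into a crisp 0/1 step
```

Two applications of the operator collapse the ramp into a hard step, which is the edge-sharpening behaviour the enhancement strategy exploits before contrast stretching.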

18.
Cell counting in microscopic images is one of the fundamental analysis tools in the life sciences, but it is usually tedious, time consuming and prone to human error. Several programs for automatic cell counting have been developed, but most of them demand additional training or data input from the user, and most do not allow users to monitor the counting results online. We therefore designed two straightforward, simple-to-use cell-counting programs that also allow users to correct the detection results. In this paper, we present the Cellcounter and Learn 123 programs for automatic and semiautomatic counting of objects in fluorescent microscopic images (cells or cell nuclei) with a user-friendly interface. Whereas Cellcounter is based on a predefined, fine-tuned set of filters optimized on sets of chosen experiments, Learn 123 uses an evolutionary algorithm to adapt filter parameters based on a learning set of images. Cellcounter also includes an extension for the analysis of overlaid images. The efficiency of both programs was assessed on images of cells stained with different fluorescent dyes by comparing automatically obtained results with results manually annotated by an expert. With both programs, the correlation between automatic and manual counting was very high (R² > 0.9), although Cellcounter had some difficulties processing images with no cells or weakly stained cells, where background noise was sometimes recognized as an object of interest. Nevertheless, the differences between manual and automatic counting were small compared with the variation between experimental repeats. Both programs significantly reduced the time required to process the acquired images, from hours to minutes.
The programs enable consistent, robust, fast and accurate detection of fluorescent objects and can therefore be applied to a range of applications in different fields of the life sciences where fluorescent labelling is used to quantify various phenomena. Moreover, the Cellcounter overlay extension enables fast analysis of related images that would otherwise require image merging, whereas Learn 123's evolutionary algorithm can adapt counting parameters to specific sets of images from different experimental settings.
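The core of automatic object counting — threshold, then count connected components — can be sketched without any imaging library (the synthetic "nuclei" and threshold are illustrative; the programs above add filtering, learning and correction on top of this step):

```python
import numpy as np
from collections import deque

def count_objects(img, threshold):
    """Count connected foreground components (4-connectivity) by BFS."""
    fg = img > threshold
    seen = np.zeros_like(fg, dtype=bool)
    count = 0
    for i, j in zip(*np.nonzero(fg)):
        if seen[i, j]:
            continue
        count += 1                       # new, unvisited component
        queue = deque([(i, j)])
        seen[i, j] = True
        while queue:
            a, b = queue.popleft()
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = a + da, b + db
                if (0 <= r < fg.shape[0] and 0 <= c < fg.shape[1]
                        and fg[r, c] and not seen[r, c]):
                    seen[r, c] = True
                    queue.append((r, c))
    return count

# Three synthetic "nuclei" on a dark background.
img = np.zeros((50, 50))
y, x = np.mgrid[0:50, 0:50]
for cy, cx in ((10, 10), (25, 40), (40, 15)):
    img[(y - cy) ** 2 + (x - cx) ** 2 < 16] = 1.0

print(count_objects(img, 0.5))   # → 3
```

The failure mode the abstract mentions — background noise counted as objects — corresponds here to noise pixels crossing the threshold, which is why the real programs filter the image first.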

19.
An image denoising algorithm based on an L-curvature-flow filter
An image denoising (filtering) algorithm based on an L-curvature-flow filter is proposed. The method classifies images by signal-to-noise ratio into three categories (high, medium and low), which are denoised by an L filter, a multilevel L filter, and a multi-iteration combined filter, respectively, and experiments were carried out. The results show that, compared with mean and median filters, the lower the SNR of the input image, the more pronounced the filtering effect of this algorithm. For low-SNR input images, the algorithm improves on mean filtering by 2.98 dB on average for images corrupted by Gaussian noise, and on median filtering by 11.09 dB on average for images corrupted by impulse noise, indicating that the algorithm adapts well to image noise of different types and different SNR levels.

20.
A new single-image signal-to-noise ratio (SNR) estimator based on a nearest-neighbourhood method is proposed, with the noise modelled as additive white Gaussian noise. The performance of this new technique, adaptive slope nearest neighbourhood, is compared with three existing methods: the original (simple) nearest neighbourhood method, the first-order interpolation method, and shape-preserving piecewise cubic Hermite autoregressive moving average. In test cases involving images with different brightness and edges, adaptive slope nearest neighbourhood is found to deliver an optimum solution for SNR estimation problems. For different values of noise variance, it has the highest accuracy and the smallest percentage estimation error. Being more robust to white noise, the proposed estimator is significantly more efficient than the three existing methods.
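The autocorrelation idea behind this family of nearest-neighbourhood estimators can be sketched in 1-D: white noise inflates only the zero-lag autocorrelation, so the noise-free zero-lag value is extrapolated from neighbouring lags (the signal model, noise level and plain lag-1 extrapolation are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)

# Correlated "image" signal (moving average of white noise) plus
# additive white Gaussian noise of known variance.
clean = np.convolve(rng.standard_normal(20000), np.ones(10) / 10,
                    mode='same')
noise_var = 0.05
noisy = clean + rng.normal(0.0, np.sqrt(noise_var), clean.size)

def acf(sig, lag):
    """Sample autocovariance at the given lag."""
    sig = sig - sig.mean()
    return np.mean(sig[:sig.size - lag] * sig[lag:])

# Nearest-neighbour estimate: take r(1) as a proxy for the noise-free r(0).
r0, r1 = acf(noisy, 0), acf(noisy, 1)
est_noise_var = r0 - r1
est_snr = r1 / est_noise_var
print(est_noise_var, est_snr)
# true noise variance is 0.05; the flat lag-1 proxy overestimates it
# slightly, which is what slope-aware extrapolation variants correct.
```

The adaptive-slope and interpolation methods compared in the abstract differ mainly in how they extrapolate the autocorrelation to lag zero; the flat lag-1 proxy shown here is the simplest member of that family.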


