Similar Documents
Found 20 similar documents (search time: 828 ms)
1.
Magnetic Resonance Imaging (MRI) is a noninvasive, nonradioactive, and precise diagnostic modality in the field of medical imaging. However, the efficiency of MR image reconstruction is limited by bulky image sets and slow processing. Therefore, to obtain a high-quality reconstructed image, we present a sparse-aware noise removal technique based on a convolutional neural network (SANR_CNN) for eliminating noise and improving MR image reconstruction quality. The proposed denoising technique adopts a fast CNN architecture that aids in training larger datasets with improved quality, and the SANR algorithm is used to build a dictionary-learning scheme for denoising large image datasets. The proposed SANR_CNN model also preserves details and edges in the image during reconstruction. An experiment was conducted to compare SANR_CNN against a few existing models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). The proposed SANR_CNN model achieved higher PSNR and SSIM and lower MSE than the other noise removal techniques. The proposed architecture also supports transmission of the denoised medical images through a secured IoT architecture.
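
Purely as an illustration of the residual-CNN denoising idea described above (not the authors' SANR_CNN architecture, whose depth and dictionary-learning stage are not specified here), a minimal PyTorch sketch might look like this; the class name, depth, and feature width are assumptions:

```python
# A minimal sketch of a residual CNN denoiser in the spirit of SANR_CNN.
# The layer counts and the name DenoiseCNN are illustrative assumptions.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Predict the noise and subtract it (residual learning),
        # which tends to preserve edges and fine detail.
        return x - self.body(x)

model = DenoiseCNN()
noisy = torch.randn(1, 1, 128, 128)   # stand-in for a noisy MR slice
denoised = model(noisy)
```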

2.
Modern medical imaging requires storage of large quantities of digitized clinical data. To provide high bandwidth and to reduce storage space, a medical image must be compressed before transmission. One of the best image compression techniques uses the Haar wavelet transform. The discrete cosine transform (DCT) is chosen as the preprocessing scheme because it identifies the image frequency content and has an excellent energy compaction property. The block coding algorithm uses a wavelet transform to generate the subband samples, which can be quantized and coded; it is more robust to errors than many other wavelet-based schemes. In this article, simulations are carried out on different medical images, and the performance is demonstrated in terms of peak signal-to-noise ratio (PSNR) and bits per pixel (BPP). Our proposed method is found to preserve information fidelity while reducing the amount of data. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 175–181, 2014
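
For readers unfamiliar with the Haar transform mentioned above, here is a minimal NumPy sketch of one decomposition level for an even-sized grayscale image; it is a generic illustration, not the paper's codec:

```python
# A minimal sketch of one level of the 2D Haar wavelet transform,
# written directly in NumPy for an even-sized grayscale image.
import numpy as np

def haar2d_level(img):
    img = img.astype(np.float64)
    # Row transform: averages (lowpass) and differences (highpass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform on both halves gives the four subbands.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

LL, LH, HL, HH = haar2d_level(np.random.rand(256, 256))
```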

3.
The need for a general-purpose Content-Based Image Retrieval (CBIR) system for huge image databases has drawn information-technology researchers and institutions to CBIR technique development. These techniques include image feature extraction, segmentation, feature mapping, representation, semantics, indexing and storage, image similarity-distance measurement, and retrieval, making CBIR system development a challenge. Since medical images are large, running to megabits of data, they are compressed to reduce their size for storage and transmission. This paper investigates the medical image retrieval problem for compressed images, and an improved image classification algorithm for CBIR is proposed. In the proposed method, RAW images are compressed using the Haar wavelet. Features are extracted using a Gabor filter and a Sobel edge detector, and the extracted features are classified using a Partial Recurrent Neural Network (PRNN). Since tuning the training parameters of a neural network is NP-hard, a hybrid Particle Swarm Optimization (PSO)–Cuckoo Search (CS) algorithm is proposed to optimize the learning rate of the network.
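
A hedged sketch of the feature-extraction stage (Gabor filter bank plus Sobel edges) is shown below; the bank parameters and summary statistics are illustrative assumptions, and the PRNN and PSO–CS stages are omitted:

```python
# A minimal sketch of Gabor + Sobel feature extraction with OpenCV.
import cv2
import numpy as np

def extract_features(gray):
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(gray, cv2.CV_32F, kern)
        feats += [resp.mean(), resp.std()]                # summary statistics
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)                           # edge strength
    feats += [mag.mean(), mag.std()]
    return np.array(feats, dtype=np.float32)

vec = extract_features(np.random.rand(128, 128).astype(np.float32))
```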

4.
To overcome the loss of decoded image quality caused by fast fractal image coding, a fast fractal coding algorithm that mixes neural-network coding with variance coding is proposed. Exploiting the correspondence between sub-block complexity and variance, the algorithm selects a coding method for each range block according to its variance: blocks with relatively small variance are coded by variance coding to increase coding speed, while blocks with relatively large variance are coded by the neural network to improve coding quality. The algorithm largely corrects the low decoding quality caused by the structural restrictions of the self-affine mapping in conventional fractal coding, greatly increasing coding speed while preserving image quality. Experimental results show a 24x speedup over the basic fractal coding algorithm and a 1.1 dB improvement in decoded image quality over the variance-based fast fractal coding algorithm. The algorithm is also easy to implement in hardware, bringing it close to practical use.
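
A minimal sketch of the variance-based routing rule described above, with an assumed block size and threshold (the neural coder itself is out of scope here):

```python
# Low-variance range blocks go to the cheap variance coder,
# high-variance blocks to the neural coder. The 16x16 block size
# and the threshold are illustrative assumptions.
import numpy as np

def route_blocks(img, block=16, var_threshold=50.0):
    cheap, neural = [], []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            patch = img[r:r + block, c:c + block].astype(np.float64)
            (cheap if patch.var() < var_threshold else neural).append((r, c))
    return cheap, neural

flat_blocks, detailed_blocks = route_blocks(np.random.randint(0, 256, (256, 256)))
```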

5.
With the advancement of medical data acquisition and telemedicine systems, image compression has become an important tool for image handling, as the tremendous amount of data generated in the medical field must be stored and transmitted effectively. Volumetric MRI and CT images comprise a set of image slices that are correlated with each other; predicting the pixels in a slice therefore depends not only on the spatial information within the slice, but also on inter-slice information. This article proposes an inter-slice correlation switched predictor (ICSP) with block adaptive arithmetic encoding (BAAE) for 3D medical image data. The proposed ICSP efficiently exploits both inter-slice and intra-slice redundancies of the volumetric images. The novelty of the technique lies in selecting the correlation coefficient threshold (Tϒ) for switching the ICSP. A resolution-independent gradient edge detector (RIGED) at an optimal prediction threshold value is proposed for intra-slice prediction; being modality- and resolution-independent, RIGED brings improved performance for 3D prediction of volumetric images. BAAE is employed to encode the prediction-error image, resulting in higher compression efficiency. The technique is also extended to higher-bit-depth (16-bit) volumetric medical images, showing significant compression gains for 3D images. The performance of the proposed technique was compared with state-of-the-art techniques in terms of bits per pixel (BPP) for 8-bit depth and was found to be 31.21%, 27.55%, 21.89%, and 2.39% better than the JPEG-2000, CALIC, JPEG-LS, M-CALIC, and 3D-CALIC respectively. The proposed technique is 11.86%, 8.56%, 7.97%, 6.80%, and 4.86% better than M-CALIC, 3D-CALIC, JPEG-2000, JPEG-LS and CALIC respectively for 16-bit image datasets. The average compression ratios obtained by the proposed technique for the 8-bit and 16-bit image datasets are 3.70 and 3.11 respectively.
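
The switching idea can be sketched as follows; the threshold value and the simple intra predictor standing in for RIGED are assumptions, not the authors' exact formulation:

```python
# If adjacent slices are strongly correlated, predict the current slice
# from the previous one; otherwise fall back to a crude intra-slice
# neighbour-average predictor (a stand-in for RIGED). T is assumed.
import numpy as np

def predict_slice(prev_slice, cur_slice, T=0.9):
    r = np.corrcoef(prev_slice.ravel(), cur_slice.ravel())[0, 1]
    if r >= T:                         # inter-slice prediction
        return prev_slice.astype(np.float64)
    pred = np.zeros_like(cur_slice, dtype=np.float64)
    pred[1:, :] += cur_slice[:-1, :]   # north neighbour
    pred[:, 1:] += cur_slice[:, :-1]   # west neighbour
    pred[1:, 1:] /= 2.0                # average where both exist
    return pred

a = np.random.rand(64, 64); b = a + 0.01 * np.random.rand(64, 64)
residual = b - predict_slice(a, b)     # residual goes to the entropy coder
```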

6.
The aim of image compression is to reduce the total data required to represent an image, which in turn decreases the demands on transmission bandwidth and storage space. In this work, we propose an image-fusion-based idea that can further reduce the file size of a JPEG-compressed image. Before performing JPEG compression, we compute both an intensity image and a subsampled colour representation of the image undergoing compression. Then, as in JPEG compression, discrete cosine transformation, quantisation, and entropy coding are applied to these images, which are stored in a single image file container. In the decoder, the two images are reconstructed and fused to obtain the resultant decoded image. Our experiments show that the proposed method meets lower storage and bandwidth requirements by reducing the average bits per pixel of the encoded image below that of the JPEG-compressed image.
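
As background for the DCT/quantisation step the method reuses from JPEG, here is a minimal sketch for one 8×8 block; the flat quantisation step is an assumption (JPEG uses a full quantisation matrix):

```python
# 2D DCT of one 8x8 block, uniform quantisation, then de-quantisation.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b.T, norm='ortho').T, norm='ortho')
def idct2(b): return idct(idct(b.T, norm='ortho').T, norm='ortho')

block = np.random.rand(8, 8) * 255
coeffs = dct2(block - 128.0)          # level shift as in JPEG
q = 16.0                              # assumed flat quantisation step
quantised = np.round(coeffs / q)      # this is what gets entropy coded
recovered = idct2(quantised * q) + 128.0
```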

7.
Many images are not sharp and clear, owing to various causes such as noise interference, and are said to be blurred. Image de-blurring is fundamental to making pictures sharp and useful. Normally, along with the input blurred image, the Point Spread Function (PSF) of the original image is required for restoration and de-blurring. In this paper, we introduce a technique for image restoration by the Richardson–Lucy algorithm in which an optimised PSF is generated using a Genetic Algorithm (GA). Use of an optimised PSF means our proposed technique does not need the original image for de-blurring, which is greatly beneficial in real-time scenarios. The dataset used for evaluation consists of real 3D images, and the evaluation metrics are peak signal-to-noise ratio (PSNR), Second-Derivative-like Measure of Enhancement (SDME), and mean squared error (MSE). The technique is compared with existing techniques such as the de-convolution method, the regularisation filter, the Wiener filter, and the Richardson–Lucy algorithm. From the results, we observe that our proposed technique achieves higher PSNR and SDME values and lower MSE values than the other techniques: an average PSNR of 70.94, SDME of 71.46, and MSE of 0.0063. These values show the superior performance of the proposed technique.
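
A hedged sketch of Richardson–Lucy restoration with scikit-image follows; the Gaussian PSF parameters are simply assumed here, whereas the paper obtains the PSF via GA optimisation:

```python
# Blur a test image with a known Gaussian PSF, then restore it.
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

psf = gaussian_psf()                   # assumed known; the paper uses a GA
sharp = np.random.rand(64, 64)
blurred = convolve2d(sharp, psf, mode='same', boundary='symm')
# num_iter in recent scikit-image releases (older versions: iterations).
restored = richardson_lucy(blurred, psf, num_iter=30)
```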

8.
A remote-sensing image compression algorithm based on wavelet subband entropy
A remote-sensing image compression algorithm that performs bit allocation using wavelet subband entropy is proposed. After a lifting wavelet decomposition of the remote-sensing image, the energy percentage of each high-frequency subband and the trend of its entropy are analysed; on this basis, a new fast bit-allocation method is proposed that allocates bits according to subband entropy. Each high-frequency subband is then uniformly quantized, and the quantized data are coded by bit planes. For the most significant bit plane, only the coordinates of the nonzero coefficients are recorded; the remaining bit planes are compressed with run-length and Huffman coding. Experimental results show that both texture-rich and relatively flat remote-sensing images achieve good reconstruction quality after compression with this algorithm, with a PSNR above 34 dB, while the compression ratio depends on image complexity.
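
A minimal sketch of entropy-driven bit allocation, assuming PyWavelets for the decomposition and an arbitrary bit budget; the bit-plane and Huffman coding stages are omitted:

```python
# Decompose, estimate each high-frequency subband's Shannon entropy,
# and allocate a bit budget in proportion to it. The 1000-bit budget
# and Haar wavelet are illustrative assumptions.
import numpy as np
import pywt

def subband_entropy(band, bins=256):
    hist, _ = np.histogram(band, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

img = np.random.rand(256, 256)
_, (cH, cV, cD) = pywt.dwt2(img, 'haar')
entropies = np.array([subband_entropy(b) for b in (cH, cV, cD)])
bits = np.round(1000 * entropies / entropies.sum()).astype(int)
```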

9.
Variable least significant bits (VLSB) steganography is a powerful and secure technique for data hiding in cover images, with variable data hiding capacity, signal-to-noise ratio, peak signal-to-noise ratio, and mean square error (MSE). This study presents a new algorithm for implementing VLSB steganography, named varying index varying bits substitution (VIVBS). The VIVBS algorithm is a secure, high-capacity, flexible, and statistically unpredictable mechanism for concealing information in cover images. The method uses a secret stego-key comprising a reference point and a variation of the number of bits to be hidden with the varying indices of pixels in the cover image. The secret key adds an extra layer of security, making the scheme highly immune to steganalysis. The VIVBS algorithm provides variable data hiding capacity and a variable key size, which can be changed by changing the range of least significant bits used. A data hiding capacity of 43.75% with a negligible MSE of 14.67 dB has been achieved using the VIVBS algorithm. For larger data hiding capacities the MSE and distortion increase significantly, which makes the existence of hidden information predictable, but the key size also increases significantly, making retrieval of the hidden information difficult for an unauthorized person.
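
A hedged sketch of variable-LSB embedding follows; the key-driven bit schedule shown is an illustrative assumption, not the VIVBS key structure (extraction must regenerate the same schedule from the key):

```python
# The number of bits hidden in each pixel varies with the pixel index
# under a key-seeded schedule.
import numpy as np

def embed(cover, payload_bits, key=12345):
    rng = np.random.default_rng(key)            # key-driven bit schedule
    stego = cover.copy().ravel()
    pos = 0
    for i in range(stego.size):
        if pos >= len(payload_bits):
            break
        n = int(rng.integers(1, 5))             # hide 1..4 bits here
        chunk = payload_bits[pos:pos + n]
        val = int(''.join(map(str, chunk)), 2)
        mask = (1 << len(chunk)) - 1            # clear the low bits, insert
        stego[i] = (int(stego[i]) & ~mask) | val
        pos += len(chunk)
    return stego.reshape(cover.shape)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
bits = list(np.random.randint(0, 2, 500))
stego = embed(cover, bits)
```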

10.
Image compression reduces the redundancy of an image so that it can be stored or transmitted in an efficient form. In this work, a new idea is proposed: we take advantage of the redundancy that appears across a group of images by compressing them together, instead of compressing each image by itself. In the proposed technique, a classification process is first applied, in which the set of input images is partitioned into groups using existing measures such as the L1 and L2 norms and colour histograms. All images belonging to the same group are compressed by dividing them into sub-images of equal size and saving references in a codebook. When extracting the distinct sub-images, we use the mean squared error (MSE) for comparison and three blurring methods (simple, middle, and majority blurring) to increase the compression ratio. Experiments show that varying the blurring values as well as the MSE thresholds improves compression results for a group of images compared with the JPEG and PNG compressors.
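
A minimal sketch of the shared-codebook encoding loop, with assumed block size and MSE threshold and without the blurring variants:

```python
# A block is stored in the codebook only if no existing entry already
# matches it within the MSE threshold; images keep index references.
import numpy as np

def encode_group(images, block=8, mse_threshold=25.0):
    codebook, refs = [], []
    for img in images:
        idx_map = []
        for r in range(0, img.shape[0], block):
            for c in range(0, img.shape[1], block):
                patch = img[r:r + block, c:c + block].astype(np.float64)
                match = next((k for k, e in enumerate(codebook)
                              if np.mean((e - patch) ** 2) <= mse_threshold),
                             None)
                if match is None:
                    codebook.append(patch)
                    match = len(codebook) - 1
                idx_map.append(match)
        refs.append(idx_map)
    return codebook, refs

imgs = [np.random.randint(0, 256, (64, 64)) for _ in range(3)]
codebook, refs = encode_group(imgs)
```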

11.
This paper proposes a new and efficient codec, 3D Light Detection and Ranging (LiDAR) point cloud coding based on tensors (LPCT). By combining statistical subspace outlier detection with a logarithmic transformation, LPCT effectively makes unreliable points imperceptible and diminishes the ranges of the spatial coefficients. LPCT then uses tensors to achieve lossless encoding and decoding. An iterative compression method is introduced to greatly reduce the dimensionality of the higher-order point cloud data. Experimental results reveal that the proposed LPCT yields a better compression ratio (CR) and decompressed-image quality than existing popular compression tools, namely 7-Zip and WinRAR. The work shows that the proposed lossless LPCT algorithm compresses the spatial information of point cloud images of various sizes into six bytes and produces a better Hausdorff peak signal-to-noise ratio (PSNR) for the shortest-distance point cloud image.
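
A hedged sketch of the preprocessing LPCT describes (outlier suppression plus logarithmic range reduction); the z-score rule and log1p choice are assumptions, and the tensor coding stage is omitted:

```python
# Drop statistical outlier points, then shrink coordinate ranges with a
# signed logarithmic transform (invert with sign * expm1(abs)).
import numpy as np

def preprocess_cloud(points, z_max=3.0):
    # points: (N, 3) array of x, y, z coordinates
    z = np.abs((points - points.mean(axis=0)) / points.std(axis=0))
    inliers = points[(z < z_max).all(axis=1)]
    return np.sign(inliers) * np.log1p(np.abs(inliers))

cloud = np.random.randn(10000, 3) * 100.0
compact = preprocess_cloud(cloud)
```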

12.
Medical images are known for their huge volume, which is a real problem for archiving and transmission, notably in telemedicine applications. In this context, we present a new method for medical image compression that combines image definition resizing with JPEG compression. We name this protocol REPro.JPEG (reduction/expansion protocol combined with JPEG compression). First, the image is reduced and then compressed before archiving or transmission. Finally, the user or receiver decompresses the image and enlarges it before display. The results show that, at the same bit rate below 0.42 bits per pixel, REPro.JPEG preserves image quality better than plain JPEG compression for dermatological medical images. Moreover, applying REPro.JPEG to these colour medical images is more efficient in the HSV colour space than in the RGB or YCbCr colour spaces.
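
A minimal sketch of the reduction/expansion pipeline using OpenCV; the scale factor and JPEG quality are illustrative assumptions:

```python
# Reduce, JPEG-encode, decode, then expand back to the original size.
import cv2
import numpy as np

def repro_jpeg(img, scale=0.5, quality=75):
    small = cv2.resize(img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    ok, buf = cv2.imencode('.jpg', small,
                           [cv2.IMWRITE_JPEG_QUALITY, quality])
    decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    restored = cv2.resize(decoded, (img.shape[1], img.shape[0]),
                          interpolation=cv2.INTER_CUBIC)
    return restored, buf.size          # expanded image, compressed bytes

img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
restored, compressed_bytes = repro_jpeg(img)
```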

13.
Advances in medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed radiography (CR) produce huge numbers of volumetric images of various anatomical structures of the human body, creating a need for lossless compression of these images for storage and communication. The major issue in medical imaging is that the sequence of operations performed for compression and decompression must not degrade the original image quality; the image should be compressed losslessly. In this article, we propose a lossless method for volumetric medical image compression and decompression using an adaptive block-based encoding technique. The algorithm is tested on different sets of colour CT images using Matlab. The Digital Imaging and Communications in Medicine (DICOM) images are compressed using the proposed algorithm and stored as DICOM-formatted images, and the inverse of the adaptive block-based algorithm reconstructs the original image information losslessly from the compressed DICOM files. We present simulation results for a large set of human colour CT images, giving a comparative analysis of the proposed methodology against block-based compression and the JPEG2000 lossless image compression technique. The article shows that the proposed methodology gives a better compression ratio than block-based coding and is computationally more efficient than JPEG 2000 coding. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 227–234, 2013

14.
Image inpainting is the technique of filling in missing regions and removing unwanted objects from an image by diffusing pixel information from neighbouring pixels. Image inpainting techniques have long been used for applications such as removing scratches, restoring damaged or missing portions, and removing objects from images. In this study, we present a simple yet previously unexplored (digital) image inpainting technique using the median filter, one of the most popular nonlinear (order-statistics) filters. The median is the maximum-likelihood estimate of location for the Laplacian distribution; hence, the proposed algorithm diffuses the median value of pixels from the exterior area into the inner area to be inpainted. The median filter also preserves edges, an important property for inpainting them. The technique is stable. Experimental results show remarkable improvements, and the method works for homogeneous as well as heterogeneous backgrounds. PSNR is used as a quantitative assessment to compare inpainting results.
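
A minimal sketch of median diffusion inpainting as described above, filling unknown pixels from the median of already-known 3×3 neighbours:

```python
# Repeatedly fill each unknown pixel with the median of its known
# neighbours, working inwards from the boundary of the hole.
import numpy as np

def median_inpaint(img, mask, max_iter=100):
    out = img.astype(np.float64).copy()
    known = ~mask.copy()                       # True where pixels are valid
    for _ in range(max_iter):
        if known.all():
            break
        for r, c in np.argwhere(~known):
            r0, r1 = max(r - 1, 0), min(r + 2, out.shape[0])
            c0, c1 = max(c - 1, 0), min(c + 2, out.shape[1])
            vals = out[r0:r1, c0:c1][known[r0:r1, c0:c1]]
            if vals.size:                      # has known neighbours
                out[r, c] = np.median(vals)
                known[r, c] = True
    return out

img = np.random.rand(64, 64)
mask = np.zeros_like(img, dtype=bool); mask[20:30, 20:30] = True
filled = median_inpaint(img, mask)
```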

15.
The lossy nature of JPEG compression leaves traces that forensic agents use to identify local tampering in an image. In this paper, an anti-forensic method is proposed to remove the traces left by JPEG compression in both the spatial domain and the discrete cosine transform domain. A novel Least Cuckoo Search algorithm is devised for the proposed anti-forensic compression scheme, and a new fitness function called histogram deviation is formulated for the optimization algorithm. Experiments with the proposed anti-forensic compression scheme are performed on uncompressed images from the UCID database. The performance of the proposed method is evaluated and compared with existing methods using PSNR, MSE, and classification accuracy as measures. The experiments yielded promising results, i.e. an accuracy of 0.97, a PSNR of 44.34 dB, and an MSE of 0.1789, which demonstrate the efficacy of the proposed method.

16.
Underwater images are degraded by low contrast and poor visibility; it is therefore important to enhance images and videos taken in underwater environments before further processing. Enhancement improves image quality and the contrast of degraded images. The original image or video captured by a standard camera needs improvement because of problems such as limited available light, low resolution, and blurriness in underwater scenes. Various researchers have proposed different solutions to overcome these problems. Dark channel prior (DCP) is one of the most widely used techniques and produces good Peak Signal-to-Noise Ratio (PSNR) values; however, DCP tends to darken images, reduce contrast, and produce halo effects. The proposed method solves these issues using contrast-limited adaptive histogram equalization (CLAHE) and an adaptive colour correction method. It was assessed on imagery from the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), together with some images collected from the internet. The Measure of Entropy (MOE), Measure of Enhancement (EME), Mean Square Error (MSE), and PSNR were adopted as performance measures during the experiments. The MSE and PSNR values achieved by the proposed framework are 0.26 and 32 respectively, which indicate better results.
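
A hedged sketch of the enhancement stage using OpenCV CLAHE, with a gray-world balance standing in for the paper's adaptive colour correction method:

```python
# CLAHE on the lightness channel plus a simple gray-world colour balance.
import cv2
import numpy as np

def enhance_underwater(bgr):
    # Gray-world colour balance (an assumed stand-in, not the paper's method).
    balanced = bgr.astype(np.float64)
    balanced *= balanced.mean() / balanced.mean(axis=(0, 1))
    balanced = np.clip(balanced, 0, 255).astype(np.uint8)
    # CLAHE on the L channel of the LAB representation.
    lab = cv2.cvtColor(balanced, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
enhanced = enhance_underwater(frame)
```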

17.
In recent years it has become evident that the internet is the most effective means of transmitting information around the world in the form of documents, photographs, and videos. The purpose of an image compression method is to encode a picture with fewer bits while retaining the visual quality of the decompressed image, since this massive data otherwise requires a great deal of channel space during transmission. To overcome this problem, an effective visual compression approach is needed to resize such large amounts of data. This work is based on lossy image compression and is offered for static colour images. The quantization procedure determines the quality characteristics of the compressed data. The images are converted from RGB to the International Commission on Illumination (CIE) L*a*b* and YCbCr colour spaces before being used. In the transform domain, the colour planes are encoded using the proposed quantization matrix; to improve the efficiency and quality of the compressed image, the standard quantization matrix is updated for each image block. We used seven discrete orthogonal transforms, including five variations of the Complex Hadamard Transform, the Discrete Fourier Transform, and the Discrete Cosine Transform, together with thresholding, quantization, de-quantization, and the inverse discrete orthogonal transforms, followed by CIE L*a*b* and YCbCr to RGB conversion. Peak signal-to-noise ratio, signal-to-noise ratio, picture similarity index, and compression ratio are used to assess the quality of the compressed images, and the image size and bits per pixel are also examined for the relevant transforms. Using (n, n) transform blocks, adaptive scanning is applied to obtain the best feasible compression ratio. These characteristics give the method a wide range of possible applications in multimedia systems and services.

18.
High-quality medical microscopic images used for disease detection are expensive and difficult to store. Low-resolution images are therefore favourable owing to their low storage requirements and ease of sharing, and they can be enlarged when needed using Super-Resolution (SR) techniques. However, it is important to maintain the shape and size of the medical image content while enlarging it. One problem facing SR is that medical image diagnosis performs poorly when the resolution of the reconstructed image deteriorates. Consequently, this paper proposes a multi-SR and classification framework based on Generative Adversarial Networks (GANs) that generates high-resolution images with higher quality and finer detail, reducing blurring. The proposed framework comprises five GAN models: Enhanced SR Generative Adversarial Networks (ESRGAN), Enhanced Deep SR GAN (EDSRGAN), Sub-Pixel-GAN, SRGAN, and Efficient Wider Activation-B GAN (WDSR-b-GAN). To train the proposed models, we employed images from the well-known BreakHis dataset, enlarged by 4× and 16× upscale factors against ground truth of size 256 × 256 × 3. Several evaluation metrics, including Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity Index (SSIM), Multiscale Structural Similarity Index (MS-SSIM), and histograms, are applied to make comprehensive, objective comparisons and to determine the best methods in terms of efficiency, training time, and storage space. The obtained results reveal the superiority of the proposed models over traditional and benchmark models in colour and texture restoration and detection, achieving an accuracy of 99.7433%.
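
A minimal sketch of the evaluation stage with scikit-image metrics (PSNR, MSE, SSIM); the arrays are random stand-ins for ground-truth and super-resolved patches:

```python
# Compare a ground-truth patch against a super-resolved output.
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             mean_squared_error,
                             structural_similarity)

gt = np.random.rand(256, 256, 3)       # stand-in for a 256x256x3 ground truth
sr = np.clip(gt + 0.01 * np.random.randn(*gt.shape), 0, 1)

print("MSE :", mean_squared_error(gt, sr))
print("PSNR:", peak_signal_noise_ratio(gt, sr, data_range=1.0))
# channel_axis needs scikit-image >= 0.19 (older: multichannel=True).
print("SSIM:", structural_similarity(gt, sr, channel_axis=-1, data_range=1.0))
```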

19.
Image segmentation is vital when analyzing medical images, especially magnetic resonance (MR) images of the brain. Recently, several image segmentation techniques based on multilevel thresholding have been proposed for medical image segmentation; however, these algorithms become trapped in local minima and have low convergence speeds, particularly as the number of threshold levels increases. Consequently, in this paper we develop a new multilevel thresholding image segmentation technique based on the jellyfish search algorithm (JSA), an optimizer. We modify the JSA to prevent descents into local minima and to accelerate convergence toward optimal solutions. The improvement is achieved by applying two novel strategies: ranking-based updating and an adaptive method. Ranking-based updating replaces undesirable solutions with solutions generated by a novel updating scheme that improves the quality of the removed solutions. The adaptive strategy exploits the JSA's ability to find a best-so-far solution while allowing a small amount of exploration to avoid descents into local minima. The two strategies are integrated with the JSA to produce an improved JSA (IJSA) that optimally thresholds brain MR images. To compare the performance of the IJSA and the JSA, seven brain MR images were segmented at threshold levels of 3, 4, 5, 6, 7, 8, 10, 15, 20, 25, and 30. The IJSA was compared with several other recent image segmentation algorithms, including the improved and standard marine predator algorithms, the modified salp and standard salp swarm algorithms, the equilibrium optimizer, and the standard JSA, in terms of fitness, the Structural Similarity Index Metric (SSIM), the peak signal-to-noise ratio (PSNR), the standard deviation (SD), and the Features Similarity Index Metric (FSIM). The experimental outcomes and the Wilcoxon rank-sum test demonstrate the superiority of the proposed algorithm in terms of FSIM, PSNR, the objective values, and SD; in terms of SSIM, the IJSA was competitive with the others.
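
A minimal sketch of the fitness function such multilevel-thresholding optimizers typically maximise, Otsu's between-class variance for a candidate threshold vector; the search itself (JSA/IJSA) is omitted:

```python
# Evaluate one candidate set of thresholds on a grayscale histogram.
import numpy as np

def between_class_variance(hist, thresholds):
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    edges = [0] + sorted(thresholds) + [len(hist)]
    fitness = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            fitness += w * (mu - mu_total) ** 2
    return fitness

img = np.random.randint(0, 256, (128, 128))
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print(between_class_variance(hist, [64, 128, 192]))   # 3-level candidate
```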

20.
A high-definition television (HDTV) video compression encoder is being constructed for use during the standardization process for United States terrestrial broadcast HDTV. The encoder generates an MPEG-2 Main Profile/High Level compliant bitstream at compressed data rates from 10–80 million bits/second. Both interlaced and progressive image formats, in image sizes up to 1080 lines × 1920 pixels per line, are supported. © 1994 John Wiley & Sons, Inc.
