Similar Documents
20 similar documents were found.
1.
2.
In this article, brightness preserving bi‐level fuzzy histogram equalization (BPFHE) is proposed for the contrast enhancement of MRI brain images. Histogram equalization (HE) is widely used for improving contrast in digital images, but it can produce side effects such as a washed‐out appearance and false contouring owing to the significant change in brightness. To overcome these problems, mean‐brightness‐preserving HE techniques have been proposed. Generally, these methods partition the histogram of the original image into sub‐histograms and then equalize each sub‐histogram independently. BPFHE consists of two stages. First, a fuzzy histogram is computed based on fuzzy set theory, which handles the inexactness of gray‐level values better than a classical crisp histogram. In the second stage, the fuzzy histogram is divided into two sub‐histograms based on the mean intensities of the multiple peaks in the original image, and each sub‐histogram is equalized independently to preserve image brightness. The quantitative and subjective enhancement achieved by the proposed BPFHE algorithm is evaluated on different gray‐scale images using two well‐known measures: entropy, also termed average information content (AIC), and the Feature Similarity Index Measure (FSIM). The proposed method has been tested on several images and gives better visual quality than conventional methods. The simulation results show that the proposed method outperforms existing methods and preserves the original brightness well, so it can be used in medical image diagnosis.
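To make the two-stage idea concrete, here is a minimal Python sketch of the second stage only: a crisp (non-fuzzy) mean-split bi-histogram equalization. The fuzzy histogram computation and multi-peak analysis of BPFHE are not reproduced, and the function name and the clamping of the split point are illustrative choices rather than the paper's implementation.

```python
import numpy as np

def bi_histogram_equalize(img, levels=256):
    """Sketch: split the histogram at the mean intensity and equalize the
    two sub-histograms independently, each within its own intensity range,
    so that the output brightness stays close to the input mean."""
    img = np.asarray(img, dtype=np.uint8)
    mean = min(int(img.mean()), levels - 2)   # split point (clamped for safety)
    lut = np.arange(levels)                   # identity lookup table

    def equalize_range(lo, hi):
        sub = img[(img >= lo) & (img <= hi)]
        if sub.size == 0:
            return
        hist = np.bincount(sub.ravel(), minlength=levels)[lo:hi + 1]
        cdf = np.cumsum(hist) / hist.sum()
        lut[lo:hi + 1] = np.round(lo + cdf * (hi - lo)).astype(int)

    equalize_range(0, mean)               # lower sub-histogram
    equalize_range(mean + 1, levels - 1)  # upper sub-histogram
    return lut[img].astype(np.uint8)
```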

3.
The Imaging Science Journal, 2013, 61(5): 447-457
Abstract

Palmprint identification is one of the most powerful personal identification approaches of recent years. In order to achieve high identification accuracy, all parts of the palmprint need to be enhanced. Histogram equalisation is a very popular image enhancement technique. A novel histogram equalisation technique, called recursive histogram equalisation, for brightness preservation and image contrast enhancement, is put forward in this paper. The essence of the proposed algorithm is to decompose an input histogram into two or more sub-histograms recursively based on its mean, modify the sub-histograms through a weighting process based on a normalised power law function, and then equalise the weighted sub-histograms independently. Experiments show that our method preserves the mean brightness of a given image, enhances the contrast and produces more natural looking images than other histogram equalisation methods.
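The following Python sketch illustrates the general recipe described above (recursive mean-based splitting, power-law weighting, independent equalisation of each segment). The recursion depth, the exact weighting function, and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def recursive_split(hist, lo, hi, depth):
    """Recursively split a histogram segment [lo, hi] at its mean gray level."""
    if depth == 0 or hi <= lo:
        return [(lo, hi)]
    seg = hist[lo:hi + 1]
    if seg.sum() == 0:
        return [(lo, hi)]
    mean = int(np.round(np.average(np.arange(lo, hi + 1), weights=seg)))
    mean = min(max(mean, lo), hi - 1)
    return (recursive_split(hist, lo, mean, depth - 1) +
            recursive_split(hist, mean + 1, hi, depth - 1))

def weighted_recursive_equalize(img, depth=2, gamma=0.75, levels=256):
    """Sketch: split the histogram recursively, apply a normalised power-law
    weighting to each sub-histogram, then equalise each segment in place."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    lut = np.arange(levels)
    for lo, hi in recursive_split(hist, 0, levels - 1, depth):
        seg = hist[lo:hi + 1]
        if seg.sum() == 0:
            continue
        w = (seg / seg.max()) ** gamma          # normalised power-law weighting
        cdf = np.cumsum(w) / w.sum()
        lut[lo:hi + 1] = np.round(lo + cdf * (hi - lo)).astype(int)
    return lut[img]
```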

4.
Color‐edge detection is an important research task in the field of image processing. Efficient and accurate edge detection leads to higher performance in subsequent image processing techniques, including image segmentation, object‐based image coding, and image retrieval. To improve the performance of color‐edge detection while considering that the human eye is the ultimate receiver of color images, perceptually insignificant edges should not be over‐detected. In this article, a color‐edge detection scheme based on perceptual color contrast is proposed. The perceptual color contrast is defined as the visible color difference across an edge in the CIE‐Lab color space. A perceptual metric for measuring the visible color difference of a target color pixel is defined by utilizing the associated perceptually indistinguishable region. The perceptually indistinguishable region for each color pixel in the CIE‐Lab color space is estimated through a designed experiment that accounts for local changes in luminance. Simulation results show that the perceptual color contrast is effectively defined and that the color edges in color images are detected while most of the perceptually insignificant edges are successfully suppressed by the proposed color‐edge detection scheme. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 332–339, 2009
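As a rough illustration of contrast measured in CIE-Lab, the sketch below thresholds the Euclidean colour difference (Delta E) between neighbouring pixels with a single global just-noticeable-difference value. The paper instead estimates a perceptually indistinguishable region per pixel, which is not reproduced here; the function name and the Delta E ≈ 2.3 threshold are assumptions.

```python
import numpy as np
from skimage import color

def perceptual_color_edges(rgb, jnd=2.3):
    """Sketch: convert to CIE-Lab, take the Euclidean colour difference to the
    right and bottom neighbours of each pixel, and keep only differences above
    a just-noticeable threshold so imperceptible edges are suppressed."""
    lab = color.rgb2lab(rgb)
    dx = np.linalg.norm(lab[:, 1:] - lab[:, :-1], axis=-1)   # horizontal Delta E
    dy = np.linalg.norm(lab[1:, :] - lab[:-1, :], axis=-1)   # vertical Delta E
    contrast = np.zeros(rgb.shape[:2])
    contrast[:, :-1] = np.maximum(contrast[:, :-1], dx)
    contrast[:-1, :] = np.maximum(contrast[:-1, :], dy)
    return contrast * (contrast > jnd)   # zero out imperceptible differences
```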

5.
In this article, we describe a three‐dimensional shape reconstruction system based on the Mach–Zehnder interferometer structure and Young's double‐pinhole interference principle, utilizing Fresnel reflection at the fiber end face and interference at the fourth port of a 3‐dB coupler to achieve closed‐loop precise control of the fringe phase. A root‐mean‐square phase stability of 4 mrad is measured with the system. The shape of the object is determined by analyzing the fringe pattern. A new algorithm called rotating rectangular window autoselection is used as the band‐pass filter. The measuring time of the whole system is less than 200 ms, and the measurement error of the system is 0.27 mm. Meanwhile, the overall complexity of the measuring algorithm is O(n log n).
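For orientation, a minimal sketch of Fourier-transform fringe analysis is given below: the spectrum is band-passed around the carrier frequency and the wrapped phase is recovered from the filtered signal, which is where the O(n log n) cost comes from. A fixed rectangular window stands in for the paper's rotating-rectangular-window autoselection filter, and the band coordinates are hypothetical.

```python
import numpy as np

def fringe_phase(fringe, band):
    """Sketch: band-pass the fringe pattern around its carrier frequency in the
    Fourier domain and recover the wrapped phase map.
    `band` is a tuple of slices selecting a rectangular window in the shifted
    spectrum, e.g. band = (slice(100, 140), slice(160, 200))."""
    spectrum = np.fft.fftshift(np.fft.fft2(fringe))   # O(n log n) transform
    mask = np.zeros_like(spectrum)
    mask[band] = 1.0                                  # fixed rectangular window
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.angle(filtered)                         # wrapped phase map
```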

6.
In this paper, we describe usage patterns of a design information system, and discuss issues of learning from the experience of peers through socially and technologically mediated interactions in a product development community. The study spanned 1994–1999 and involved a graduate‐level project‐based course at Stanford University. A web‐based design information system was put in place to enable project teams to store design information, and to access information previously stored by their peer teams from the current year and from prior years. Quantitative analyses of access logs to the system identified patterns of usage, and qualitative interviews identified social issues surrounding such usage. The analyses examine two underlying assumptions associated with such systems and show that, with regard to reuse: (1) more data do not correlate with more usage, and (2) more usage does not necessarily correlate with higher performance, except over a longer‐term basis. Additionally, patterns of usage show a significantly higher ratio of access to process‐related files than to project‐related files, and show a temporal access pattern that is demand‐driven and closely matches project deliverables and milestones. User interviews identified social factors that significantly influence how teams use the system, such as teaching staff recommending particular information to particular teams.

7.
The intensity‐curvature functional (ICF) of a model polynomial function is defined on a pixel‐by‐pixel basis by the ratio between the intensity‐curvature term before interpolation and the intensity‐curvature term after interpolation. Through comparison with the traditional high‐pass filter (HPF), this work presents evidence that the ICFs of three model polynomial functions can be tuned as HPFs. The evidence consists of the mathematical characterization of the ICF‐based HPFs, qualitative comparisons in magnetic resonance imaging (MRI) of the human brain, and the determination of the finite impulse response (FIR) of the filters. The ICF‐based HPFs can remove periodic noise in the low‐frequency band.

8.
This paper presents a new class of regularization functions and the associated regularization scheme for structural system identification (SI). In particular, 1‐norm regularization functions are investigated to overcome the smearing effect of 2‐norm regularization functions in the identification of discontinuous system parameters of structures. The truncated singular value decomposition is employed to filter out noise‐polluted solution components and to impose the 1‐norm regularization function on the SI problem. A bilinear fitting method is proposed for selecting an optimal truncation number for the truncated singular value decomposition. The validity of the proposed method is demonstrated through the identification of an inclusion in a square plate and of damaged members in a two‐span truss. Copyright © 2006 John Wiley & Sons, Ltd.
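For reference, a minimal sketch of the truncated-SVD step is shown below: only the largest singular values are kept so that noise-dominated solution components are filtered out. The 1-norm regularization term and the bilinear fitting rule for choosing the truncation number are not reproduced, and the function name is illustrative.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Sketch: truncated-SVD solution of A x ~ b keeping the k largest singular
    values; components associated with small singular values, which amplify
    measurement noise, are discarded."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]            # invert only the retained singular values
    return Vt.T @ (s_inv * (U.T @ b))
```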

9.
An efficient parallel computing method for high‐speed compressible flows is presented. The numerical analysis of flows with shocks requires very fine computational grids, and grid generation requires a great deal of time. In the proposed method, all computational procedures, from mesh generation to the solution of a system of equations, can be performed seamlessly in parallel in terms of nodes. A local finite‐element mesh is generated robustly around each node, even for severe boundary shapes such as cracks. The algorithm and the data structure of the finite‐element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. Inter‐processor communication is minimized by renumbering the nodal identification numbers using ParMETIS. The numerical scheme for high‐speed compressible flows is based on the two‐step Taylor–Galerkin method. The proposed method is implemented on distributed memory systems, such as an Alpha PC cluster and the Hitachi SR8000 parallel supercomputer. The performance of the method is illustrated by the computation of supersonic flows over a forward‐facing step. The numerical examples show that crisp shocks are computed effectively on multiprocessors at high efficiency. Copyright © 2003 John Wiley & Sons, Ltd.

10.
An electromagnetic wave absorber with high efficiency and great tunability of bandwidth and absorption range is proposed. A series of 2D carbon‐based nanocomposites loaded with cerium oxide (CN‐Ce) and other types of rare earth oxides (CN‐REOs) are synthesized by a simple solvothermal‐sintering method. The as‐synthesized 2D nanocomposites, with a local graphite‐like C3N4 structure and trace N‐doping, are characterized by transmission electron microscopy, X‐ray photoelectron spectroscopy, X‐ray powder diffraction, Fourier transform infrared spectroscopy, and Raman spectroscopy. The CN‐REO and polyvinylidene fluoride composite absorbers achieve reflection loss values above −40 dB in the C‐band, X‐band, and Ku‐band, respectively. Empirical rules on the effective bandwidth and frequency range are discovered and summarized, which can be realized simply by tuning the doping amount or type of REO. The mechanism is explained by enhanced attenuation and tunable impedance matching. In addition, samples co‐filled with two types of CN‐REO nanocomposites are prepared to support these findings and to inspire the preparation of absorbers with desirable frequency bands in the range of 2–18 GHz.

11.
The design and analysis of phononic crystals (PnCs) are generally based on deterministic models that do not consider the effects of uncertainties. However, uncertainties existing in PnCs may have a nontrivial impact on their band structure characteristics. In this paper, a sparse point sampling–based Chebyshev polynomial expansion (SPSCPE) method is proposed to estimate the extreme bounds of the band structures of PnCs. In the SPSCPE, an interval model is introduced to handle the unknown‐but‐bounded parameters. Then, the sparse point sampling scheme and the finite element method are used to calculate the coefficients of the Chebyshev polynomial expansion. After that, the SPSCPE method is applied to the band structure analysis of PnCs. Meanwhile, the checkerboard and hinge phenomena are eliminated by a hybrid discretization model. Finally, a genetic algorithm is introduced for the topology optimization of PnCs with unknown‐but‐bounded parameters. A specific frequency constraint is considered. Two numerical examples are investigated to demonstrate the effectiveness of the proposed method.
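The surrogate idea can be sketched as follows: sample a response at a few points inside an interval parameter's range, fit a Chebyshev expansion, and read the extreme bounds off the surrogate. This stands in for the SPSCPE only loosely: the sparse-point sampling scheme, the FEM band-structure solver, and the multi-parameter case are not reproduced, and sampling at Chebyshev nodes is an assumption.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def interval_bounds_via_chebyshev(f, lo, hi, degree=4, n_samples=9):
    """Sketch: sample the response f at Chebyshev nodes mapped into [lo, hi],
    fit a Chebyshev expansion, and estimate the extreme bounds of the response
    from the surrogate on a dense grid."""
    t = np.cos((2 * np.arange(n_samples) + 1) * np.pi / (2 * n_samples))
    x = 0.5 * (hi + lo) + 0.5 * (hi - lo) * t          # sample points
    y = np.array([f(xi) for xi in x])                  # expensive model calls
    coef = cheb.chebfit(x, y, degree)                  # surrogate coefficients
    grid = np.linspace(lo, hi, 1001)
    vals = cheb.chebval(grid, coef)
    return vals.min(), vals.max()                      # estimated extreme bounds
```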

12.
Advances in medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed radiography (CR) produce a huge amount of volumetric images of various anatomical structures of the human body. There is a need for lossless compression of these images for storage and communication purposes. The major issue in medical imaging is that the sequence of operations performed for compression and decompression must not degrade the original quality of the image; it must be compressed losslessly. In this article, we propose a lossless method of volumetric medical image compression and decompression using an adaptive block‐based encoding technique. The algorithm is tested on different sets of CT color images using MATLAB. The Digital Imaging and Communications in Medicine (DICOM) images are compressed using the proposed algorithm and stored as DICOM‐formatted images. The inverse of the adaptive block‐based algorithm is used to reconstruct the original image information losslessly from the compressed DICOM files. We present simulation results for a large set of human color CT images to provide a comparative analysis of the proposed methodology with block‐based compression and the JPEG 2000 lossless image compression technique. The results show that the proposed methodology gives a better compression ratio than block‐based coding and is computationally more efficient than JPEG 2000 coding. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 227–234, 2013

13.
A local level set algorithm for simulating interfacial flows described by the two‐dimensional incompressible Navier–Stokes equations is presented. The governing equations are solved using a finite‐difference discretization on a Cartesian grid and a second‐order approximate projection method. The level set transport and reinitialization equations are solved in a narrow band around the interface using an adaptive refined grid, which is reconstructed every time step and refined using a simple uniform cell‐splitting operation within the band. Instabilities at the border of the narrow band are avoided by smoothing the level set function in the outer part of the band. The influence of different PDE‐based reinitialization strategies on the accuracy of the results is investigated. The ability of the proposed method to accurately compute interfacial flows is discussed using different tests, namely the advection of a circle of fluid in two different time‐reversed vortex flows, the advection of Zalesak's rotating disk, the propagation of small‐amplitude gravity and capillary waves at the interface between two superposed viscous fluids in deep water, and a classical test of Rayleigh–Taylor instability with and without surface tension effects. The interface location error and area loss for some of the results obtained are compared with those of a recent particle level set method. Copyright © 2005 John Wiley & Sons, Ltd.

14.
This article considers the design of two‐stage reliability test plans. In the first stage, a bogey test is performed, which allows the user to demonstrate reliability at a high confidence level. If the lots pass the bogey test, a reliability sampling test is applied to the lots in the second stage. The purpose of the proposed sampling plan is to test the mean time to failure of the product as well as the minimum reliability at bogey. Under the assumption that the lifetime follows a Weibull distribution with a known shape parameter, the two‐stage reliability sampling plans with bogey tests are developed and tables for users are constructed. An illustrative example is given, and the effects of errors in estimates of the Weibull shape parameter are investigated. A comparison of the proposed two‐stage test with the corresponding bogey and one‐stage tests is also performed. Copyright © 2012 John Wiley & Sons, Ltd.
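As a worked illustration of first-stage (bogey) sizing under a Weibull model with a known shape parameter, the sketch below uses the standard zero-failure relation n = ln(1 − C) / (r^β · ln R), where C is the confidence level, R the reliability to be demonstrated at bogey, r the test length as a multiple of the bogey, and β the shape parameter. The paper's actual two-stage plan tables are not reproduced, and the function name is illustrative.

```python
import math

def zero_failure_sample_size(reliability, confidence, shape=2.0, test_ratio=1.0):
    """Sketch: number of units that must survive a test of length
    test_ratio * bogey with zero failures to demonstrate `reliability` at bogey
    with the stated confidence, assuming a Weibull lifetime with known shape."""
    return math.ceil(math.log(1.0 - confidence) /
                     (test_ratio ** shape * math.log(reliability)))

# Example: demonstrating R = 0.90 at bogey with 90% confidence and a test of
# exactly one bogey length requires 22 surviving units:
# zero_failure_sample_size(0.90, 0.90) -> 22
```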

15.
Fatigue failure is a complex phenomenon. Therefore, developing a fatigue damage model that accounts for all the complexities arising from different cyclic loading types, geometries, materials, and environmental conditions is a challenging task. Nevertheless, fatigue damage models such as critical‐plane‐based models are popular because of their capability to estimate life, mostly within factors of ±2 to ±3 on life for smooth specimens. In this study, a method is proposed for assessing the fatigue life estimation capability of different critical‐plane‐based models. In this method, a subroutine was developed and used to search for the best estimated life regardless of the critical plane assumption; that is, each fatigue damage model was evaluated on all possible planes to find the best life. The Smith–Watson–Topper (normal strain‐based), Fatemi–Socie (shear strain‐based), and Jahed–Varvani (total strain energy density‐based) models are compared using the proposed assessment method. The assessment is performed at the smooth‐specimen level using experimental multiaxial fatigue data for three alloys, namely, AZ31B and AZ61A extruded magnesium alloys and S460N structural steel. Using the proposed assessment method, it was found that the examined models may not be able to reproduce the experimental lives even when evaluated on all physical planes.

16.
A non‐dominance criterion‐based metric that tracks the growth of an archive of non‐dominated solutions over a few generations is proposed to generate a convergence curve for multi‐objective evolutionary algorithms (MOEAs). It was observed that, similar to single‐objective optimization problems, there were significant advances toward the Pareto optimal front in the early phase of evolution, while relatively smaller improvements were obtained as the population matured. This convergence curve was used to terminate the MOEA search to obtain a good trade‐off between the computational cost and the quality of the solutions. Two analytical and two crashworthiness optimization problems were used to demonstrate the practical utility of the proposed metric. Copyright © 2010 John Wiley & Sons, Ltd.
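A rough Python sketch of the underlying bookkeeping is given below: new solutions are inserted into an archive of non-dominated points and the relative growth of the archive is reported, which could be tracked over a few generations to form a convergence curve. The exact metric and termination rule of the paper are not reproduced, and minimization of all objectives is assumed.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_growth(archive, new_solutions):
    """Sketch: insert new solutions into the archive of non-dominated points,
    dropping any archive members they dominate, and return the fraction of
    newly accepted points relative to the archive size. A stagnating value
    over several generations would suggest terminating the search."""
    added = 0
    for s in new_solutions:
        if any(dominates(a, s) for a in archive):
            continue                                   # dominated: reject
        archive[:] = [a for a in archive if not dominates(s, a)]
        archive.append(s)
        added += 1
    return added / max(len(archive), 1)
```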

17.
In discrete element method simulations, multi‐sphere particles are extensively employed for modeling the geometric shape of non‐spherical particles. A contact detection algorithm for multi‐sphere particles has been developed based on two‐level grid searching. In the first‐level grid search, each multi‐sphere particle is represented by a bounding sphere, and global space is partitioned into identical square or cubic cells of size D, the diameter of the largest bounding sphere. The bounding spheres are mapped into the cells in global space. Candidate particles can be picked out by searching the bounding spheres in the cells neighboring the bounding sphere of the target particle. In the second‐level grid search, a square or cubic local space of size (D + d) is partitioned into identical cells of size d, the diameter of the largest element sphere. If the bounding spheres of two multi‐sphere particles overlap, the contacts occurring between the element spheres of the target multi‐sphere particle and of the candidate multi‐sphere particle are checked. Theoretical analysis and numerical tests on the memory requirement and contact detection time of this algorithm have been performed to verify its efficiency. The results showed that this algorithm can effectively deal with the contact problem for multi‐sphere particles. Copyright © 2015 John Wiley & Sons, Ltd.
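The first-level grid search can be sketched as a standard cell-based broad phase, shown below for orientation. The second-level (element-sphere) grid and the memory layout of the paper are not reproduced, and all names are illustrative.

```python
from collections import defaultdict
from itertools import product

def broad_phase_pairs(centers, radii, cell_size):
    """Sketch: hash each bounding sphere into a uniform grid whose cell size is
    the largest bounding-sphere diameter, then collect candidate pairs from each
    sphere's own and neighbouring cells, keeping only pairs whose bounding
    spheres actually overlap."""
    grid = defaultdict(list)
    for i, c in enumerate(centers):
        cell = tuple(int(x // cell_size) for x in c)
        grid[cell].append(i)

    candidates = set()
    for i, c in enumerate(centers):
        cell = tuple(int(x // cell_size) for x in c)
        for offset in product((-1, 0, 1), repeat=len(cell)):
            neigh = tuple(a + b for a, b in zip(cell, offset))
            for j in grid.get(neigh, []):
                if j > i:
                    dist2 = sum((a - b) ** 2 for a, b in zip(centers[i], centers[j]))
                    if dist2 <= (radii[i] + radii[j]) ** 2:
                        candidates.add((i, j))
    return candidates
```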

18.
19.
A unified approach for parameter identification of a visco‐poroplastic material model is presented. A repressing powder forging process is analyzed. The numerical solutions of the direct and inverse problems are described. The inverse problem is solved by means of gradient‐based methods and a sensitivity analysis. Numerical examples of the proposed method are presented, in which one and two parameters of the visco‐poroplastic material model are identified. Copyright © 2007 John Wiley & Sons, Ltd.
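A minimal sketch of the inverse step is given below, with a generic gradient-based least-squares solver standing in for the paper's gradient method and sensitivity analysis; `forward_model` is a placeholder for the visco-poroplastic forging simulation, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def identify_parameters(forward_model, measured, p0):
    """Sketch: find material parameters that minimize the misfit between the
    forward (direct-problem) prediction and the measured response, using a
    gradient-based least-squares solver (Jacobian via finite differences)."""
    residual = lambda p: np.asarray(forward_model(p)) - np.asarray(measured)
    result = least_squares(residual, p0)
    return result.x
```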

20.
In this paper, a two‐dimensional displacement‐based meshfree‐enriched FEM (ME‐FEM) is presented for the linear analysis of compressible and near‐incompressible planar elasticity. The ME‐FEM element is established by injecting a first‐order convex meshfree approximation into a low‐order finite element with an additional node. The convex meshfree approximation is constructed using the generalized meshfree approximation method and possesses the Kronecker‐delta property on the element boundaries. The gradient matrix of the ME‐FEM element satisfies the integration constraint for nodal integration, and the resultant ME‐FEM formulation is shown to pass the constant stress test for compressible media. The ME‐FEM interpolation is an element‐wise meshfree interpolation and is proven to be discrete divergence‐free in the incompressible limit. To prevent possible pressure oscillation in near‐incompressible problems, an area‐weighted strain smoothing scheme incorporating the divergence‐free ME‐FEM interpolation is introduced to smooth the strains and pressure. With this smoothed strain field, the discrete equations are derived based on a modified Hu–Washizu variational principle. Several numerical examples are presented to demonstrate the effectiveness of the proposed method for compressible and near‐incompressible problems. Copyright © 2012 John Wiley & Sons, Ltd.
