Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.

A number of image encryption techniques have been proposed in recent years. These techniques use either spatial- or transform-domain image processing. A major challenge when designing an image encryption scheme is to conceal the pixels of the input image, especially when the image contains low-texture regions. Another problem is the encryption computation time. In this paper, these two issues are addressed. The use of a single substitution box (S-box) to encrypt digital images does not work well for images with either a large or a small number of gray levels. To solve this problem, a new substitution technique using multiple S-boxes with dynamic substitution is proposed. In the second part of the paper, a discrete wavelet transform based scheme is employed to reduce the encryption computation time. A number of parameters such as correlation, entropy, energy, contrast, homogeneity, MSE and PSNR are used to analyze the quality of the cipher images.
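The abstract does not give the construction of the dynamic substitution, so the snippet below is only a minimal Python sketch of the general idea, assuming the S-box for each pixel is selected from the previous cipher byte; the S-boxes, the selection rule, and all names are illustrative stand-ins rather than the authors' scheme.

```python
import numpy as np

def make_sboxes(num_sboxes: int, seed: int = 42):
    """Generate `num_sboxes` random byte permutations (stand-ins for real S-boxes)."""
    rng = np.random.default_rng(seed)
    return [rng.permutation(256).astype(np.uint8) for _ in range(num_sboxes)]

def dynamic_substitute(image: np.ndarray, sboxes) -> np.ndarray:
    """Encrypt a grayscale image: the S-box used for each pixel is chosen from the
    previous cipher byte, so identical plaintext pixels in a low-texture region
    are generally mapped through different S-boxes."""
    flat = image.flatten()
    cipher = np.empty_like(flat)
    prev = 0  # assumed initial value (would come from the key in practice)
    for i, p in enumerate(flat):
        box = sboxes[prev % len(sboxes)]   # dynamic S-box selection
        cipher[i] = box[p]
        prev = int(cipher[i])
    return cipher.reshape(image.shape)

if __name__ == "__main__":
    img = np.zeros((8, 8), dtype=np.uint8)        # a flat, low-texture test block
    enc = dynamic_substitute(img, make_sboxes(8))
    print(enc)                                    # identical pixels no longer map to one value
```

Because the selection depends on the running ciphertext, a flat region does not collapse to a single cipher value, which is the motivation the abstract gives for using multiple S-boxes.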


2.

The purpose of this paper is to propose an algorithm and a novel method for the fusion of Passive Millimeter Wave (PMMW) images with their visible counterparts using the Non-Subsampled Shearlet Transform (NSST) and a fusion rule based on the Spiking Cortical Model (SCM). The parameters of the NSST and the Improved SCM (ISCM) are selected and optimized based on time consumption and on objective fusion metrics such as QAB/F. In addition, an effective thresholding method in the contourlet transform for PMMW and visible image fusion, proposed in our previous paper, is applied in the fusion procedure. The NSST is used to analyze the initial images at different resolutions and directions, followed by the ISCM neural network as the fusion rule. Combining the proposed fusion and thresholding processes yields better output images that contain visual information close to that of the visible image as well as the hidden object from the millimeter-wave image. Moreover, a new evaluation criterion is proposed that does not suffer from the shortcomings of the available criteria and improves the previous results for detecting hidden objects by up to 23% at the cost of 5% extra time consumption. Finally, the obtained results are evaluated with the available standard metrics, and all the images exhibit better values than those of the other methods; the value obtained for the gun image is also acceptable. The results indicate that the proposed fusion method achieves almost all the considered objectives, and the fused image shows the objects distinctly together with the visible details.


3.
A neuristor based on a constant-K two-line active circuit proposed by the authors was realised, and the dependence of the neuristor characteristics on the circuit parameters was obtained. The neuristor has been proposed as a micro-logic device with many advantages; however, no functional device that works as a neuristor has yet been completed. The proposed neuristor models have not been satisfactorily developed as micro-electronic devices because they cannot be integrated owing to their structural difficulties, or, even where integration is realized, the relation between the structure and the neuristor characteristics is often obscure.

From this point of view, a two-line circuit is proposed in this paper which is easily integrated because of its simple structure and is easy to analyse theoretically. The neuristor is realized with a lumped constant-K active circuit in order to obtain the dependence of the neuristor characteristics on the circuit parameters. Series-type and parallel-type neuristors are used as circuits, into which an S-shaped and an N-shaped negative-resistance element are inserted, respectively.

First, by transmitting pulses, it is shown that the neuristor characteristics are realized by the constant-K circuit. Next, the dependence of the neuristor characteristics on the circuit parameters is obtained as a function of the impedance of the circuit elements. Finally, to explain the measured results theoretically, the circuit is analysed by the Gauss-Seidel method, which improves on the conventional phase-plane analysis.

As a result, it is demonstrated experimentally that the neuristor can be realized using the constant-K two-line circuit, and the relation between the neuristor characteristics and the circuit parameters is obtained. Almost all the experimental results are confirmed theoretically. The difference in the neuristor characteristics between the S-shaped and the N-shaped negative-resistance elements is demonstrated. Thus, new knowledge about the optimum circuit structure for realizing the neuristor with new negative-resistance elements is presented.

4.

Satellite image segmentation has recently received considerable attention because of the availability of annotated high-resolution image datasets captured by the latest generation of satellites. The problem of segmenting a satellite image can be defined as classifying (or labeling) every pixel of the image according to different classes such as buildings, roads, water, etc. This paper develops a satellite image segmentation process using different optimization methods. The work proceeds in three stages: RGB conversion, preprocessing, and segmentation. First, images are gathered from the database and the blue-band images are selected by performing an RGB conversion. To improve the contrast and reduce the noise of these selected blue-band images, a Hopfield neural network (HNN) is used. After image enhancement, the images are segmented using fuzzy C-means (FCM) clustering, where the centroids are optimized with an oppositional crow search algorithm. The performance of the proposed framework is analyzed with metrics such as sensitivity, specificity, and accuracy. The results show that the proposed method reduces the computational time while increasing the accuracy to 98.3% with the HNN-based enhancement.
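For orientation only, here is a plain fuzzy C-means routine in NumPy that clusters the intensities of a single (e.g. blue-band) image; the Hopfield-network enhancement and the oppositional crow search step that optimizes the centroids in the paper are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain FCM on a 1-D array of pixel intensities.
    Returns (centroids, membership matrix of shape [n_pixels, n_clusters])."""
    rng = np.random.default_rng(seed)
    x = pixels.astype(float).reshape(-1, 1)
    u = rng.random((x.shape[0], n_clusters))          # random initial memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centroids = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)   # weighted means
        dist = np.abs(x - centroids.T) + 1e-9                    # pixel-to-centroid distances
        u = 1.0 / (dist ** (2 / (m - 1)))                        # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centroids.ravel(), u

# Usage: segment a toy "blue band" into 3 clusters and label each pixel
blue_band = np.random.default_rng(1).integers(0, 256, size=(64, 64))
c, u = fuzzy_c_means(blue_band.ravel(), n_clusters=3)
labels = u.argmax(axis=1).reshape(blue_band.shape)
```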


5.

Face authentication is a challenging task when validating a user in an uncontrolled environment with variations in expression, pose, illumination, and occlusion. To address these issues, the proposed work provides a solution that considers all these factors in inter- and intra-personal face authentication. During the enrollment process, the facial region of a still image of the authorized user is detected and features are extracted using the local tetra pattern (LTrP) technique. The features are given as input to a neural network, namely the fuzzy adaptive learning control network (FALCON), for training and classification. During the authentication process, an image that may vary in expression, pose, illumination, and occlusion is taken as the test image, and LTrP and FALCON are applied to it to train its features. These trained features are then compared with the existing feature set using the newly proposed multi-factor face authentication algorithm to authenticate a person. The work is evaluated on 1150 face images collected from the JAFFE, Yale, ORL, and AR datasets. Overall, 1106 of the 1150 constrained images are authenticated correctly. The second phase of the research work produces the highest recognition rate, 96%, among conventional methods.


6.
7.
As the metrology of devices becomes more and more sophisticated, the amount of collected information keeps increasing. The characteristics of an engineering surface can often be recorded as an image. To compare the characteristics of two engineering surfaces X and Y produced with different process parameters, to determine which process parameters must be controlled to obtain a surface with the desired properties, or to quantify the relevance of a post-acquisition image treatment for characterising a particular surface property, a practical problem of major interest is therefore to answer the question: are the images related to surfaces X and Y similar at all length scales? An original method, based on recent information-theoretic assumptions and on the multi-fractal formalism, is proposed to quantify the degree of similarity of a set of images at all length scales. The relevance of this method for characterising the morphological textures of surfaces is demonstrated on simulated images generated by means of a 3D fractal function simulating an abrasion process.
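The abstract does not spell out the similarity criterion; as a loose illustration of a multi-scale, multifractal comparison, the sketch below estimates generalized dimensions D_q for two grayscale images by box counting (textbook multifractal formalism) and compares the spectra. It is not the authors' method, and the scales, moments, and distance measure are arbitrary choices.

```python
import numpy as np

def generalized_dimensions(img, qs=(-2, -1, 0.01, 1.01, 2), box_sizes=(2, 4, 8, 16)):
    """Estimate D_q by box counting: D_q = tau(q)/(q-1), where
    tau(q) is the slope of log(sum_i p_i^q) versus log(box size)."""
    img = img.astype(float)
    img = img / img.sum()                       # treat the image as a normalized measure
    dims = []
    for q in qs:
        logZ, logS = [], []
        for s in box_sizes:
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            p = boxes[boxes > 0]
            logZ.append(np.log((p ** q).sum()))
            logS.append(np.log(s))
        tau = np.polyfit(logS, logZ, 1)[0]      # slope over the chosen scales
        dims.append(tau / (q - 1))
    return np.array(dims)

def spectrum_distance(img_x, img_y):
    """A simple (non-authoritative) similarity score: RMS gap between D_q spectra."""
    dx, dy = generalized_dimensions(img_x), generalized_dimensions(img_y)
    return float(np.sqrt(np.mean((dx - dy) ** 2)))

# Usage on two toy "surfaces"
rng = np.random.default_rng(0)
print(spectrum_distance(rng.random((128, 128)), rng.random((128, 128))))
```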

8.

It is demonstrated that the efficiency of surface plasmon-polariton excitation at a metal-semiconductor interface by active quantum dots can be determined from measurements of the polarization characteristics of the output radiation. Experimentally, the proposed diagnostic method is based on finding the ratio of the intensities of the output radiation with polarizations orthogonal and parallel to the nanoheterostructure plane for two different distances between the quantum-dot layer and the metal-semiconductor interface. These data are then used to obtain the unknown parameters in the proposed mathematical model, which makes it possible to calculate the rate of surface plasmon-polariton excitation by active quantum dots. As a result, this rate can be determined without the complicated and expensive equipment needed for fast time-resolved measurements.


9.

Iris recognition is gaining popularity in various online and offline authentication and multi-modal biometric systems. The non-altering and non-obscuring nature of the iris has increased its reliability in authentication systems, but iris images captured in uncontrolled environments and situations remain a challenging issue for iris recognition. In this paper, a compression-robust, KPCA-Gabor fused model is presented to recognize iris images accurately under these complexities. Illumination and noise robustness is included in the pre-processing stage to gain robustness and reliability against complex capture conditions. Effective compression features are generated as a pre-treatment phase vector using the logarithmic quantization method. Kernel principal component analysis (KPCA) and Gabor filters are applied to the rectified image to generate textural features, and compression is also applied to the Gabor- and KPCA-filtered images. Fuzzy adaptive content-level fusion is then applied to the compressed image, the KPCA-compressed image, and the Gabor-compressed iris image. K-nearest neighbors (KNN) based mapping is used on this composite fused and reduced feature set to recognize the individual. The proposed compression- and fusion-feature based model is applied to the CASIA-Iris, UBIRIS, and IITD datasets. Comparative evaluations against earlier approaches show that the proposed model improves the recognition accuracy and also reduces the error rate.
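Since only the building blocks are named, the following scikit-learn / scikit-image sketch shows a generic KPCA + Gabor + KNN pipeline on toy data; the logarithmic quantization, the fuzzy adaptive content-level fusion, and the dataset-specific pre-processing of the paper are omitted, and every parameter value is an assumption.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(img, frequencies=(0.1, 0.2, 0.3)):
    """Mean/variance of Gabor magnitude responses as a simple texture descriptor."""
    feats = []
    for f in frequencies:
        real, imag = gabor(img, frequency=f)
        mag = np.hypot(real, imag)
        feats += [mag.mean(), mag.var()]
    return np.array(feats)

# Toy data standing in for normalized iris images (labels = subject ids)
rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))
labels = np.repeat(np.arange(10), 4)

kpca = KernelPCA(n_components=8, kernel="rbf")
kpca_feats = kpca.fit_transform(images.reshape(len(images), -1))
gabor_feats = np.array([gabor_features(im) for im in images])
fused = np.hstack([kpca_feats, gabor_feats])      # simple concatenation as the "fusion"

knn = KNeighborsClassifier(n_neighbors=3).fit(fused, labels)
print(knn.score(fused, labels))                   # training accuracy on the toy data
```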


10.

Protection of multimedia information from different types of attackers has become important for individuals and governments. A high-definition image contains a large amount of data, and thus keeping it secret is difficult. Another challenge that security algorithms must face with respect to high-definition images in medical and remote-sensing applications is the appearance of patterns, which results from regions densely filled with the same color, such as background regions. New hybrid image security systems based on encryption and hiding are proposed in this paper for keeping high-definition images secret. First, one hiding method and two encryption methods are used in two hybrid algorithms. The new hiding algorithm proposed here starts by applying reordering and scrambling operations to the six most significant bit planes of the secret image, and then hides them in an unknown-scene cover image using addition or subtraction operations. Second, two different ciphering algorithms are used to encrypt the stego-image, yielding two different hybrid image security systems. The first encryption algorithm is based on binary code decomposition, while the second is a modification of the Advanced Encryption Standard. After evaluating each hybrid algorithm on its own, the two hybrid systems are compared to determine the better one. Several measures are used to assess performance, including the visual scene, histogram analysis, entropy, security analysis, and execution time.
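As a simplified illustration of the hiding step only, the NumPy sketch below scrambles the six most significant bit planes of a secret image with a keyed permutation and embeds them in a cover image by modulo-256 addition; the paper's reordering details, the choice of addition versus subtraction, and the subsequent ciphering stages are not reproduced.

```python
import numpy as np

def hide_msb_planes(secret, cover, key=1234, n_planes=6):
    """Scramble the n_planes most significant bits of `secret` with a keyed
    permutation and add them to `cover` modulo 256 to form the stego image."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(secret.size)
    msb = (secret >> (8 - n_planes)).ravel()             # keep the 6 MSBs (values 0..63)
    scrambled = msb[perm].reshape(secret.shape)          # keyed scrambling
    stego = (cover.astype(np.int16) + scrambled) % 256   # modular addition avoids clipping
    return stego.astype(np.uint8), perm

def recover_msb_planes(stego, cover, perm, n_planes=6):
    """Invert the embedding; needs the cover image and the permutation (the key)."""
    scrambled = ((stego.astype(np.int16) - cover.astype(np.int16)) % 256).ravel()
    msb = np.empty_like(scrambled)
    msb[perm] = scrambled                                # undo the scrambling
    return (msb.reshape(stego.shape) << (8 - n_planes)).astype(np.uint8)

# Round trip: the recovered image equals the secret with its two LSB planes zeroed
secret = np.random.default_rng(0).integers(0, 256, (16, 16), dtype=np.uint8)
cover = np.random.default_rng(1).integers(0, 256, (16, 16), dtype=np.uint8)
stego, perm = hide_msb_planes(secret, cover)
assert np.array_equal(recover_msb_planes(stego, cover, perm), secret & 0b11111100)
```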


11.

This paper presents a new structure of Metal Semiconductor Field Effect Transistor (MESFET) for high-power applications. One of the problems faced in the design of MESFET devices is that, in most cases, an increase in breakdown voltage is accompanied by a decrease in the saturation drain current. The aim of the proposed structure is to improve both parameters simultaneously. An insulator region under the sides of the gate (IR) and a hidden field plate (HFP) in the buried oxide (BOX) are the fundamental means of improving these parameters. We name the proposed structure the spread-potential-contours-towards-the-drain MESFET (SPC-MESFET). With the proposed structure, the drain current and the breakdown voltage improve by 20 and 27 percent, respectively, compared with a conventional structure (C-MESFET). The proposed device therefore has a higher maximum power density than the C-MESFET structure. This idea also reduces the gate capacitance, so frequency characteristics such as the cut-off frequency (fT), the maximum oscillation frequency (fmax), and the Maximum Available Gain (MAG) improve in comparison with the C-MESFET structure.

12.
A blind image restoration algorithm based on image priors and image structure features is proposed. With the blur kernel unknown, a series of discretized blur-kernel parameters is used to perform non-blind deconvolution on the blurred image, producing a corresponding series of restored images. A decision criterion for restored images is also proposed to judge the quality of this series and to select the optimal restoration. Finally, experiments on test images show that the proposed blind restoration algorithm can accurately identify the optimal restored image, and the restoration results perform well by both subjective and objective criteria.
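To make the search-over-kernels idea concrete, here is a small NumPy sketch in the same spirit: non-blind Wiener deconvolution is run for a grid of assumed Gaussian blur widths, and the candidate with the highest gradient energy is kept. The sharpness score is only one possible no-reference decision criterion and is not the criterion proposed in the paper.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered Gaussian point-spread function, normalized to sum 1."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconv(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution with a constant noise-to-signal ratio k."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

def sharpness(img):
    """Gradient energy: a simple no-reference quality score for the decision step
    (a more robust measure would be needed in practice to avoid rewarding noise)."""
    gy, gx = np.gradient(img)
    return float((gx ** 2 + gy ** 2).mean())

def blind_restore(blurred, sigmas=np.linspace(0.5, 5.0, 10)):
    """Try a series of discretized blur-kernel parameters and keep the best candidate."""
    candidates = [wiener_deconv(blurred, gaussian_psf(blurred.shape, s)) for s in sigmas]
    scores = [sharpness(c) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], sigmas[best]

# Usage on a synthetically blurred square (circular convolution via FFT)
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(gaussian_psf(img.shape, 2.0)))))
restored, sigma_hat = blind_restore(blurred)
```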

13.

The problem of image segmentation (division into homogeneous regions) based on color and texture differences between regions is considered. A two-level hierarchical pyramidal segmentation algorithm is proposed to solve this problem. The homogeneity criterion is the estimated adjacency of the image elements and regions in a combined color-texture feature space. A metric in this space is introduced and studied. The results are verified on a set of test images of different types.


14.

Many smart applications have evolved in the area of wireless sensor networks (WSNs). WSN applications are increasing exponentially every year, which creates many security challenges that must be addressed to safeguard the devices in a WSN. Owing to the dynamic characteristics of these resource-constrained devices, high-level security requirements must be considered to create a highly secure environment. This paper presents an efficient multi-attribute based routing algorithm that provides secure routing of information for WSNs. The proposed work decreases energy consumption and enhances network performance compared with currently available routing algorithms such as the multi-attribute pheromone ant secure routing algorithm based on reputation value and the ant-colony optimization algorithm. The proposed work secures the network environment with improved detection techniques based on nodes' coincidence rates, identifying malicious behavior with a trust calculation algorithm. This algorithm uses QoS parameters such as the reliability rate, the elapsed time for detecting impersonation attacks, and the stability rate for trust-related attacks to perform an efficient trust calculation of the communicating nodes. The simulation results show that the proposed method enhances network performance with an improved detection rate and a secure routing service.


15.

Ultrasound is the most widely used biomedical imaging modality for diagnosis. It often suffers from speckle, which reduces image quality by hiding fine details such as edges and boundaries, as well as texture information. In the present study, a novel wavelet thresholding technique for despeckling ultrasound images is proposed. To analyse the performance of the method, it is first tested on synthetic (ground-truth) images. Speckle noise at distinct noise levels (0.01–0.04) is added to the synthetic images in order to examine the method's efficiency at different noise levels. The proposed technique is applied with various orthogonal and biorthogonal wavelet filters, and Daubechies 1 is observed to give the best results among them. The proposed method is then applied to ultrasound images. Its performance has been validated by comparison with several state-of-the-art techniques, and the results have also been validated visually by an expert. The results reveal that the proposed technique outperforms the other state-of-the-art techniques in terms of edge preservation and structural similarity. Thus, the technique is effective in reducing speckle noise while preserving texture information that can be used for further processing.
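The abstract does not state the threshold rule, so the PyWavelets sketch below only shows the generic pattern (decompose with db1, soft-threshold the detail coefficients with a VisuShrink-style universal threshold, reconstruct); the specific thresholding function proposed in the paper is not reproduced.

```python
import numpy as np
import pywt

def despeckle_wavelet(img, wavelet="db1", level=2):
    """Generic wavelet soft-thresholding: decompose, shrink detail coefficients
    with a universal threshold estimated from the finest-level diagonal details,
    then reconstruct."""
    img = img.astype(float)
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(img.size))             # VisuShrink-style threshold
    new_coeffs = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD)))
    denoised = pywt.waverec2(new_coeffs, wavelet)
    return denoised[: img.shape[0], : img.shape[1]]         # crop possible padding

# Usage on a toy image corrupted with multiplicative speckle (variance ~0.04)
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
speckled = clean * (1 + 0.04 ** 0.5 * rng.standard_normal(clean.shape))
denoised = despeckle_wavelet(speckled)
```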


16.
Block-based compression approaches for both still images and image sequences exhibit annoying blocking artifacts, primarily at high compression ratios. These artifacts are due to the independent processing (quantization) of the block-transformed values of the intensity or the displaced frame difference. We propose the application of the hierarchical Bayesian paradigm to the reconstruction of block discrete cosine transform (BDCT) compressed images and to the estimation of the required parameters. We derive expressions for the iterative evaluation of these parameters by applying the evidence analysis within the hierarchical Bayesian paradigm. The proposed method allows the combination of parameters estimated at the coder and the decoder. The performance of the proposed algorithms is demonstrated experimentally.

17.
Improved algorithms for the enhancement of fingerprint images, using adaptive normalisation based on block processing, are proposed for an automatic fingerprint verification system. To obtain an enhanced fingerprint image, the input image is first partitioned into sub-blocks of size K×L and the region of interest of the fingerprint image is acquired. The parameters for image normalisation are then determined adaptively according to the statistics of each block, and each block image is normalised with these parameters for the next processing stage. The proposed algorithms are tested on the NIST fingerprint images and verified to have superb performance.
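A minimal NumPy sketch of the block-wise idea, using the classic mean/variance normalization rule; the statistics actually used to pick the parameters and the ROI extraction step of the paper are omitted, and the block size and target values below are arbitrary.

```python
import numpy as np

def normalize_blocks(img, block=(16, 16), target_mean=128.0, target_var=1000.0):
    """Partition the image into K x L sub-blocks and normalize each block toward a
    target mean/variance based on its own statistics (classic fingerprint rule)."""
    out = img.astype(float).copy()
    K, L = block
    for r in range(0, img.shape[0], K):
        for c in range(0, img.shape[1], L):
            blk = out[r:r + K, c:c + L]
            m, v = blk.mean(), blk.var()
            if v < 1e-6:
                out[r:r + K, c:c + L] = target_mean      # flat block: no contrast to stretch
                continue
            # shift toward target_mean and scale toward target_var
            out[r:r + K, c:c + L] = target_mean + np.sign(blk - m) * np.sqrt(
                target_var * (blk - m) ** 2 / v)
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage on a toy fingerprint-sized image
fp = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
enhanced = normalize_blocks(fp)
```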

18.
Xu Datong, Cui Mingyang, Zhao Pan. Mobile Networks and Applications, 2022, 27(4): 1734-1745

In next-generation wireless communication systems, local cooperation between different units may be deployed to satisfy communication requirements. In this case, interference suppression between different units and receiving-performance improvement for each single unit should both be considered. Multiple schemes have been utilized to solve these problems. However, these schemes ordinarily require sufficiently accurate channel information; if this accuracy cannot be maintained (e.g., when the channel estimation error cannot be ignored), they may not achieve satisfactory performance. To overcome this disadvantage, a novel scheme is proposed in this paper. The proposed scheme has several characteristics: (i) a low-complexity extra estimation is implemented to acquire more information about the channel estimation error; (ii) with the help of this information, each unit can separately execute a two-step process for interference suppression and receiving-performance improvement; (iii) no exorbitant information interaction or high-overhead algorithm is needed between the units. Through characteristic analysis and numerical results, it is found that the proposed scheme achieves a satisfactory effect in the locally cooperative network.


19.

Research on computer-aided diagnosis (CAD) of medical images has been actively conducted to support the decisions of radiologists. Since deep learning has shown distinguished abilities in classification, detection, segmentation, and related problems, many studies on CAD have used deep learning. One of the reasons behind the success of deep learning is the availability of large application-specific annotated datasets. However, it is quite demanding for radiologists to annotate hundreds or thousands of medical images, and thus it is difficult to obtain large-scale annotated datasets for various organs and diseases. Therefore, many techniques that train deep neural networks effectively have been proposed; one of them is transfer learning. This paper focuses on transfer learning and conducts a case study on ROI-based opacity classification of diffuse lung diseases in chest CT images. The aim is to clarify which characteristics of the pre-training datasets and which structures of the deep neural networks used for fine-tuning enhance the effectiveness of transfer learning. In addition, the amount of training data is varied and the effectiveness of transfer learning is evaluated at each setting. In the experiments, nine transfer-learning conditions and a method without transfer learning are compared to analyze the appropriate conditions. The experimental results clarify that a pre-training dataset with more (and more varied) classes and a compact structure for fine-tuning give the best accuracy in this work.
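As a generic, hedged illustration of the fine-tuning pattern studied here (not the paper's networks or data), the PyTorch snippet below loads an ImageNet-pretrained backbone, freezes it, and trains only a new classification head for the opacity classes; the class count and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 7                                   # placeholder for the opacity categories
model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone (torchvision >= 0.13)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pre-trained features
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step standing in for ROI patches from chest CT slices
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```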


20.

The existence of a non-cooperative or black-hole node as an intermediate node in a mobile network can degrade the performance of the network and affect the trust of neighboring nodes. In this paper, a trust-aware routing protocol is defined to improve routing reliability against black-hole attacks. A new Trust-aware and Fuzzy-regulated AODV (TFAODV) protocol is investigated as an improvement over the existing AODV protocol. A session-driven evaluation of the stability, communication-delay, and failure-ratio parameters is conducted to evaluate the trust of nodes. Fuzzy rules are applied to these parameters to compute the degree of trust, and the resulting trust vector separates attack-suspected nodes from trustworthy ones. The proposed TFAODV protocol uses the trustworthy mobile nodes as intermediate path nodes. The protocol has been evaluated in the NS2 simulation environment, with analytical results obtained in terms of the packet delivery ratio (PDR), packet communication, and loss-rate parameters. Comparative results are derived against the AODV, Probabilistic AODV, PDS-AODV, PSAODV, and Juneja et al. protocols, over scenarios that vary in network density, degree of stability, and number of attackers. The simulation results confirm that the proposed TFAODV protocol improves the PDR and significantly reduces communication loss compared with these state-of-the-art protocols.
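The membership functions and rule base are not given in the abstract; the sketch below is a hypothetical miniature of the fuzzy-regulated trust idea, mapping normalized stability, delay, and failure-ratio values to a trust degree with triangular memberships and a three-rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_trust(stability, delay, failure_ratio):
    """Toy fuzzy trust evaluation on inputs normalized to [0, 1].
    Illustrative rules (not the paper's rule base):
      R1: high stability AND low delay     -> high trust (0.9)
      R2: high failure ratio OR high delay -> low trust  (0.1)
      R3: otherwise                        -> medium trust (0.5)
    Defuzzified by a weighted average of the rule outputs."""
    high_stab = tri(stability, 0.5, 1.0, 1.5)
    low_delay = tri(delay, -0.5, 0.0, 0.5)
    high_delay = tri(delay, 0.5, 1.0, 1.5)
    high_fail = tri(failure_ratio, 0.5, 1.0, 1.5)

    r1 = min(high_stab, low_delay)        # AND as min
    r2 = max(high_fail, high_delay)       # OR as max
    r3 = max(0.0, 1.0 - max(r1, r2))      # fallback rule
    num = r1 * 0.9 + r2 * 0.1 + r3 * 0.5
    den = r1 + r2 + r3
    return num / den if den else 0.5

# A stable, fast, reliable neighbor scores ~0.82; a slow, failing one ~0.26
print(fuzzy_trust(0.9, 0.1, 0.05), fuzzy_trust(0.4, 0.8, 0.7))
```

Nodes whose trust falls below a chosen threshold would be skipped during route selection, which is the role the trust vector plays in the protocol described above.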

