Similar documents
A total of 20 similar documents were found (search time: 15 ms)
1.
Hao Haidong, Tang Zhen, Lu Hongda, Cheng Lifeng, Lv Xin 《Microsystem Technologies》2017, 23(7): 2759-2766
Microsystem Technologies - This work is intended to describe the design aspects and to characterize the functionality of a novel flip-chip (FC) structure applicable for THz camera assembly. The...

2.
Biometric authentication has great potential to improve the security, reduce the cost, and enhance the customer convenience of payment systems. Despite these benefits, biometric authentication has not yet been adopted by large-scale point-of-sale and automated teller machine systems. This paper aims to provide a better understanding of the benefits and limitations associated with the integration of biometrics into a PIN-based payment authentication system. Based on a review of the market drivers and deployment hurdles, a method is proposed in which biometrics can be seamlessly integrated into a PIN-based authentication infrastructure. By binding a fixed, renewable binary string to a noisy biometric sample, data privacy and interoperability between issuing and acquiring banks can improve considerably compared to conventional biometric approaches. The biometric system's security, cost aspects, and customer convenience are subsequently compared to PIN by means of simulations using fingerprints. The results indicate that biometric authentication performance is not negatively influenced by the incorporation of key binding and release processes, and that the security, expressed as the guessing entropy of the biometric key, is virtually identical to that of the current PIN. The data also suggest that for the fingerprint database under test, the claimed benefits of cost reduction, improved security, and customer convenience do not convincingly materialize when compared to PIN. This result can in part explain why large-scale biometric payment systems are virtually non-existent in Europe and the United States, and suggests that biometric modalities other than fingerprints may be more appropriate for payment systems.
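The guessing-entropy security metric used in the comparison above can be illustrated with a short sketch (not the paper's code; the uniform four-digit PIN distribution below is an illustrative assumption):

```python
def guessing_entropy(probs):
    """Expected number of guesses when an attacker tries candidates in
    decreasing order of probability: G = sum_i i * p_(i)."""
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# A uniform 4-digit PIN: all 10,000 codes equally likely, so the
# expected number of guesses is (N + 1) / 2 = 5000.5.
uniform_pin = [1 / 10_000] * 10_000
print(guessing_entropy(uniform_pin))
```

For a uniform distribution over N candidates the expected number of guesses is (N + 1) / 2, which is why a four-digit PIN scores about 5000.5; a biometric key whose guessing entropy matches this is, by that metric, as hard to guess as the PIN.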

3.
In this paper, we present a generic circuit model of a microbolometer infrared detector that can be used to simulate the electrical and thermal performance of microbolometers in a SPICE-like circuit simulator. Using this model, we have studied the effects of various parameters on microbolometer performance through PSPICE simulations for its verification. We have validated the model against the performance of the titanium microbolometers being developed in our laboratory. After tuning the model parameters for these devices, we show that the simulated performance agrees reasonably well with the measured performance across a variety of measurement conditions. The validated model has been used to fine-tune the design of our titanium microbolometers, as it also allows us to monitor internal parameters that are not easy to measure in practical devices, such as the instantaneous temperature of the microbolometer under a varying incident IR intensity. The proposed model is generic; a similar procedure can therefore be applied to other microbolometer types, such as amorphous-Si or vanadium-oxide devices, once the model has been validated for them.
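The thermal half of such a model reduces to a single heat-balance ODE for the membrane temperature. A minimal forward-Euler sketch in Python rather than SPICE, with illustrative parameter values (not the paper's device constants):

```python
def simulate_bolometer(t_end, dt, C=1e-9, G=1e-7, T0=300.0,
                       P_bias=1e-6, eta=0.8, phi=lambda t: 0.0):
    """Forward-Euler integration of the standard bolometer heat balance
    C dT/dt = P_bias + eta * phi(t) - G * (T - T0),
    where C is heat capacity, G thermal conductance, eta absorptivity,
    and phi(t) the incident IR power. Returns the temperature trace."""
    T = T0
    trace = []
    for k in range(round(t_end / dt)):
        t = k * dt
        dT = (P_bias + eta * phi(t) - G * (T - T0)) / C
        T += dT * dt
        trace.append(T)
    return trace

# With no IR flux the membrane settles at T0 + P_bias / G = 310 K
# (time constant tau = C / G = 10 ms, so 0.2 s is fully settled).
trace = simulate_bolometer(t_end=0.2, dt=1e-5)
print(round(trace[-1], 2))
```

This is exactly the kind of internal quantity the abstract mentions: the instantaneous membrane temperature is trivial to monitor in the model but hard to measure on a real device.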

4.
Small-target detection against low-contrast, complex backgrounds has long been a research focus and a difficult problem; the difficulty lies mainly in the complexity of the background noise and the weakness of the target. This paper analyzes and studies a morphological dilation algorithm combined with the mean-shift algorithm: morphological dilation effectively enhances the target, while mean shift improves the contrast between target and background, which facilitates effective target segmentation. Small-target detection based on this method is demonstrated in two different scenarios, and experiments show that the algorithm is effective and robust. Moreover, the method uses adaptive thresholding for the final target selection. Experimental analysis shows that the algorithm consists essentially of fixed-point operations, is efficient, and is easy to implement in real-time hardware.
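The dilation and adaptive-threshold steps can be sketched as follows (a minimal numpy illustration, not the paper's implementation; the mean-shift stage is omitted and all parameter values are assumptions):

```python
import numpy as np

def grey_dilate(img, size=3):
    """Grey-scale morphological dilation with a size x size flat
    structuring element; enhances small bright targets."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(size):
        for dx in range(size):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def adaptive_threshold(img, k=3.0):
    """Adaptive final selection: keep pixels brighter than
    mean + k * std of the frame (k is an illustrative choice)."""
    return img > img.mean() + k * img.std()

# Toy frame: flat background with one weak 2x2 target at (15, 15).
frame = np.full((32, 32), 10.0)
frame[15:17, 15:17] = 40.0
mask = adaptive_threshold(grey_dilate(frame))
print(mask.sum())  # 16: the 2x2 target dilated to 4x4 by the 3x3 element
```

Both steps use only comparisons and max operations, consistent with the abstract's point that the pipeline is dominated by fixed-point arithmetic and thus hardware-friendly.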

5.
A model-based fault detection filter is developed for structural health monitoring of a simply supported beam. The structural damage represented in the plant model is shown to decompose into a known fault direction vector maintaining a fixed direction, dependent on the damage location, and an arbitrary fault magnitude representing the extent of the damage. According to detection filter theory, if damage occurs, under certain circumstances the fault will be uniquely detected and identified through an associated invariance in the direction imposed on the fault detection filter residuals. The spectral algorithm used to design the detection filter is based on a left eigenstructure assignment approach which accommodates system sensitivities that are revealed as ill-conditioned matrices formed from the eigenvectors in the construction of the detection filter gains. The detection filter is applied to data from an aluminum simply supported beam with four piezoelectric sensors and one piezoelectric actuator. By exciting the structure at the first natural frequency, damage in the form of a 5 mm saw cut made to one side of the beam is detected and localized.

6.
Applied Intelligence - Air toxicity and pollution phenomena are on the rise across the planet. Thus, the detection and control of gas pollution are nowadays major economic and environmental...

7.
This paper presents a novel approach to the automatic detection of erythemato-squamous diseases based on a fuzzy extreme learning machine (FELM). Classifying these diseases normally requires considerable computational effort; previous approaches have used fuzzy logic, artificial neural networks, and neuro-fuzzy models. FELM-based differential diagnosis combines decisions made by fuzzy logic with an extreme learning machine (ELM), gaining efficiency in both time and accuracy. We develop a user-friendly interface with which a dermatologist can estimate the six types of erythemato-squamous disease from a patient's histopathological and clinical data; the interface is built on neural networks, an adaptive neuro-fuzzy inference system, and FELM. A dataset containing records of 366 patients with 34 features defining six disease characteristics was used, of which 310 records served as training data and the remaining 56 as testing data. The dataset was preprocessed into fuzzy values to obtain more accurate inputs for FELM, and the ELM approach was then applied to the training set. By combining fuzzy logic and ELM, more accurate results with improved performance are obtained at lower computational cost. The proposed FELM model thus proves to be a potential solution for the diagnosis of erythemato-squamous diseases, with significant improvements in computation time and accuracy compared with other models discussed in the recent literature.
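The ELM core of such a system is compact: the hidden layer is random and only the output weights are trained, in closed form by least squares. A hedged sketch on synthetic data (the hidden-layer size and toy task are assumptions, not the paper's setup):

```python
import numpy as np

def elm_train(X, y, hidden=50, seed=0):
    """Extreme learning machine: random hidden layer, output weights
    solved in closed form via the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)            # random nonlinear features
    beta = np.linalg.pinv(H) @ y      # only the output layer is fitted
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 2-class problem: label = sign of the first feature.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
y = np.sign(X[:, 0])
W, b, beta = elm_train(X[:150], y[:150])
acc = np.mean(np.sign(elm_predict(X[150:], W, b, beta)) == y[150:])
print(acc)
```

Because no iterative backpropagation is needed, training cost is essentially one pseudo-inverse, which is the source of the speed advantage the abstract claims over conventional neural and neuro-fuzzy models.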

8.
Automatic fall detection is a major issue in the health care of elderly people. In this task the ability to discriminate in real time between falls and normal daily activities is crucial. Several methods already exist to perform this task, but approaches able to provide explicit formalized knowledge and high classification accuracy have not yet been developed and would be highly desirable. To achieve this aim, this paper proposes an innovative and complete approach to fall detection based both on the automatic extraction of knowledge, expressed as a set of IF-THEN rules, from a database of fall recordings, and on its use in a mobile health monitoring system. Whenever a fall is detected by the latter, the system can take immediate actions, e.g. alerting medical personnel. Our method can easily overcome the limitations of other approaches to fall detection. In fact, thanks to the knowledge gathering, it overcomes both the difficulty faced by a human being dealing with many parameters and trying to find out which are the most suitable, and also the need to apply a laborious trial-and-error procedure to find the values of the related thresholds. In addition, in our approach the extracted knowledge is processed in real time by a reasoner embedded in a mobile device, without any need for connection to a remote server. The proposed approach has been compared against four other classifiers on a database of falls simulated by volunteers, and its discrimination ability has been shown to be higher, with an average accuracy of 91.88%. We have also carried out a very preliminary experimental phase: the best set of rules found using the previous database allowed us to achieve satisfactory performance in these experiments as well. Namely, on these real-world falls the obtained results in terms of accuracy, sensitivity, and specificity are about 92%, 86%, and 96%, respectively.
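A single extracted IF-THEN rule of this kind might look as follows (a hypothetical rule with illustrative thresholds, not one actually mined by the paper):

```python
import math

def fall_rule(ax, ay, az, impact_g=2.5, still_g=1.2):
    """One hypothetical IF-THEN rule: flag a fall when a high-impact
    acceleration spike is followed by a near-still period.
    Thresholds (in g) are illustrative, not the paper's learned values."""
    mag = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    spike = max(mag)
    after = mag[mag.index(spike):]
    still = min(after) if after else float('inf')
    return spike > impact_g and still < still_g

# Simulated acceleration magnitudes in g: walking vs. a fall with impact.
walk = ([1.0, 1.1, 0.9, 1.0], [0] * 4, [0] * 4)
fall = ([1.0, 3.2, 0.1, 0.05], [0] * 4, [0] * 4)
print(fall_rule(*walk), fall_rule(*fall))
```

The paper's point is precisely that such rules and their thresholds are extracted automatically from recorded falls, rather than hand-tuned by trial and error as in this toy version.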

9.
Designing a Kalman filter with a constraint on the H∞ norm of the estimation error was first developed by Bernstein and Haddad in 1989; their main result is a sufficient condition characterizing the Kalman filter. In this paper, the orthogonality properties of the standard Kalman filter are shown to be preserved. Furthermore, the uniqueness of the filter, as opposed to an H∞ filter, is implied by the orthogonality principle. An innovative approach to obtaining the minimum energy under a constraint on the H∞ norm of the estimation error is proposed, since the original work of Bernstein and Haddad does not, in general, reach the minimum energy of the estimation error. By means of the secant method, the energy of the estimation error can be reduced as much as possible while the H∞ error bound remains satisfied.
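The secant method referred to above is a generic root-finding iteration; a minimal sketch (the quadratic test function is illustrative, not the paper's estimation-error cost):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Generic secant iteration
    x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})),
    requiring only function evaluations, no derivative."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: solve x^2 - 2 = 0; the root is sqrt(2).
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
print(root)
```

In the paper's setting, the same iteration is applied to a scalar design parameter, driving the estimation-error energy down until the H∞ bound is met with near equality.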

10.
A method to obtain accurate integrated properties according to the theory of “Atoms in Molecules” for any atom is proposed. Classical integration algorithms using explicit representations of the interatomic surfaces (IAS) bounding the integrated atom suffer from the presence of regions where the charge density is extremely flat. This phenomenon is typically caused by ring critical points and leads to unacceptable integration errors. The present paper extends a previously published integration algorithm (Mol. Phys. 87 (1996) 1169) by introducing a procedure that can find an atomic boundary if the interatomic surface is not explicitly known. This hybrid algorithm — which uses analytical interatomic surfaces whenever they are available and adequate but does not necessarily require them — enables the accurate and efficient integration of any atom. A robust and effective code is implemented in MORPHY97 and applied to two representative examples.

11.
Computational methods used in microscopy cell image analysis have greatly augmented the impact of imaging techniques, becoming fundamental for biological research. The understanding of cell regulation processes is very important in biology, and confocal fluorescence imaging in particular plays a relevant role in the in vivo observation of cells. However, most biology researchers still analyze cells by visual inspection alone, which is time-consuming and prone to subjective bias. This makes automatic cell image analysis essential for large-scale, objective studies of cells. While the classic approach to automatic cell analysis is image segmentation, for in vivo confocal fluorescence microscopy images of plants such an approach is neither trivial nor robust to variations in image quality. To analyze plant cells in these images with robustness and increased performance, we propose the use of local convergence filters (LCF). These filters are based on gradient convergence and as such can handle illumination variations, noise, and low contrast. We apply a range of existing convergence filters to cell nuclei analysis of the Arabidopsis thaliana plant root tip. To further increase contrast invariance, we present an augmentation of local convergence approaches based on image phase information. Through the use of convergence index filters we improved the results for cell nuclei detection and shape estimation compared with baseline approaches. Using phase congruency information we were able to further increase performance by 11% for nuclei detection accuracy and 4% for shape adaptation. Shape regularization was also applied, but with no significant gain, which indicates that shape estimation was already good for the applied filters.

12.
The performance improvements that can be achieved by classifier selection and by integrating terrain attributes into land cover classification are investigated in the context of rock glacier detection. While exposed glacier ice can easily be mapped from multispectral remote-sensing data, the detection of rock glaciers and debris-covered glaciers is a challenge for multispectral remote sensing. Motivated by the successful use of digital terrain analysis in rock glacier distribution models, the predictive performance of a combination of terrain attributes derived from SRTM (Shuttle Radar Topography Mission) digital elevation models and Landsat ETM+ data for detecting rock glaciers in the San Juan Mountains, Colorado, USA, is assessed. Eleven statistical and machine-learning techniques are compared in a benchmarking exercise, including logistic regression, generalized additive models (GAM), linear discriminant techniques, the support vector machine, and bootstrap-aggregated tree-based classifiers such as random forests. Penalized linear discriminant analysis (PLDA) yields mapping results that are significantly better than all other classifiers, achieving a median false-positive rate (mFPR, estimated by cross-validation) of 8.2% at a sensitivity of 70%, i.e. when 70% of all true rock glacier points are detected. The GAM and standard linear discriminant analysis were second best (mFPR: 8.8%), followed by polyclass. For comparison, the predictive performance of the best three techniques is also evaluated using (1) only terrain attributes as predictors (mFPR: 13.1-14.5%) and (2) only Landsat ETM+ data (mFPR: 19.4-22.7%), both yielding significantly higher mFPR estimates at 70% sensitivity. The mFPR of the worst three classifiers was about one-quarter higher than that of the best three, and combining terrain attributes with multispectral data reduced the mFPR by more than one-half compared to remote sensing alone.
These results highlight the importance of combining remote-sensing and terrain data for mapping rock glaciers and other debris-covered ice, and of choosing the optimal classifier based on unbiased error estimators. The proposed benchmarking methodology is more generally suitable for comparing the utility of remote-sensing algorithms and sensors.
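The mFPR-at-fixed-sensitivity operating point used in this benchmark can be computed as follows (a sketch on synthetic classifier scores, not the study's data; the Gaussian score model is an assumption):

```python
import numpy as np

def fpr_at_sensitivity(scores_pos, scores_neg, sensitivity=0.70):
    """False-positive rate at the score threshold that detects the
    requested fraction of positives (higher score = rock glacier)."""
    # Threshold chosen so `sensitivity` of the positives lie above it.
    thr = np.quantile(scores_pos, 1.0 - sensitivity)
    return float(np.mean(np.asarray(scores_neg) >= thr))

rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, 5000)   # scores for true rock glacier points
neg = rng.normal(0.0, 1.0, 5000)   # scores for background terrain
print(round(fpr_at_sensitivity(pos, neg), 3))
```

Fixing sensitivity at 70% and comparing FPRs, as the study does, makes classifiers directly comparable at a common operating point instead of comparing whole ROC curves.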

13.
《Control Engineering Practice》2002,10(10):1141-1146
In this paper a comparison of two approaches—linear and nonlinear—to engine misfire detection based on engine roughness measurements is presented. The linear approach relies on Kalman-filter autoregressive models applied to the measured roughness signals, while the nonlinear approach uses an extended Kalman filter and radial basis function neural networks to model the signals' nonlinear dynamics. The results achieved by both approaches on experimental engine-roughness data sets reflect their ability to correctly detect engine misfires; however, the results of the nonlinear approach appear better than those of the linear one.

14.
Correlation has been used extensively in the object detection field. In this paper, two kinds of correlation filters, minimum average correlation energy (MACE) and extended maximum average correlation height (EMACH), are applied as adaptive shift locators to detect and locate smudgy character strings in complex tabular color flight coupon images. These strings in irregular tabular coupons are computer-printed but of low contrast, and they can be shifted outside the table, so traditional algorithms cannot detect and locate them. In our experiment, strings are extracted in a preprocessing phase by removing the background; then, based on geometric information, the two correlation filters are applied to locate the expected fields. We compare the results from the two correlation filters and demonstrate that this algorithm is a highly accurate approach.
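The underlying idea of correlation-based shift location can be sketched with a plain FFT cross-correlation; MACE and EMACH replace the raw template spectrum with trained frequency-domain filters, so this simplified version is an illustration, not the paper's method:

```python
import numpy as np

def correlate_locate(image, template):
    """Locate a template in an image by FFT-based cross-correlation,
    returning the (row, col) shift of the correlation peak."""
    H = np.fft.fft2(image)
    T = np.fft.fft2(template, s=image.shape)  # zero-padded template
    corr = np.fft.ifft2(H * np.conj(T)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

img = np.zeros((64, 64))
tpl = np.ones((5, 5))
img[20:25, 33:38] = 1.0          # template pasted at row 20, col 33
print(correlate_locate(img, tpl))
```

The correlation peak recovers the shift even when the string has moved out of its expected table cell, which is exactly the failure mode of the fixed-position approaches the abstract mentions.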

15.
Predictive trace analysis (PTA), a static trace analysis technique for concurrent programs, offers powerful support for finding concurrency errors unseen in a previous program execution. Existing PTA techniques face considerable challenges in scaling to large traces containing numerous critical events. One main reason is that an analyzed trace includes not only redundant memory-access events and threads that cannot contribute to discovering errors beyond the candidates already found, but also many residual synchronization events that, even after the redundant events are removed, still affect PTA's check of whether those candidates are feasible. Removing both from the trace can significantly improve the scalability of PTA without affecting the quality of its results. In this paper, we propose a biphasic trace filter approach, BIFER for short, to filter out these redundant and residual events and improve the scalability of PTA in exposing general concurrency errors. In addition, we design a model that tracks the lock history and the happens-before history of each thread, with two kinds of filtering for efficiency. We implement a prototype tool, BIFER, for Java programs on the basis of a predictive trace analysis framework. Experiments show that BIFER improves the scalability of PTA when analyzing all of the traces.

16.
‘Beyond their instrumental functions’, writes Rivka Oxman in an article about design, creativity and innovation (2013), ‘advanced digital and computational environments are also becoming tools for thinking design’. At the leading edge of creativity and innovation, design does not only speculate the plausible, possible or potential, but pragmatically inserts such futures into the present (as Whitehead says any ‘immediate existence’ (1962) must). Using concepts mainly from Deleuze, Guattari, Spinoza and Simondon, I will position such design speculation as pragmatic, divergent, complex and emergent. That is, as manifesting the technical mentalities (Simondon) that provide the milieu in which we can show what we ‘might be capable of’ (Stengers).

17.
In this work, a model-based procedure exploiting analytical redundancy for the detection and isolation of faults in a simulated gas turbine process is presented. The main point of the paper is the use of an identification scheme in connection with dynamic observer or filter design procedures for diagnostic purposes. Black-box modelling and output estimation approaches to fault diagnosis are particularly advantageous in terms of solution complexity and achieved performance. Moreover, the suggested scheme is especially useful when robust solutions are required to minimise the effects of modelling errors and noise while maximising fault sensitivity. In order to experimentally verify the robustness of the obtained solution, the proposed FDI strategy has been applied to simulation data from a single-shaft industrial gas turbine plant in the presence of measurement and modelling errors. Extensive simulations of the test-bed process and Monte Carlo analysis are used to assess the capabilities of the developed FDI scheme, which is also compared with different data-driven diagnosis methods.

18.
Multimedia Tools and Applications - Automatic pterygium detection is an essential screening tool for health community service groups. It allows non-experts to perform the screening process without the...

19.

The blind zone of a phase frequency detector (PFD) increases the phase noise in a charge-pump PLL. This paper presents a novel technique to reduce the blind zone, which reduces the reference spur as well. In the proposed work, a variable delay element is incorporated in the reset path of the PFD, and the overall PFD delay is maintained at a small positive value to avoid the blind zone while keeping phase noise low. The performance analysis was carried out in the Cadence design environment and compared with that of a PFD using a fixed delay element in its reset path. The comparison shows that the phase noise is improved by 6 dB.


20.
We develop and validate an automated approach to determine canopy height, an important metric for global biomass assessments, from micro-pulse photon-counting lidar data collected over forested ecosystems. Such a lidar system is planned to be launched aboard the National Aeronautics and Space Administration’s follow-on Ice, Cloud and land Elevation Satellite mission (ICESat-2) in 2017. For algorithm development purposes in preparation for the mission, the ICESat-2 project team produced simulated ICESat-2 data sets from airborne observations of a commercial micro-pulse lidar instrument (developed by Sigma Space Corporation) over two forests in the eastern USA. The technique derived in this article is based on a multi-step mathematical and statistical signal extraction process which is applied to the simulated ICESat-2 data set. First, ground and canopy surfaces are approximately extracted using the statistical information derived from the histogram of elevations for accumulated photons in 100 footprints. Second, a signal probability metric is generated to help identify the location of ground, canopy-top, and volume-scattered photons. According to the signal probability metric, the ground surface is recovered by locating the lowermost high-photon density clusters in each simulated ICESat-2 footprint. Thereafter, canopy surface is retrieved by finding the elevation at which the 95th percentile of the above-ground photons exists. The remaining noise is reduced by cubic spline interpolation in an iterative manner. We validate the results of the analysis against the full-resolution airborne photon-counting lidar data, digital terrain models (DTMs), and canopy height models (CHMs) for the study areas. With ground surface residuals ranging from 0.2 to 0.5 m and canopy height residuals ranging from 1.6 to 2.2 m, our results indicate that the algorithm performs very well over forested ecosystems of canopy closure of as much as 80%. 
Given the method’s success in the challenging case of canopy height determination, it is readily applicable to retrieval of land ice and sea ice surfaces from micro-pulse lidar altimeter data. These results will advance data processing and analysis methods to help maximize the ability of the ICESat-2 mission to meet its science objectives.
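The ground-plus-95th-percentile retrieval step can be sketched as follows (synthetic photon elevations; the bin width and scene parameters are illustrative assumptions, and the noise-filtering and spline-interpolation stages are omitted):

```python
import numpy as np

def canopy_height(photon_elevations, ground_window=0.5):
    """Sketch of the final retrieval step: take the densest low
    elevation bin as the ground surface, then the 95th percentile of
    above-ground photons as the canopy surface."""
    z = np.asarray(photon_elevations)
    bins = np.arange(z.min(), z.max() + ground_window, ground_window)
    hist, edges = np.histogram(z, bins=bins)
    ground = edges[np.argmax(hist)] + ground_window / 2
    above = z[z > ground + ground_window]
    canopy = np.percentile(above, 95)
    return ground, canopy - ground

# Synthetic footprint: dense ground return near 100 m elevation plus
# photons scattered through a ~17 m canopy above it.
rng = np.random.default_rng(0)
ground_photons = rng.normal(100.0, 0.1, 400)
canopy_photons = rng.uniform(101.0, 118.0, 300)
g, h = canopy_height(np.concatenate([ground_photons, canopy_photons]))
print(round(g, 1), round(h, 1))
```

Using the 95th percentile rather than the maximum above-ground photon makes the canopy estimate robust to stray solar-background photons above the true canopy top.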
