Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Building on an analysis of Gabor wavelets, a variable-sampling-rate Gabor wavelet method is proposed; compared with the traditional Gabor wavelet, it achieves substantially better recognition performance. The study then applies three transforms (Curvelet, Log-Gabor wavelet, and Contourlet), each combined with principal component analysis, to face recognition. Comparative experiments show that for expression variations the Curvelet transform achieves both the best recognition performance and the highest speed, whereas for illumination variations the Contourlet transform performs best overall and is strongly robust. In summary, feature extraction with the Contourlet transform works very well: it captures the principal information of the face and yields a sparse, effective representation of face images.
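As a rough illustration of the wavelet-plus-PCA pipeline described above, the sketch below builds a small Gabor filter bank by hand and feeds the responses into PCA. It is a minimal stand-in: the Curvelet, Log-Gabor, and Contourlet transforms need dedicated libraries, and all sizes and filter parameters here are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA

def gabor_kernel(sigma=3.0, theta=0.0, lam=8.0, gamma=0.5, size=21):
    """Real part of a Gabor kernel: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    """Concatenate filter responses over several orientations."""
    feats = []
    for k in range(n_orient):
        resp = convolve2d(img, gabor_kernel(theta=k * np.pi / n_orient), mode='same')
        feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

# Toy "face" dataset: 20 random 32x32 images standing in for aligned faces.
rng = np.random.default_rng(0)
X = np.array([gabor_features(rng.random((32, 32))) for _ in range(20)])
X_reduced = PCA(n_components=10).fit_transform(X)   # the PCA step from the abstract
print(X_reduced.shape)  # (20, 10)
```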

2.
A comparative study of partitioning methods for crowd simulations
The simulation of large crowds of autonomous agents with realistic behavior is still a challenge for several computer research communities. In order to handle large crowds, some scalable architectures have been proposed. Nevertheless, the effective use of distributed systems requires partitioning methods that can properly distribute the workload generated by agents among the existing distributed resources. In this paper, we analyze the use of irregularly shaped regions (convex hulls) for solving the partitioning problem. We have compared a partitioning method based on convex hulls with two techniques that use rectangular regions. The performance evaluation results show that the convex hull method outperforms the other considered methods in terms of both fitness function values and execution times, regardless of the movement pattern followed by the agents. These results indicate that it is the shape of the regions in the partition, rather than the heuristic method used, that improves the performance of the partitioning method.
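To make the convex-hull idea concrete, here is a minimal sketch that scores a candidate assignment of agents to regions by the total area of the regions' convex hulls. The fitness function is an invented stand-in (the abstract does not give the paper's actual fitness); only the use of scipy.spatial.ConvexHull reflects the technique.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
agents = rng.random((300, 2)) * 100          # agent positions in a 100x100 world
labels = rng.integers(0, 4, size=300)        # a candidate assignment to 4 regions

def partition_cost(agents, labels):
    """Toy fitness: total convex-hull area of the regions (smaller = tighter)."""
    cost = 0.0
    for r in np.unique(labels):
        pts = agents[labels == r]
        if len(pts) >= 3:                    # a hull needs at least 3 points in 2D
            cost += ConvexHull(pts).volume   # .volume is the area in 2D
    return cost

print(partition_cost(agents, labels))
```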

3.
This paper is based on the premise that legal reasoning involves an evaluation of facts, principles, and legal precedent that are inexact, and that uncertainty-based methods represent a useful approach for modeling this type of reasoning. By applying three different uncertainty-based methods to the same legal reasoning problem, a comparative study can be constructed. The application involves modeling legal reasoning for the assessment of potential liability due to defective product design. The three methods used for this study are a Bayesian belief network, a fuzzy logic system, and an artificial neural network. A common knowledge base is used to implement the three solutions and provide an unbiased framework for evaluation. The problem framework and the construction of the common knowledge base are described. The theoretical background of Bayesian belief networks, fuzzy logic inference, and the multilayer perceptron with backpropagation is discussed. The design, implementation, and results with each of these systems are provided. The fuzzy logic system outperformed the other systems, reproducing the opinion of a skilled attorney in 99 of 100 cases, but it required more effort to construct the rule base. The neural network method also reproduced the expert's opinions very well and required less effort to develop. ©1999 John Wiley & Sons, Inc.
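The following sketch shows what a miniature fuzzy inference step for such a liability assessment could look like: triangular memberships, min for rule firing, max for aggregation, and centroid defuzzification. The two rules and all membership parameters are invented for illustration and are not the paper's 100-case knowledge base.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def liability(defect_severity, foreseeability):
    """Two toy rules, min for AND, max aggregation, centroid defuzzification."""
    y = np.linspace(0, 1, 101)                      # liability universe
    # Rule 1: severe defect AND high foreseeability -> high liability
    w1 = min(trimf(defect_severity, 0.5, 1.0, 1.5),
             trimf(foreseeability, 0.5, 1.0, 1.5))
    # Rule 2: mild defect -> low liability
    w2 = trimf(defect_severity, -0.5, 0.0, 0.5)
    agg = np.maximum(np.minimum(w1, trimf(y, 0.5, 1.0, 1.5)),
                     np.minimum(w2, trimf(y, -0.5, 0.0, 0.5)))
    return float((y * agg).sum() / agg.sum())       # centroid

print(liability(0.8, 0.7))   # severe, fairly foreseeable -> high liability score
```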

4.
Three methods for formulating the kinematic equations of robots with rigid links are presented in this paper. The first and most common method in the robotics community is based on 4×4 homogeneous transformation matrices, the second is based on Lie algebra, and the third on screw theory expressed via dual quaternion algebra. These three methods are compared for their use in the kinematic analysis of robot arms. The basic theory and the transformation operators upon which each method is based are referenced. Three analytic algorithms are presented for the solution of the direct kinematic problem, one for each method, and the geometric significance of the transformation operators and parameters is explained. Finally, a comparative study of the computation and storage requirements of the three methods is worked out.
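A hedged sketch of the first (homogeneous matrix) formulation: forward kinematics as a chain of 4×4 Denavit-Hartenberg link transforms. The DH convention and the two-link example are assumptions for illustration; the paper's own parameterization may differ.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg 4x4 homogeneous link transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    """Chain the per-link transforms: T = A1 @ A2 @ ... @ An."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_matrix(*row)
    return T

# Planar 2R arm, both links 1 m, joints at 30 and 45 degrees (illustrative).
T = forward_kinematics([(np.radians(30), 0, 1.0, 0),
                        (np.radians(45), 0, 1.0, 0)])
print(T[:3, 3])   # end-effector position, approx. [1.125, 1.466, 0]
```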

5.
The role of information resource dictionary systems (data dictionary systems) is important in two phases of information resource management. The first is information requirements analysis and specification, a complex activity requiring data dictionary support, whose end result is the specification of an "Enterprise Model" embodying the major activities, processes, information flows, organizational constraints, and concepts. This role is examined in detail after analyzing the existing approaches to requirements analysis and specification. The second is information modeling, which uses the information in the Enterprise Model to construct a formal, implementation-independent database specification; several information models and support tools that may aid in transforming the initial requirements into the final logical database design are examined. The metadata (knowledge about both data and processes) contained in the data dictionary can be used to provide views of data for the specialized tools that make up the database design workbench. The role of data dictionary systems in the integration of tools is discussed.
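As a loose illustration of metadata "about both data and processes", the sketch below models data dictionary entries as plain records that several design tools could query. The structure is hypothetical, not the paper's dictionary schema.

```python
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    """One metadata record: knowledge about a data element or a process."""
    name: str
    kind: str                      # e.g. "entity", "attribute", "process", "flow"
    description: str = ""
    used_by: list[str] = field(default_factory=list)   # cross-references

# A fragment of an Enterprise Model held as data dictionary entries.
catalog = {
    "Customer": DictionaryEntry("Customer", "entity", "A party placing orders"),
    "Order": DictionaryEntry("Order", "entity", used_by=["Billing", "Shipping"]),
}
# A design tool can query the same metadata to derive its own view of the data.
print([e.name for e in catalog.values() if "Shipping" in e.used_by])
```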

6.
A. S. Nicholson, Ergonomics, 1989, 32(9):1125-1144
There has been much effort in recent years to quantify manual handling capabilities. Four main techniques have been used to this end: biomechanical modelling; the measurement of intra-abdominal pressure; psychophysics; and metabolic/physiological criteria. The aim of this study was to compare quantitatively the data produced by the first three techniques. The comparisons were limited to bimanual, sagittal plane lifting, which of all manual handling activities has been studied the most comprehensively, except that pushing and pulling data were compared from the psychophysics and intra-abdominal pressure ('force limits') databases. It was found that the 'force limits' data proposed weights for bimanual lifting in the sagittal plane which are lower than those reported to be psychophysically acceptable, except for lifting close to and around the shoulder. The closest agreement between the databases was for lifting from an origin above knuckle height. The 'force limits' data were found to propose weights of lift which are at a minimum when lifting with a freestyle posture from the floor, whereas the psychophysical technique proposes weights which are at a maximum when lifting from the floor. The psychophysical data were found to generate compressive forces at L5/S1, according to a static sagittal plane biomechanical model, about 10% in excess of the NIOSH action limit (NIOSH 1981) when lifting from the floor, although over other lifting ranges the compressive forces were less than the NIOSH action limit. Lifting the 'force limits' weights generated compressive forces which were on average 55% less than the action limit (range 45 to 60%) when lifting in an erect posture. The data for pushing according to the psychophysical and 'force limits' databases showed good agreement, but for pulling the 'force limits' weights were considerably greater than those selected psychophysically. The implications of these findings are discussed.
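To give a feel for how such a static sagittal-plane model turns a lift into an L5/S1 compressive force, here is a deliberately crude sketch: moments are balanced about L5/S1 and the erector spinae is assumed to supply the entire extensor moment through a short moment arm. Every anthropometric value below is an illustrative assumption, not a parameter of the study's model.

```python
import numpy as np

def l5s1_compression(load_kg, h_load_m, torso_kg=35.0, h_torso_m=0.20,
                     erector_arm_m=0.05, trunk_angle_deg=30.0):
    """Very simplified static sagittal-plane model (illustrative only):
    balance moments about L5/S1, assume the erector spinae supplies the
    whole extensor moment through a short moment arm, then add the
    compressive components of the supported weights."""
    g = 9.81
    moment = g * (load_kg * h_load_m + torso_kg * h_torso_m)  # flexor moment, N*m
    muscle_force = moment / erector_arm_m                      # extensor force, N
    # Weight components acting along the (inclined) spine axis.
    axial_weight = g * (load_kg + torso_kg) * np.cos(np.radians(trunk_angle_deg))
    return muscle_force + axial_weight                         # compression, N

# 15 kg held 40 cm in front of L5/S1 gives roughly 3 kN of compression,
# i.e. near the order of magnitude of the NIOSH action limit.
print(l5s1_compression(load_kg=15, h_load_m=0.40))
```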

7.
In this paper we investigate the performance of probability estimation methods for reliability analysis. Probability estimation methods typically construct the probability density function (PDF) of a system response using estimated statistical moments, and then perform reliability analysis based on the approximate PDF. In recent years, a number of probability estimation methods have been proposed, such as the Pearson system, saddlepoint approximation, the Maximum Entropy Principle (MEP), and the Johnson system. However, no general guideline suggesting the most appropriate probability estimation method has yet been proposed. In this study, we carry out a comparative study of the four probability estimation methods to derive such general guidelines. Several comparison metrics are proposed to quantify the accuracy of the PDF approximation, the cumulative distribution function (CDF) approximation, and tail probability estimation (i.e., reliability analysis). This comparative study gives insightful guidance for selecting the most appropriate probability estimation method for reliability analysis. The four probability estimation methods are extensively tested with one mathematical and two engineering examples, each of which considers eight different combinations of system response characteristics in terms of response boundedness, skewness, and kurtosis.
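A minimal sketch of the moment-based idea and of one comparison metric: fit a distribution to the estimated moments of the response, then compare its tail probability against a Monte Carlo reference. A normal moment match stands in here for the Pearson/Johnson/MEP/saddlepoint machinery, which needs more than two moments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def response(x):                       # some limit-state / system response
    return x[:, 0] ** 2 + 2.0 * x[:, 1]

x = rng.normal(size=(100_000, 2))      # two standard-normal inputs
y = response(x)

# Moment-matched normal approximation of the response PDF (the simplest
# member of the family of moment-based estimators discussed above).
approx = stats.norm(loc=y.mean(), scale=y.std())

threshold = 8.0
p_mc = (y > threshold).mean()                  # Monte Carlo reference
p_approx = approx.sf(threshold)                # tail probability from the fit
# The skewed true response shows how a two-moment fit misjudges the tail.
print(f"MC tail prob: {p_mc:.4e}, normal fit: {p_approx:.4e}")
```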

8.
A wide variety of uncertainty propagation methods exist in the literature; however, their relative merits are not well understood. In this paper, a comparative study of the performance of several representative uncertainty propagation methods, including a few newly developed methods that have received growing attention, is performed. Full factorial numerical integration, the univariate dimension reduction method, and the polynomial chaos expansion method are implemented and applied to several test problems. They are tested under different settings of performance nonlinearity, distribution types of the input random variables, and magnitude of the input uncertainty. The performance of these methods is compared in moment estimation, tail probability calculation, and probability density function construction, corresponding to a wide variety of scenarios of design under uncertainty, such as robust design and reliability-based design optimization. The insights gained are expected to direct designers in choosing the most applicable uncertainty propagation technique in design under uncertainty.
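As one concrete example of the compared techniques, below is a sketch of the univariate dimension reduction (UDR) method estimating the mean of a response of independent normal inputs with Gauss-Hermite quadrature. The test function and node count are illustrative assumptions.

```python
import numpy as np

def udr_mean(g, mu, sigma, n_nodes=5):
    """Mean of g(X) for independent normal X_i via univariate dimension
    reduction: g(x) ~ sum_i g(mu_1,..,x_i,..,mu_n) - (n-1) * g(mu)."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)   # nodes for exp(-t^2) weight
    n = len(mu)
    mean = -(n - 1) * g(np.asarray(mu, dtype=float))
    for i in range(n):
        for tj, wj in zip(t, w):
            xi = np.asarray(mu, dtype=float)
            xi[i] = mu[i] + np.sqrt(2.0) * sigma[i] * tj  # change of variables
            mean += (wj / np.sqrt(np.pi)) * g(xi)
    return mean

# Additive test response, for which UDR is exact: E = 1.75 + 2 + 0 = 3.75.
g = lambda x: x[0] ** 3 + 2 * x[0] + np.sin(x[1])
print(udr_mean(g, mu=[1.0, 0.0], sigma=[0.5, 1.0]))
```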

9.
Computers & Chemistry, 1988, 12(4):293-299
It is demonstrated that almost all the methods applied to computing equilibrium concentrations in ideal systems of a given number of phases can be derived by applying Newton's iteration formula to the system of equations describing equilibrium and mass balance. In practical realizations either the concentrations or their logarithms are treated as unknowns; this results in two mathematically different types of algorithms. Proper transformations of the inverses of the Jacobians give various numerically different versions. Most of these may be identified with well-known literature methods, some of which result from applying Newton's treatment and some from free-energy minimization (RAND). An important property of the algorithms which use concentrations as unknowns is that, regardless of the initial approximations, the mass-balance equations hold from the second iteration onward. For algorithms which use logarithms of concentrations, the equilibrium equations hold instead. However, exploiting these properties by omitting the seemingly unnecessary terms can perturb the solutions because of round-off error. The efficiency and stability of the algorithms under consideration are discussed on the basis of numerical examples.
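A sketch of the logarithmic-unknown variant: Newton iteration on a toy dissociation equilibrium A = B + C with unknowns u = ln c. Note that the equilibrium equation is linear in u, which is why, as stated above, the equilibrium equations hold from the second iteration for this class of algorithms. The constants and the numerical Jacobian are illustrative simplifications.

```python
import numpy as np

K, A_TOT = 1.0e-3, 0.1    # equilibrium constant and total amount (illustrative)

def residual(u):
    """Unknowns are logs of concentrations: u = ln[c_A, c_B, c_C].
    Row 0: equilibrium  ln([B][C]/[A]) = ln K   (linear in u).
    Rows 1-2: mass balance, nonlinear through exp(u)."""
    cA, cB, cC = np.exp(u)
    return np.array([u[1] + u[2] - u[0] - np.log(K),
                     cA + cB - A_TOT,
                     cB - cC])

def newton(f, u, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        r = f(u)
        if np.max(np.abs(r)) < tol:
            break
        # Forward-difference Jacobian; the literature methods derive it analytically.
        J = np.empty((len(u), len(u)))
        for j in range(len(u)):
            du = np.zeros_like(u)
            du[j] = 1e-7
            J[:, j] = (f(u + du) - r) / 1e-7
        u = u - np.linalg.solve(J, r)
    return u

c = np.exp(newton(residual, np.log([0.05, 0.01, 0.01])))
print(dict(zip("ABC", c)))   # converges to cB = cC ~ 0.0095, cA ~ 0.0905
```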

10.
11.
“Secure Device Pairing” or “Secure First Connect” is the process of bootstrapping a secure channel between two previously unassociated devices over some (usually wireless) human-imperceptible communication channel. The absence of prior security context and of a common trust infrastructure opens the door to the so-called Man-in-the-Middle (MiTM) and Evil Twin attacks. Mitigating these attacks requires some level of user involvement in the device pairing process. Prior research yielded a number of technically sound methods relying on various auxiliary human-perceptible out-of-band channels, e.g., visual, acoustic, and tactile. Such methods engage the user in authenticating the information exchanged over the human-imperceptible channel, thus defending against MiTM attacks and forming the basis for secure pairing. This paper reports on a comprehensive and comparative evaluation of notable secure device pairing methods, obtained via a thorough analysis of these methods in terms of both security and usability. The results help us identify the methods best suited for specific combinations of devices and human abilities. This work is an important step in understanding usability in one of the rare settings where a very wide range of users (not just specialists) are confronted with modern security technology.
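A minimal sketch of one out-of-band verification style, the short authentication string: both devices derive a short code from the exchanged public keys and the user visually compares them. Real pairing protocols add commitments so an attacker cannot grind keys to collide on a short code; the key material, label, and code length here are placeholders.

```python
import hashlib
import secrets

def short_auth_string(pub_a: bytes, pub_b: bytes, digits: int = 6) -> str:
    """Derive a short code both devices display after exchanging public keys
    over the human-imperceptible (e.g. wireless) channel. The user compares
    the two codes: a MiTM who substituted keys would change the digest."""
    digest = hashlib.sha256(b"pairing-v1|" + pub_a + b"|" + pub_b).digest()
    code = int.from_bytes(digest[:8], "big") % (10 ** digits)
    return f"{code:0{digits}d}"

# Stand-ins for the two devices' DH/ECDH public keys.
pk_device, pk_phone = secrets.token_bytes(32), secrets.token_bytes(32)
print("device shows:", short_auth_string(pk_device, pk_phone))
print("phone  shows:", short_auth_string(pk_device, pk_phone))   # must match
```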

12.
A typical procedure for designing multivariable controllers is the following: build a model of the multivariable process, choose the control structure, calculate the control parameters, test the controller (possibly in simulation), and then retune the controller parameters as necessary. This procedure is complex and time consuming even for scalar control loops; for multivariable controllers it is even more daunting. Automation of the design method is, and has been, a concern of many researchers, and a large number of papers have addressed relay autotuning of control systems. The choice of relay feedback to solve the design problem is justified by the possibility of integrating system identification and control into the same design strategy, giving birth to relay autotuning. In this paper, nine different relay autotuning methods for multivariable systems are compared. Most of these methods share common foundations but may differ in the tuning procedure, convergence, identification method, control structure, and performance achieved. The paper summarizes these methods and investigates the advantages and drawbacks of each algorithm.
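A sketch of the scalar building block behind these methods: a simulated relay feedback experiment on a first-order-plus-dead-time process, from which the ultimate gain Ku = 4d/(pi*a) and period Pu are read off and fed into Ziegler-Nichols PID rules. The process model and all constants are illustrative; multivariable autotuners add sequential or decentralized relay tests on top of this.

```python
import numpy as np

# Relay experiment on a first-order-plus-dead-time process (illustrative model).
dt, d, delay, tau, gain = 0.01, 1.0, 0.5, 2.0, 1.5
n = 6000
y, u = np.zeros(n), np.zeros(n)
for k in range(1, n):
    u[k] = d if y[k - 1] < 0 else -d                # ideal relay around setpoint 0
    u_del = u[max(k - int(delay / dt), 0)]          # transport delay
    y[k] = y[k - 1] + dt * (gain * u_del - y[k - 1]) / tau

# Amplitude and period of the sustained limit cycle (last third of the run).
tail = y[2 * n // 3:]
a = (tail.max() - tail.min()) / 2
crossings = np.where(np.diff(np.sign(tail)) > 0)[0]
Pu = np.mean(np.diff(crossings)) * dt               # ultimate period
Ku = 4 * d / (np.pi * a)                            # describing-function estimate

# Ziegler-Nichols PID from the ultimate point.
Kp, Ti, Td = 0.6 * Ku, Pu / 2, Pu / 8
print(f"a={a:.3f}  Pu={Pu:.2f}s  Ku={Ku:.2f}  ->  Kp={Kp:.2f} Ti={Ti:.2f} Td={Td:.2f}")
```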

13.
This paper presents five Artificial Intelligence (AI) methods to predict the final duration of a project. A methodology that involves Monte Carlo simulation, Principal Component Analysis, and cross-validation is proposed and can be applied by academics and practitioners. The performance of the AI methods is assessed by means of a large and topologically diverse dataset and is benchmarked against the best-performing Earned Value Management/Earned Schedule (EVM/ES) methods. The results show that the AI methods outperform the EVM/ES methods if the training and test sets are at least similar to one another. Additionally, the AI methods produce excellent early- and mid-stage forecasting results. A robustness experiment gradually increases the discrepancy between the training and test sets and demonstrates the limitations of the newly proposed AI methods.
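A hedged sketch of the kind of pipeline the methodology suggests: synthetic EVM-style indicators, PCA, a learning method, and cross-validated accuracy. The data generator and the random forest are placeholders; the abstract does not name the paper's five AI methods.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Toy stand-in for simulated project data: EVM-style tracking indicators
# (e.g. SPI/CPI snapshots at several completion stages) -> final duration.
n_projects, n_features = 400, 12
X = rng.normal(1.0, 0.15, size=(n_projects, n_features))
true_duration = 100 / X[:, :4].mean(axis=1) + rng.normal(0, 2, n_projects)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),          # the PCA step from the text
                      RandomForestRegressor(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, true_duration,
                         scoring="neg_mean_absolute_percentage_error", cv=5)
print(f"MAPE: {-scores.mean():.3%}")                # cross-validated accuracy
```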

14.
This paper describes the preparation, recording, analysis, and evaluation of a new speech corpus for Modern Standard Arabic (MSA). The speech corpus contains a total of 415 sentences recorded by 40 (20 male and 20 female) Arabic native speakers from 11 different Arab countries representing three major regions (Levant, Gulf, and Africa). Three hundred and sixty-seven sentences are considered phonetically rich and balanced and are used for training Arabic Automatic Speech Recognition (ASR) systems. Rich means that the set contains all phonemes of the Arabic language, whereas balanced means that it preserves the phonetic distribution of the language. The remaining 48 sentences were created for testing purposes; they are largely disjoint from the training sentences, with hardly any words in common. In order to evaluate the speech corpus, Arabic ASR systems were developed using the Carnegie Mellon University (CMU) Sphinx 3 tools at both the training and testing/decoding levels. The speech engine uses 3-emitting-state Hidden Markov Models (HMM) for tri-phone based acoustic models. Based on experimental analysis of about 8 hours of training speech data, the best acoustic model uses a continuous observation probability model with 16 Gaussian mixture distributions, and the state distributions were tied to 500 senones. The language model contains uni-grams, bi-grams, and tri-grams. For the same speakers with different sentences, the Arabic ASR systems obtained an average Word Error Rate (WER) of 9.70%. For different speakers with the same sentences, they obtained an average WER of 4.58%, whereas for different speakers with different sentences they obtained an average WER of 12.39%.
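For reference, WER figures such as those above are conventionally computed with a word-level Levenshtein alignment; a self-contained sketch (the example sentences are invented):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with the standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution plus one insertion against a 4-word reference -> 50%.
print(f"{word_error_rate('the test went well', 'the best went quite well'):.2%}")
```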

15.
SIGNAL is a part of the synchronous languages family, which are broadly used in the design of safety-critical real-time systems such as avionics, space systems, and nuclear power plants. There exist several semantics for SIGNAL, such as denotational semantics based on traces (called trace semantics), denotational semantics based on tags (called tagged model semantics), operational semantics presented in structural style through an inductive definition of the set of possible transitions, operational semantics defined by synchronous transition systems (STS), etc. However, there is little research on the equivalence between these semantics. In this work, we would like to prove the equivalence between the trace semantics and the tagged model semantics, to obtain a determined and precise semantics of the SIGNAL language. These two semantics each have several different definitions; we select appropriate ones and mechanize them in the Coq platform. The Coq expressions of the abstract syntax of SIGNAL and of the two semantic domains, i.e., the trace model and the tagged model, are also given. The distance between these two semantics discourages a direct proof of equivalence. Instead, we transform them to an intermediate model, which mixes the features of both the trace semantics and the tagged model semantics. Finally, we get a determined and precise semantics of SIGNAL.

16.
SIGNAL is a part of the synchronous languages family, which are broadly used in the design of safety-critical real-time systems such as avionics, space systems, and nuclear power plants. There exist several semantics for SIGNAL, such as denotational semantics based on traces (called trace semantics), denotational semantics based on tags (called tagged model semantics), operational semantics presented in structural style through an inductive definition of the set of possible transitions, operational semantics defined by synchronous transition systems (STS), etc. However, there is little research on the equivalence between these semantics. In this work, we would like to prove the equivalence between the trace semantics and the tagged model semantics, to obtain a determined and precise semantics of the SIGNAL language. These two semantics each have several different definitions; we select appropriate ones and mechanize them in the Coq platform. The Coq expressions of the abstract syntax of SIGNAL and of the two semantic domains, i.e., the trace model and the tagged model, are also given. The distance between these two semantics discourages a direct proof of equivalence. Instead, we transform them to an intermediate model, which mixes the features of both the trace semantics and the tagged model semantics. Finally, we get a determined and precise semantics of SIGNAL.

17.
Multimedia Tools and Applications - An elementary visual unit, the viseme, is considered in the paper in the context of preparing the feature vector as a main visual input component of...

18.
A comparative study on similarity-based fuzzy reasoning methods
If the given fact for an antecedent in a fuzzy production rule (FPR) does not match the antecedent of the rule exactly, the consequent can still be drawn by techniques such as fuzzy reasoning. Many existing fuzzy reasoning methods are based on Zadeh's Compositional Rule of Inference (CRI), which requires setting up a fuzzy relation between the antecedent and the consequent. There are other fuzzy reasoning methods which do not use Zadeh's CRI; among them, the similarity-based fuzzy reasoning methods, which make use of the degree of similarity between a given fact and the antecedent of the rule to draw the conclusion, are well known. In this paper, six similarity-based fuzzy reasoning methods are compared and analyzed, two of which are newly proposed by the authors. The comparisons are two-fold. One compares the six reasoning methods in drawing appropriate conclusions for a given set of FPRs. The other compares them on five issues: 1) the types of FPR handled by the methods; 2) the complexity of the methods; 3) the accuracy of the conclusions drawn; 4) the accuracy of the similarity measure; and 5) the multi-level reasoning capability. The results shed some light on how to select an appropriate fuzzy reasoning method under different environments.
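A minimal sketch of the shared idea behind similarity-based fuzzy reasoning: measure how similar the observed fact is to the rule's antecedent and, if it passes a threshold, derive the consequent modified by that degree. The Jaccard-style measure and the simple scaling scheme are one illustrative choice; the six compared methods differ precisely in these steps.

```python
import numpy as np

def similarity(fact, antecedent):
    """Jaccard-style similarity of two discretized fuzzy sets."""
    return np.minimum(fact, antecedent).sum() / np.maximum(fact, antecedent).sum()

def infer(fact, antecedent, consequent, threshold=0.5):
    """Fire the rule only if the fact is similar enough to the antecedent,
    then scale the consequent by the similarity degree (one simple scheme)."""
    s = similarity(fact, antecedent)
    return s * consequent if s >= threshold else None

x = np.linspace(0, 10, 101)
tall = np.clip((x - 5) / 3, 0, 1)            # antecedent: "height is tall"
fact = np.clip((x - 5.5) / 3, 0, 1)          # observed: "height is quite tall"
heavy = np.clip((x - 6) / 2, 0, 1)           # consequent: "weight is heavy"
out = infer(fact, tall, heavy)
print(None if out is None else out.max())    # strength of the drawn conclusion
```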

19.
Ergonomics, 2012, 55(9):1125-1144
There has been much effort in recent years to quantify manual handling capabilities. Four main techniques have been used to this end: biomechanical modelling; the measurement of intra-abdominal pressure; psychophysics; and metabolic/physiological criteria. The aim of this study was to compare quantitatively the data produced by the first three techniques. The comparisons were limited to bimanual, sagittal plane lifting, which of all manual handling activities has been studied the most comprehensively, except that pushing and pulling data were compared from the psychophysics and intra-abdominal pressure ('force limits') databases. It was found that the 'force limits' data proposed weights for bimanual lifting in the sagittal plane which are lower than those reported to be psychophysically acceptable, except for lifting close to and around the shoulder. The closest agreement between the databases was for lifting from an origin above knuckle height. The 'force limits' data were found to propose weights of lift which are at a minimum when lifting with a freestyle posture from the floor, whereas the psychophysical technique proposes weights which are at a maximum when lifting from the floor. The psychophysical data were found to generate compressive forces at L5/S1, according to a static sagittal plane biomechanical model, about 10% in excess of the NIOSH action limit (NIOSH 1981) when lifting from the floor, although over other lifting ranges the compressive forces were less than the NIOSH action limit. Lifting the 'force limits' weights generated compressive forces which were on average 55% less than the action limit (range 45 to 60%) when lifting in an erect posture. The data for pushing according to the psychophysical and 'force limits' databases showed good agreement, but for pulling the 'force limits' weights were considerably greater than those selected psychophysically. The implications of these findings are discussed.

20.
Active appearance models (AAMs) have been widely used in many face modeling and facial feature extraction methods. One of the problems with AAMs is that it is difficult to model a sufficiently wide range of human facial appearances, i.e., the patterns of intensities across a face image patch. Previous research has used principal component analysis (PCA) for facial appearance modeling, but there has been little analysis and comparison between PCA and the many other facial appearance modeling methods such as non-negative matrix factorization (NMF), local NMF (LNMF), and non-smooth NMF (nsNMF). The main contribution of this paper is to find a suitable facial appearance modeling method for AAMs by a comparative study. In the experiments, PCA, NMF, LNMF, and nsNMF were used to produce the appearance model of the AAMs, and the root mean square (RMS) errors of the detected feature points were analyzed using the AR and BERC face databases. The experimental results showed that (1) if the appearance variations of the testing face images were relatively less sparse than those of the training face images, the non-sparse methods (PCA, NMF) based AAMs outperformed the sparse methods (nsNMF, LNMF) based AAMs; and (2) if the appearance variations of the testing face images were relatively sparser than those of the training face images, the sparse method (nsNMF) based AAMs outperformed the non-sparse methods (PCA, NMF) based AAMs.
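To illustrate how such appearance bases are built and compared, the sketch below fits PCA and NMF to (random stand-in) face patches and measures RMS reconstruction error. Note the paper's criterion is the RMS error of detected feature points inside a full AAM search, for which reconstruction error is only a rough proxy.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(4)
# Stand-in for shape-normalized face patches: 200 samples, 24x24 pixels.
faces = rng.random((200, 576))

def rms_reconstruction_error(model, X):
    """Project onto the appearance basis and measure the RMS residual."""
    coded = model.fit_transform(X)
    recon = coded @ model.components_
    if isinstance(model, PCA):
        recon += model.mean_             # PCA codes deviations from the mean
    return np.sqrt(np.mean((X - recon) ** 2))

for model in (PCA(n_components=30), NMF(n_components=30, max_iter=500)):
    print(type(model).__name__, rms_reconstruction_error(model, faces))
```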
