Similar Documents
20 similar documents found.
1.
In this paper a generalized framework for face verification is proposed that employs discriminant techniques in all phases of elastic graph matching. The proposed algorithm is called the discriminant elastic graph matching (DEGM) algorithm. In the first step of DEGM, discriminant techniques are applied to the node feature vectors for feature selection. Next, the two local similarity values, i.e., the similarity measure for the projected node feature vector and the node deformation, are combined in a discriminant manner to form a new local similarity measure. Moreover, the new local similarity values at the nodes of the elastic graph are weighted by coefficients that are likewise derived from discriminant analysis in order to form a total similarity measure between faces. The proposed method exploits the individuality of the human face and the discriminant information of elastic graph matching in order to improve verification performance. We have applied the proposed scheme to a modified morphological elastic graph matching algorithm. All experiments were conducted on the XM2VTS database and yield very low error rates on the test sets.
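The fusion step can be pictured as a weighted combination of per-node measurements. Below is a minimal numpy sketch of that idea; the function name, the fusion weights and the per-node weights are placeholders for illustration, not the coefficients actually learned by DEGM.

    import numpy as np

    def total_graph_similarity(feat_sim, deform, fusion_w, node_w):
        """Fuse per-node feature similarity and deformation into one score.

        feat_sim : (N,) similarity of each projected node feature vector
        deform   : (N,) deformation cost of each graph node
        fusion_w : (2,) weights combining the two local measures (assumed learned)
        node_w   : (N,) per-node weights (assumed learned by discriminant analysis)
        """
        local = fusion_w[0] * feat_sim - fusion_w[1] * deform  # new local similarity
        return float(np.dot(node_w, local))                    # weighted total similarity

    # toy usage with random values standing in for real graph measurements
    rng = np.random.default_rng(0)
    score = total_graph_similarity(rng.random(64), rng.random(64),
                                   np.array([0.8, 0.2]), np.full(64, 1 / 64))
    print(score)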

2.
This paper presents a novel framework for effective video semantic analysis. The framework has two major components, namely, optical flow tensor (OFT) and hidden Markov models (HMMs). OFT and HMMs are employed because: (1) motion is one of the fundamental characteristics reflecting the semantic information in video, so an OFT-based feature extraction method is developed to make full use of the motion information. Thereafter, to preserve the structural and discriminative information presented by OFT, general tensor discriminant analysis (GTDA) is used for dimensionality reduction. Finally, linear discriminant analysis (LDA) is utilized to further reduce the feature dimension for discriminative motion information representation; and (2) video is an information-intensive sequential medium characterized by its context-sensitive nature, so video sequences can be analyzed more effectively with temporal modeling tools. In this framework, we use HMMs to model different levels of semantic units (SUs), e.g., shots and events. Experimental results are reported to demonstrate the advantages of the proposed framework for semantic analysis of basketball video sequences, and cross-validation illustrates its feasibility and effectiveness.
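For the temporal-modeling component, the likelihood of an observation sequence under an HMM is computed with the forward algorithm. The sketch below assumes the OFT/GTDA/LDA features have already been quantized into discrete symbols; the model parameters are illustrative, not those trained in the paper.

    import numpy as np

    def forward_loglik(obs, pi, A, B):
        """Scaled forward algorithm: log P(obs | HMM) for a discrete-symbol HMM.

        obs : list of observation symbol indices
        pi  : (S,) initial state probabilities
        A   : (S, S) transition matrix, B : (S, M) emission matrix
        """
        alpha = pi * B[:, obs[0]]
        logp = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            c = alpha.sum()                  # scaling avoids numerical underflow
            logp += np.log(c)
            alpha = alpha / c
        return logp

    # toy 2-state, 3-symbol model; in practice one HMM is trained per semantic
    # unit and a shot is assigned to the unit with the highest log-likelihood
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.2, 0.8]])
    B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(forward_loglik([0, 1, 2, 2, 1], pi, A, B))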

3.
Application of the sustainability concept to environmental projects implies that at least three feature categories (i.e., economic, social, and environmental) must be taken into account by applying a participative multi-criteria analysis (MCA). However, MCA results depend crucially on the methodology applied to estimate the relative criterion weights. By using a logically consistent set of data and methods (i.e., linear regression [LR], factor analysis [FA], the revised Simos procedure [RSP], and the analytic hierarchy process [AHP]), the present study revealed that mistakes from using one weight-estimation method rather than an alternative are non-significant in terms of satisfaction of specified acceptable standards (i.e., a risk of up to 1% of erroneously rejecting an option), but significant for comparisons between options (i.e., a risk of up to 11% of choosing a worse option by rejecting a better option). In particular, the risks of these mistakes are larger if differences in both the statistical or computational algorithms and the data sets are involved (e.g., LR vs. AHP). The study also revealed that the choice of weight-estimation method should depend on the estimated and normalised score differences for the economic, social, and environmental features. On average, however, some pairs of weight-estimation methods are more similar than others (e.g., AHP vs. RSP and LR vs. AHP are the most and the least similar, respectively), and some single weight-estimation methods are more reliable (i.e., FA > RSP > AHP > LR).
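One of the weight-estimation methods named above, AHP, derives criterion weights from a pairwise comparison matrix. The sketch below shows the standard principal-eigenvector computation with a made-up 3x3 comparison matrix for the economic, social and environmental criteria; it illustrates AHP in general, not the study's data.

    import numpy as np

    def ahp_weights(pairwise):
        """Criterion weights from an AHP pairwise comparison matrix.

        pairwise[i, j] states how much more important criterion i is than j
        (reciprocal matrix). Weights are the normalised principal eigenvector.
        """
        vals, vecs = np.linalg.eig(pairwise)
        principal = np.real(vecs[:, np.argmax(np.real(vals))])
        w = np.abs(principal)
        return w / w.sum()

    # toy 3-criterion example: economic vs social vs environmental
    P = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(ahp_weights(P))   # roughly [0.65, 0.23, 0.12]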

4.
Business processes leave trails in a variety of data sources (e.g., audit trails, databases, and transaction logs). Hence, every process instance can be described by a trace, i.e., a sequence of events. Process mining techniques are able to extract knowledge from such traces and provide a welcome extension to the repertoire of business process analysis techniques. Recently, process mining techniques have been adopted in various commercial BPM systems (e.g., BPM|one, Futura Reflect, ARIS PPM, Fujitsu Interstage, Businesscape, Iontas PDF, and QPR PA). Unfortunately, traditional process discovery algorithms have problems dealing with less structured processes. The resulting models are difficult to comprehend or even misleading. Therefore, we propose a new approach based on trace alignment. The goal is to align traces in such a way that event logs can be explored easily. Trace alignment can be used to explore the process in the early stages of analysis and to answer specific questions in later stages of analysis. Hence, it complements existing process mining techniques focusing on discovery and conformance checking. The proposed techniques have been implemented as plugins in the ProM framework. We report the results of trace alignment on one synthetic and two real-life event logs, and show that trace alignment has significant promise in process diagnostic efforts.
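The core operation behind trace alignment is lining up event sequences so that common behaviour matches and deviations become visible. The sketch below shows only the pairwise global-alignment building block (Needleman-Wunsch) with arbitrary unit scores; the paper's technique aligns many traces at once and is not reproduced here.

    def align_traces(t1, t2, match=2, mismatch=-1, gap=-1):
        """Global alignment (Needleman-Wunsch) of two event traces."""
        n, m = len(t1), len(t2)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                s = match if t1[i - 1] == t2[j - 1] else mismatch
                score[i][j] = max(score[i - 1][j - 1] + s,
                                  score[i - 1][j] + gap,
                                  score[i][j - 1] + gap)
        # traceback: rebuild the two aligned rows, '-' marks a gap
        a1, a2, i, j = [], [], n, m
        while i > 0 or j > 0:
            diag = match if (i > 0 and j > 0 and t1[i - 1] == t2[j - 1]) else mismatch
            if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + diag:
                a1.append(t1[i - 1]); a2.append(t2[j - 1]); i -= 1; j -= 1
            elif i > 0 and score[i][j] == score[i - 1][j] + gap:
                a1.append(t1[i - 1]); a2.append('-'); i -= 1
            else:
                a1.append('-'); a2.append(t2[j - 1]); j -= 1
        return ''.join(reversed(a1)), ''.join(reversed(a2))

    # each letter stands for an event type in two process traces
    print(align_traces("abcdef", "abdf"))   # -> ('abcdef', 'ab-d-f')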

5.
Let f be a univariate polynomial with real coefficients, f ∈ ℝ[X]. Subdivision algorithms based on algebraic techniques (e.g., Sturm or Descartes methods) are widely used for isolating the real roots of f in a given interval. In this paper, we consider a simple subdivision algorithm whose primitives are purely numerical (e.g., function evaluation). The complexity of this algorithm is adaptive because the algorithm makes decisions based on local data. The complexity analysis of adaptive algorithms (and of this algorithm in particular) is a new challenge for computer science. In this paper, we compute the size of the subdivision tree for the SqFreeEVAL algorithm. The SqFreeEVAL algorithm is an evaluation-based numerical algorithm which is well known in several communities. The algorithm itself is simple, but prior attempts to compute its complexity have proven to be quite technical and have yielded sub-optimal results. Our main result is a simple O(d(L + ln d)) bound on the size of the subdivision tree for the SqFreeEVAL algorithm on the benchmark problem of isolating all real roots of an integer polynomial f of degree d whose coefficients can be written with at most L bits. Our proof uses two amortization-based techniques: first, we use the algebraic amortization technique of the standard Mahler-Davenport root bounds to interpret the integral in terms of d and L; second, we use a continuous amortization technique based on an integral to bound the size of the subdivision tree. This paper is the first to use the novel analysis technique of continuous amortization to derive state-of-the-art complexity bounds.
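The algorithm discussed here subdivides an interval and uses only function evaluation to discard or accept sub-intervals. Below is a simplified EVAL-style sketch, not the exact SqFreeEVAL procedure from the paper: interval Horner evaluation gives an enclosure of f on a sub-interval, a C0 test excludes intervals that cannot contain a root, and a C1 test on f' accepts intervals that contain at most one root. The polynomial in the usage example is assumed square-free.

    def ihorner(coeffs, lo, hi):
        """Interval Horner: an interval guaranteed to contain f([lo, hi]).
        coeffs run from the leading coefficient down to the constant term."""
        a = b = float(coeffs[0])
        for c in coeffs[1:]:
            prods = (a * lo, a * hi, b * lo, b * hi)
            a, b = min(prods) + c, max(prods) + c
        return a, b

    def horner(coeffs, x):
        v = 0.0
        for c in coeffs:
            v = v * x + c
        return v

    def eval_isolate(coeffs, lo, hi, out, stats, min_width=1e-12):
        """EVAL-style subdivision for a square-free polynomial."""
        stats["nodes"] += 1                              # size of the subdivision tree
        a, b = ihorner(coeffs, lo, hi)
        if a > 0 or b < 0:                               # C0: no root in [lo, hi]
            return
        d = len(coeffs) - 1
        dcoeffs = [k * c for k, c in zip(range(d, 0, -1), coeffs)]
        da, db = ihorner(dcoeffs, lo, hi)
        if da > 0 or db < 0:                             # C1: f is monotone here
            if horner(coeffs, lo) * horner(coeffs, hi) <= 0:
                out.append((lo, hi))                     # isolates exactly one root
            return
        if hi - lo < min_width:                          # float-precision safeguard
            out.append((lo, hi))
            return
        mid = (lo + hi) / 2
        eval_isolate(coeffs, lo, mid, out, stats)
        eval_isolate(coeffs, mid, hi, out, stats)

    # isolate the three real roots of x^3 - 3x + 1 on [-3, 3]
    roots, stats = [], {"nodes": 0}
    eval_isolate([1.0, 0.0, -3.0, 1.0], -3.0, 3.0, roots, stats)
    print(roots, stats)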

6.
This paper evaluates the statistical methodologies of cluster analysis, discriminant analysis, and Logit analysis in the examination of intrusion detection data. The research is based on a sample of 1200 random observations for 42 variables of the KDD-99 database, which contains ‘normal’ and ‘bad’ connections. The results indicate that Logit analysis is more effective than cluster or discriminant analysis in intrusion detection. Specifically, according to the Kappa statistic, which makes full use of all the information contained in a confusion matrix, Logit analysis (K = 0.629) ranked first, followed by discriminant analysis (K = 0.583) in second place and cluster analysis (K = 0.460) in third.
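The Kappa statistic used for the ranking corrects observed agreement for the agreement expected by chance. A minimal computation from a confusion matrix is sketched below; the counts are made up for illustration and are not the study's data.

    import numpy as np

    def cohens_kappa(cm):
        """Cohen's kappa from a confusion matrix (rows: actual, cols: predicted)."""
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                                  # observed agreement
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
        return (po - pe) / (1 - pe)

    # toy 2x2 matrix for 'normal' vs 'bad' connections (made-up counts)
    print(round(cohens_kappa([[520, 80],
                              [105, 495]]), 3))   # prints the kappa for these counts (~0.69)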

7.
8.
Linear discriminant analysis (LDA) is one of the most popular supervised dimensionality reduction (DR) techniques; it obtains discriminant projections by maximizing the ratio of average-case between-class scatter to average-case within-class scatter. Two recent discriminant analysis algorithms (DAs), minimal distance maximization (MDM) and worst-case LDA (WLDA), obtain projections by optimizing worst-case scatters. In this paper, we develop a new LDA framework called LDA with worst between-class separation and average within-class compactness (WSAC) by maximizing the ratio of worst-case between-class scatter to average-case within-class scatter. This can be achieved by relaxing the trace ratio optimization to a distance metric learning problem. Comparative experiments demonstrate its effectiveness. In addition, DA counterparts using the local geometry of the data and the kernel trick can likewise be embedded into our framework and solved in the same way.
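The criterion can be made concrete for a single projection direction: the between-class term is the smallest (worst-case) projected separation over all class pairs, while the within-class term is the average projected scatter. The sketch below only evaluates that ratio for a given direction; the actual WSAC optimization via the distance-metric-learning relaxation is not shown.

    import numpy as np

    def wsac_objective(X, y, w):
        """Worst-case between-class over average within-class scatter along w."""
        w = w / np.linalg.norm(w)
        classes = np.unique(y)
        means = {c: X[y == c].mean(axis=0) for c in classes}
        # worst (smallest) projected separation over all class pairs
        between = min(float(np.dot(w, means[a] - means[b])) ** 2
                      for i, a in enumerate(classes) for b in classes[i + 1:])
        # average projected within-class scatter
        within = np.mean([np.var(X[y == c] @ w) for c in classes])
        return between / within

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 1.0, size=(50, 2)) for m in ([0, 0], [4, 0], [0, 5])])
    y = np.repeat([0, 1, 2], 50)
    print(wsac_objective(X, y, np.array([1.0, 1.0])))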

9.
This paper investigates the use of statistical dimensionality reduction (DR) techniques for discriminative low-dimensional embedding to enable affective movement recognition. Human movements are defined by a collection of sequential observations (time-series features) representing body joint angle or joint Cartesian trajectories. In this work, these sequential observations are modelled as temporal functions using B-spline basis function expansion, and dimensionality reduction techniques are adapted to enable application to the functional observations. The DR techniques adapted here are: Fisher discriminant analysis (FDA), supervised principal component analysis (PCA), and Isomap. These functional DR techniques, along with functional PCA, are applied to affective human movement datasets and their performance is evaluated using leave-one-out cross validation with a one-nearest-neighbour classifier in the corresponding low-dimensional subspaces. The results show that functional supervised PCA outperforms the other DR techniques examined in terms of classification accuracy and time resource requirements.
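The evaluation protocol (project to a low-dimensional subspace, then leave-one-out cross validation with a 1-NN classifier) can be reproduced with scikit-learn. The sketch below uses plain PCA on synthetic trajectories rather than the functional, B-spline-based variants studied in the paper.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline

    # synthetic stand-in: 60 movements x 100 time samples, two affect labels
    rng = np.random.default_rng(1)
    X = np.r_[np.sin(np.linspace(0, 6, 100)) + 0.3 * rng.standard_normal((30, 100)),
              np.cos(np.linspace(0, 6, 100)) + 0.3 * rng.standard_normal((30, 100))]
    y = np.r_[np.zeros(30), np.ones(30)]

    clf = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=1))
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out 1-NN accuracy: {acc:.2f}")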

10.
A potentiometric electronic tongue with 36 cross-sensitivity lipo/polymeric membranes was built and applied to semi-quantitative and quantitative analysis of non-alcoholic beverages. A total of 16 commercial fruit juices (e.g., orange, pineapple, mango and peach) from five different brands were studied. In the semi-quantitative approach, the signal profiles recorded by the device were used together with a stepwise linear discriminant analysis to differentiate four beverage groups with different fruit juice contents: >30%, 14-30%, 6-10% and <4%. The model, with two discriminant functions based on the signals of only four polymeric membranes, explained 99% of the total variability of the experimental data and was able to classify the studied samples into the correct group with an overall sensitivity and specificity of 100% for the original data, and greater than 93% for the cross-validation procedure. The signals were also used to obtain multiple linear regression (MLR) and partial least squares (PLS) calibration models to estimate and predict the concentrations of fructose and glucose in the soft drinks. The linear models established were based on the signals recorded by 16 polymeric membranes and were able to estimate and predict satisfactorily (cross-validation) the concentrations of the two sugars (R2 greater than 0.96 and 0.84, respectively).
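The quantitative step is a multivariate calibration: sensor signals in, sugar concentration out, assessed by cross-validation. A minimal scikit-learn sketch of a PLS calibration with cross-validated R² is shown below, on synthetic data standing in for the membrane signals.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score

    # synthetic stand-in: 40 juice samples x 16 membrane potentials, with the
    # sugar concentration generated as a noisy linear mix of the signals
    rng = np.random.default_rng(2)
    X = rng.normal(size=(40, 16))
    y = X @ rng.normal(size=16) + 0.2 * rng.normal(size=40)

    pls = PLSRegression(n_components=4)
    y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
    print(f"cross-validated R^2: {r2_score(y, y_cv):.2f}")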

11.
This paper investigates the commonly overlooked “sensitivity” of sensitivity analysis (SA) to what we refer to as parameter “perturbation scale”, which can be defined as a prescribed size of the sensitivity-related neighbourhood around any point in the parameter space (analogous to step size Δx for numerical estimation of derivatives). We discuss that perturbation scale is inherent to any (local and global) SA approach, and explain how derivative-based SA approaches (e.g., method of Morris) focus on small-scale perturbations, while variance-based approaches (e.g., method of Sobol) focus on large-scale perturbations. We employ a novel variogram-based approach, called Variogram Analysis of Response Surfaces (VARS), which bridges derivative- and variance-based approaches. Our analyses with different real-world environmental models demonstrate significant implications of subjectivity in the perturbation-scale choice and the need for strategies to address these implications. It is further shown how VARS can uniquely characterize the perturbation-scale dependency and generate sensitivity measures that encompass all sensitivity-related information across the full spectrum of perturbation scales.
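The perturbation-scale idea can be made concrete with a directional variogram of the model response: half the mean squared change in output for a perturbation of size h in one parameter. Small h behaves like a derivative-based (Morris-style) measure, large h like a variance-based (Sobol-style) measure. The sketch below uses a made-up two-parameter response surface, not one of the environmental models from the study, and is not the full VARS method.

    import numpy as np

    def directional_variogram(f, x_samples, scales, dim):
        """Empirical variogram of the response f along parameter `dim`:
        gamma(h) = 0.5 * E[(f(x + h * e_dim) - f(x))^2], estimated by sampling."""
        gammas = []
        for h in scales:
            step = np.zeros(x_samples.shape[1])
            step[dim] = h
            diffs = np.array([f(x + step) - f(x) for x in x_samples])
            gammas.append(0.5 * np.mean(diffs ** 2))
        return np.array(gammas)

    # made-up response surface: smooth in parameter 0, wiggly in parameter 1
    f = lambda x: x[0] ** 2 + 0.1 * np.sin(20 * x[1])
    rng = np.random.default_rng(3)
    X = rng.uniform(0, 1, size=(500, 2))
    for d in (0, 1):
        print("parameter", d, np.round(directional_variogram(f, X, [0.01, 0.05, 0.1, 0.3], d), 4))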

12.
Spatial variation of land-surface properties is a major challenge to ecological and biogeochemical studies in the Amazon basin. The scale dependence of biophysical variation (e.g., mixtures of vegetation cover types), as depicted in Landsat observations, was assessed for the common land-cover types bordering the Tapajós National Forest, Central Brazilian Amazon. We first collected hyperspectral signatures of vegetation and soils contributing to the optical reflectance of landscapes in a 600-km2 region. We then employed a spectral mixture model (AutoMCU) that uses bundles of the field spectra with Monte Carlo analysis to estimate sub-pixel cover of green plants, senescent vegetation and soils in Landsat Thematic Mapper (TM) pixels. The method proved useful for quantifying biophysical variability within and between individual land parcels (e.g., across different pasture conditions). Image textural analysis was then performed to assess surface variability at the inter-pixel scale, and its results were compared to those of the spectral mixture analysis (sub-pixel scale). We tested the hypothesis that very high resolution, sub-pixel estimates of surface constituents are needed to detect important differences in the biophysical structure of deforested lands. Across a range of deforestation categories common to the region, there was a strong correlation between the fractional green and senescent vegetation cover values derived from spectral unmixing and the texture-analysis variance results (r2 > 0.85, p < 0.05). These results support the argument that, in deforested areas, biophysical heterogeneity at the scale of individual field plots (sub-pixel) is similar to that of whole clearings when viewed from the Landsat vantage point.
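The sub-pixel estimates come from a linear spectral mixture model: each pixel's reflectance is treated as a weighted sum of endmember spectra (green vegetation, senescent vegetation, soil) and the weights are the cover fractions. Below is a minimal non-negative least-squares unmixing sketch with made-up six-band spectra; the paper's AutoMCU additionally draws Monte Carlo bundles of field spectra, which is not shown.

    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixel, endmembers):
        """Sub-pixel cover fractions from a linear mixture model.

        endmembers: (n_bands, n_materials) reflectance spectra, columns for
        green vegetation, senescent vegetation and soil. Fractions are solved
        with non-negative least squares and renormalised to sum to one.
        """
        frac, _ = nnls(endmembers, pixel)
        return frac / frac.sum()

    # toy 6-band endmember spectra (made-up numbers) and a mixed pixel
    E = np.array([[0.05, 0.30, 0.20],
                  [0.08, 0.32, 0.22],
                  [0.04, 0.35, 0.25],
                  [0.45, 0.40, 0.30],
                  [0.30, 0.45, 0.35],
                  [0.15, 0.42, 0.38]])     # columns: green, senescent, soil
    pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
    print(np.round(unmix(pixel, E), 2))    # ~[0.6, 0.3, 0.1]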

13.
Binary discriminant functions are often used to identify areas that changed through time in remote sensing change detection studies. Traditionally, a single change-enhanced image has been used to optimize the binary discriminant function with a few (e.g., 5-10) discrete thresholds using a trial-and-error method. Im et al. [Im, J., Rhee, J., Jensen, J. R., & Hodgson, M. E. (2007). An automated binary change detection model using a calibration approach. Remote Sensing of Environment, 106, 89-105] developed an automated calibration model for optimizing the binary discriminant function by autonomously testing thousands of thresholds. However, the automated model may be time-consuming, especially when multiple change-enhanced images are used as inputs together, since the model is based on an exhaustive search technique. This paper describes the development of a computationally efficient search technique for identifying optimum threshold(s) in a remote sensing spectral search space. The new algorithm is based on “systematic searching.” Two additional heuristic optimization algorithms (i.e., hill climbing and simulated annealing) were examined for comparison. A case study using QuickBird and IKONOS satellite imagery was performed to evaluate the effectiveness of the proposed algorithm. The proposed systematic search technique reduced the processing time required to identify the optimum binary discriminant function without decreasing accuracy. The other two optimizing search algorithms also reduced the processing time but failed to find the global maximum for some spectral features.
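The trade-off between exhaustive threshold calibration and heuristic search can be illustrated on a one-dimensional accuracy curve with two peaks: exhaustive evaluation always finds the global maximum, while a greedy hill climb may stop at a local one. The sketch below is a generic illustration of that behaviour, not the paper's systematic search algorithm.

    import numpy as np

    def exhaustive_best(thresholds, score):
        """Evaluate every candidate threshold (the baseline strategy)."""
        return max(thresholds, key=score)

    def hill_climb_best(thresholds, score, start_idx=0):
        """Greedy neighbour-to-neighbour search; cheaper, but it may stop
        at a local maximum of the accuracy curve."""
        i = start_idx
        while True:
            neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(thresholds)]
            best = max([i] + neighbours, key=lambda j: score(thresholds[j]))
            if best == i:
                return thresholds[i]
            i = best

    # toy change/no-change accuracy curve with two humps
    thresholds = np.linspace(0, 1, 1001)
    accuracy = lambda t: (0.7 * np.exp(-((t - 0.8) ** 2) / 0.01)
                          + 0.5 * np.exp(-((t - 0.2) ** 2) / 0.01))
    print(exhaustive_best(thresholds, accuracy))                  # ~0.8 (global maximum)
    print(hill_climb_best(thresholds, accuracy, start_idx=100))   # ~0.2 (trapped locally)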

14.
Computer-based simulations are increasingly being used in educational assessment. In most cases, the simulation-based assessment (SBA) is used for formative assessment, which can be defined as assessment for learning, but as research on the topic continues to grow, possibilities for summative assessment, which can be defined as assessment of learning, are also emerging. The current study contributes to research on the latter category of assessment. In this article, we present a methodology for scoring the interactive and complex behavior of students in a specific type of SBA, namely, a Multimedia-based Performance Assessment (MBPA), which is used for a summative assessment purpose. The MBPA is used to assess the knowledge, skills, and abilities of confined space guard (CSG) students. A CSG supervises operations that are carried out in a confined space (e.g., a tank or silo). We address two specific challenges in this article: the evidence identification challenge (i.e., scoring interactive task performance), and the evidence accumulation challenge (i.e., accumulating scores in a psychometric model). Using expert ratings on the essence and difficulty of actions in the MBPA, we answer the first challenge by demonstrating that interactive task performance in MBPA can be scored. Furthermore, we answer the second challenge by recoding the expert ratings in conditional probability tables that can be used in a Bayesian Network (a psychometric model for reasoning under uncertainty and complexity). Finally, we validate and illustrate the presented methodology through the analysis of the response data of 57 confined space guard students who performed in the MBPA.
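The evidence accumulation challenge amounts to updating belief about a latent proficiency as scored actions come in. The sketch below is a simplified stand-in for the paper's Bayesian Network: a single two-level skill node, expert-style conditional probabilities that are invented for illustration, and naive-Bayes updating over observed action outcomes.

    import numpy as np

    # hypothetical two-level latent skill ("master" / "non-master") with a prior,
    # and invented conditional probabilities of performing each scored action
    # correctly given the skill level (stand-ins for the paper's expert CPTs)
    prior = np.array([0.5, 0.5])                 # P(master), P(non-master)
    p_correct = np.array([[0.90, 0.30],          # action 1: P(correct | skill)
                          [0.75, 0.40],          # action 2
                          [0.85, 0.20]])         # action 3

    def posterior_skill(observed, prior=prior, p_correct=p_correct):
        """P(skill | observed action outcomes), assuming actions are
        conditionally independent given the skill level."""
        post = prior.copy()
        for action, correct in enumerate(observed):
            like = p_correct[action] if correct else 1 - p_correct[action]
            post = post * like
            post /= post.sum()
        return post

    print(np.round(posterior_skill([1, 1, 0]), 3))   # P(skill | 2 of 3 actions correct)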

15.
ABC analysis is a popular and effective method used to classify inventory items into specific categories that can be managed and controlled separately. Conventional ABC analysis classifies inventory items into three categories, A, B, or C, based on the annual dollar usage of each item. Multi-criteria inventory classification has been proposed by a number of researchers in order to take other important criteria into consideration. These researchers have compared artificial-intelligence (AI)-based classification techniques with traditional multiple discriminant analysis (MDA). Examples of these AI-based techniques include support vector machines (SVMs), backpropagation networks (BPNs), and the k-nearest neighbor (k-NN) algorithm. To test the effectiveness of these techniques, classification results based on four benchmark techniques are compared. The results show that AI-based techniques demonstrate superior accuracy to MDA. Statistical analysis reveals that SVM enables more accurate classification than the other AI-based techniques. This finding suggests the possibility of implementing AI-based techniques for multi-criteria ABC analysis in enterprise resource planning (ERP) systems.
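Conventional single-criterion ABC analysis can be sketched in a few lines: rank items by annual dollar usage and split them by cumulative share of total usage. The 80%/95% cut-offs below are common rules of thumb, not values from the paper, and the item values are invented.

    def abc_classify(usage, a_cut=0.8, b_cut=0.95):
        """Conventional ABC analysis: rank items by annual dollar usage and
        label the items covering roughly the first 80% of total usage as A,
        the next 15% as B, and the remainder as C."""
        total = sum(usage.values())
        labels, running = {}, 0.0
        for item, value in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
            running += value
            share = running / total
            labels[item] = "A" if share <= a_cut else "B" if share <= b_cut else "C"
        return labels

    usage = {"i1": 52000, "i2": 20500, "i3": 11000, "i4": 8000,
             "i5": 4500, "i6": 2200, "i7": 1200, "i8": 600}
    print(abc_classify(usage))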

16.
Fisher linear discriminant analysis (FLDA) finds a set of optimal discriminating vectors by maximizing the Fisher criterion, i.e., the ratio of the between-class scatter to the within-class scatter. One of its major disadvantages is that the number of discriminating vectors it can find is bounded from above by C-1 for a C-class problem. In this paper, for the binary-class problem, we propose an alternative FLDA that breaks through this limitation by replacing only the original between-class scatter with a new scatter measure. The experimental results show that our approach gives impressive recognition performance compared with both the Fisher approach and the linear SVM.
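For reference, standard binary FLDA admits exactly one discriminating vector (C-1 = 1 for C = 2), obtained in closed form as w = Sw^{-1}(m1 - m2). The sketch below computes that classical direction on synthetic data; the paper's alternative between-class scatter measure is not reproduced.

    import numpy as np

    def fisher_direction(X1, X2):
        """Fisher discriminant direction for a binary problem:
        w = Sw^{-1} (m1 - m2), maximising between- over within-class scatter."""
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
              + np.cov(X2, rowvar=False) * (len(X2) - 1))
        return np.linalg.solve(Sw, m1 - m2)

    rng = np.random.default_rng(4)
    X1 = rng.normal([0, 0], 1.0, size=(100, 2))
    X2 = rng.normal([3, 1], 1.0, size=(100, 2))
    w = fisher_direction(X1, X2)
    threshold = w @ (X1.mean(axis=0) + X2.mean(axis=0)) / 2
    print("direction:", np.round(w, 3), " midpoint threshold:", round(float(threshold), 3))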

17.
Several methods to select variables that are subsequently used in discriminant analysis are proposed and analysed. The aim is to find, from among a set of m variables, a smaller subset which enables an efficient classification of cases. Reducing dimensionality has some advantages, such as reducing the costs of data acquisition, better understanding of the final classification model, and an increase in the efficiency and efficacy of the model itself. The specific problem consists in finding, for a small integer value of p, the size-p subset of the original variables that yields the greatest percentage of hits in the discriminant analysis. To solve this problem, a series of techniques based on metaheuristic strategies is proposed. After performing some tests, it is found that they obtain significantly better results than the stepwise, backward or forward methods used by classic statistical packages. The way these methods work is illustrated with several examples.
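The classical baselines mentioned at the end (stepwise/forward selection) are easy to sketch: greedily add the variable that most improves cross-validated classification accuracy until p variables are chosen. The example below uses scikit-learn's LDA and the bundled wine dataset purely for illustration; the paper's metaheuristics search the subset space far more broadly.

    from sklearn.datasets import load_wine
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, p, cv=5):
        """Greedy forward selection of p variables for LDA (a simple baseline)."""
        selected = []
        for _ in range(p):
            candidates = [j for j in range(X.shape[1]) if j not in selected]
            scores = {j: cross_val_score(LinearDiscriminantAnalysis(),
                                         X[:, selected + [j]], y, cv=cv).mean()
                      for j in candidates}
            selected.append(max(scores, key=scores.get))
        return selected, scores[selected[-1]]

    X, y = load_wine(return_X_y=True)
    subset, acc = forward_select(X, y, p=3)
    print(f"chosen variables: {subset}, CV hit rate: {acc:.3f}")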

18.
This paper studies the problem of matching two unsynchronized video sequences of the same dynamic scene, recorded by different stationary uncalibrated video cameras. The matching is done both in time and in space, where the spatial matching can be modeled by a homography (for 2D scenarios) or by a fundamental matrix (for 3D scenarios). Our approach is based on matching space-time trajectories of moving objects, in contrast to matching interest points (e.g., corners), as done in regular feature-based image-to-image matching techniques. The sequences are matched in space and time by enforcing consistent matching of all points along corresponding space-time trajectories. By exploiting the dynamic properties of these space-time trajectories, we obtain sub-frame temporal correspondence (synchronization) between the two video sequences. Furthermore, using trajectories rather than feature-points significantly reduces the combinatorial complexity of the spatial point-matching problem when the search space is large. This benefit allows for matching information across sensors in situations which are extremely difficult when only image-to-image matching is used, including: (a) matching under large scale (zoom) differences, (b) very wide base-line matching, and (c) matching across different sensing modalities (e.g., IR and visible-light cameras). We show examples of recovering homographies and fundamental matrices under such conditions.
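For the 2D case, the spatial matching is a homography estimated from corresponding points, which the trajectories ultimately supply. Below is a minimal direct linear transform (DLT) sketch that recovers a homography from five synthetic correspondences; real implementations additionally normalise the points and, as in the paper, enforce consistency along whole space-time trajectories.

    import numpy as np

    def homography_dlt(src, dst):
        """Direct linear transform: homography H with dst ~ H @ src,
        estimated from >= 4 point correspondences (each row is (x, y))."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = vt[-1].reshape(3, 3)       # null vector = smallest singular direction
        return H / H[2, 2]

    # points on a synthetic trajectory and their images under a known homography
    H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
    src = np.array([[10, 20], [200, 40], [150, 300], [30, 250], [120, 120]], float)
    dst_h = (H_true @ np.c_[src, np.ones(len(src))].T).T
    dst = dst_h[:, :2] / dst_h[:, 2:]
    print(np.round(homography_dlt(src, dst), 3))   # recovers H_true up to scale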

19.
Given a tree T with n edges and a set W of n weights, we deal with labelings of the edges of T with weights from W, optimizing certain objective functions. For some of these functions the optimization problem is shown to be NP-complete (e.g., finding a labeling with minimal diameter), and for others we find polynomial-time algorithms (e.g., finding a labeling with minimal average distance).
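For the average-distance objective, one natural polynomial-time strategy follows from the fact that an edge whose removal splits the tree into components of sizes s and n-s lies on s(n-s) of the vertex-pair paths, so the total (and hence average) distance is minimised by giving the smallest weights to the most-used edges (rearrangement inequality). The sketch below implements that strategy on a small made-up tree; it illustrates this style of labeling argument and is not claimed to be the paper's exact algorithm.

    def min_average_distance_labeling(adj, weights):
        """Assign weights to tree edges to minimise the average pairwise distance.

        Each edge contributes w(e) * s * (n - s) to the sum of all pairwise
        distances, where s and n - s are the component sizes after removing it,
        so the largest counts are paired with the smallest weights."""
        n = len(adj)
        parent, order, stack = {0: None}, [], [0]      # DFS from an arbitrary root
        while stack:
            u = stack.pop()
            order.append(u)
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    stack.append(v)
        size = {u: 1 for u in range(n)}                # subtree sizes, leaves upward
        for u in reversed(order):
            if parent[u] is not None:
                size[parent[u]] += size[u]
        edges = [(u, parent[u]) for u in range(n) if parent[u] is not None]
        counts = {e: size[e[0]] * (n - size[e[0]]) for e in edges}
        ranked = sorted(edges, key=lambda e: counts[e], reverse=True)
        labeling = dict(zip(ranked, sorted(weights)))
        total = sum(counts[e] * w for e, w in labeling.items())
        return labeling, total / (n * (n - 1) / 2)     # labeling and average distance

    # a small hypothetical tree (star joined to a path) and a weight multiset
    adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
    print(min_average_distance_labeling(adj, [5, 1, 4, 2]))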

20.
Various methods have been proposed to detect and match special interest points (keypoints) in images, and some of them (e.g., SIFT and SURF) are among the most cited techniques in computer vision research. This paper describes an algorithm to discriminate between genuine and spurious keypoint correspondences on planar surfaces. We draw random samples of the set of correspondences, from which homographies are obtained and their principal eigenvectors extracted. Density estimation in that feature space determines the most likely true transform, and the selected homography feeds a cost function that gives the goodness of each keypoint correspondence. Although similar to the well-known RANSAC strategy, the key finding is that the principal eigenvectors of most (genuine) homographies tend to represent a similar direction. Hence, density estimation in the eigenspace dramatically reduces the number of transforms that actually have to be evaluated to obtain reliable estimations. Our experiments were performed on hard image data sets and showed that the proposed approach is about as effective as the RANSAC strategy at a significantly lower computational burden, measured as the proportion between the number of homographies generated and the number actually evaluated.
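The filtering idea can be sketched directly: extract the principal eigenvector of each sampled homography, estimate the density of those directions, and keep only the candidates near the mode for full evaluation. The sketch below generates synthetic candidate homographies (perturbations of a "true" transform plus random spurious ones) and uses a crude Gaussian-kernel density on cosine similarities; the bandwidth, keep fraction and data are all invented for illustration and do not reproduce the paper's estimator.

    import numpy as np

    def principal_eigvec(H):
        """Unit principal eigenvector (largest |eigenvalue|) of a 3x3 homography."""
        vals, vecs = np.linalg.eig(H)
        v = np.real(vecs[:, np.argmax(np.abs(vals))])
        return v / np.linalg.norm(v)

    def densest_candidates(homographies, keep=0.1, bandwidth=0.05):
        """Rank candidate homographies by how typical their principal-eigenvector
        direction is, and keep only the top fraction for full evaluation."""
        V = np.array([principal_eigvec(H) for H in homographies])
        sim = np.abs(V @ V.T)                       # |cosine| between directions
        density = np.exp((sim - 1.0) / bandwidth).sum(axis=1)
        n_keep = max(1, int(keep * len(homographies)))
        return np.argsort(density)[::-1][:n_keep]

    # synthetic candidates: 80 perturbations of a "true" transform + 20 spurious ones
    rng = np.random.default_rng(5)
    H_true = np.array([[1.0, 0.1, 10.0], [0.05, 1.1, -5.0], [1e-4, 1e-4, 1.0]])
    candidates = [H_true + 0.01 * rng.standard_normal((3, 3)) for _ in range(80)]
    candidates += [rng.standard_normal((3, 3)) for _ in range(20)]
    print(sorted(densest_candidates(candidates)))   # expected: indices below 80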
