Similar Documents
20 similar documents found (search time: 343 ms)
1.
2.
We compare different empirical learning methods regarding their strategy for reducing the subset of important examples in a measurement space. In this way a uniform view of AI and statistical methods alike is presented. Theoretically, they are all based on the Bayes classifier. They all construct a classification rule which partitions the measurement space into target sets. Advantages and drawbacks of different methods are highlighted by simple examples. Error analyses enable deeper understanding of the learning and classification process in real-world domains, characterized by incomplete and noisy data. A good error estimate is based on the balance between bias and variance. A derivation of the Laplacean error estimate is presented. We present new mechanisms that use redundant knowledge in explicit form. One such system, GINESYS (Generic INductive Expert SYstem Shell), is briefly presented. Heuristic reasoning and empirical results indicate that a proper use of redundant knowledge significantly increases classification accuracy. Over 10 basic AI and statistical systems were tested on two oncological domains. Results show that older AI methods provide usable information regarding the structure of data, but, on the other hand, their classification accuracy is often lower than that of the statistical methods. Standard statistical systems often achieve good classification accuracy, but are more or less non-transparent to users. Some new AI systems construct robust redundant knowledge, provide explanations in a humanly understandable way, and outperform the classification accuracy of standard statistical methods.
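For reference, the Laplacean error estimate mentioned above is commonly written as follows (a hedged sketch of the usual k-class form; the paper's exact variant may differ). For a rule or leaf covering \(n\) training examples, of which \(n_c\) belong to the predicted (majority) class, with \(k\) classes:

\[
\hat{e}_{\mathrm{Laplace}} \;=\; 1 - \frac{n_c + 1}{n + k} \;=\; \frac{n - n_c + k - 1}{n + k},
\]

which shrinks the raw error estimate \((n - n_c)/n\) toward the uniform prior \(1 - 1/k\), trading a little bias for reduced variance on small samples.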

3.
As a result of technological advancements and telecommunication innovations, the nature of work performed by all classes of workers has undergone drastic changes. This is evidenced by the increased automation of long-run, short-cycled manual work and the increased application of word processors and microcomputers in the office. These changes demand new and more structured approaches to measuring and controlling work. One such approach is the application of a knowledge-based system to select the most appropriate work measurement technique for determining the expected or "standard" task completion time, depending on the nature of the task.

4.
Current statistical machine translation systems are mainly based on statistical word lexicons. However, these models are usually context-independent; therefore, the disambiguation of the translation of a source word must be carried out using other probabilistic distributions (distortion distributions and statistical language models). One efficient way to add contextual information to the statistical lexicons is based on maximum entropy modeling. In that framework, the context is introduced through feature functions that allow us to automatically learn context-dependent lexicon models. In a first approach, maximum entropy modeling is carried out after a process of learning standard statistical models (alignment and lexicon). In a second approach, the maximum entropy modeling is integrated into the expectation-maximization process of learning standard statistical models. Experimental results were obtained for two well-known tasks, the French–English Canadian Parliament Hansards task and the German–English Verbmobil task. These results proved that the use of maximum entropy models in both approaches can help to improve the performance of the statistical translation systems. This work has been partially supported by the European Union under grant IST-2001-32091 and by the Spanish CICYT under project TIC-2003-08681-C02-02. The experiments on the Verbmobil task were done when the first author was a visiting scientist at RWTH Aachen, Germany. Editors: Dan Roth and Pascale Fung
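As a reference point (not the paper's exact parameterization), a maximum entropy lexicon model of this kind is typically a log-linear distribution over target words \(e\) given a source word \(f\) and its context \(C\), with feature functions \(h_i\) and weights \(\lambda_i\):

\[
p_{\lambda}(e \mid f, C) \;=\; \frac{\exp\!\big(\sum_i \lambda_i\, h_i(e, f, C)\big)}{\sum_{e'} \exp\!\big(\sum_i \lambda_i\, h_i(e', f, C)\big)},
\]

so that context-dependent disambiguation is learned by fitting the \(\lambda_i\) rather than by relying only on distortion and language models.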

5.

The analysis of scatter from rough surfaces has been of interest to researchers for many years. The comparison between theory and measurement has not always produced results that instill confidence in either the theory or the measurements. One would like to be able to construct the required surfaces so as to have control of the target as well as the measurements. There has been some work in the past to construct target surfaces; however, the statistics of the surface could only be determined after the fact. This paper presents some results of work to generate physical surfaces from known (i.e. desired) surface statistical properties. This study extends previous work on the generation of random surfaces for use in computer simulation approaches. The known statistical surface is extended using a bicubic spline technique, and these results are interfaced to a numerically controlled machine to generate the physical surface. A portion of a complete surface with Gaussian statistics was constructed and tested to measure conformity to the desired statistics.
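A numerical surface with prescribed Gaussian statistics is often synthesized by spectral filtering of white noise; the sketch below illustrates that common approach. It is not the paper's bicubic-spline and NC-machining procedure, and the Gaussian correlation model, grid size, and parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_rough_surface(n=256, dx=0.1, rms_height=1.0, corr_length=2.0, seed=0):
    """Synthesize an n x n surface with Gaussian height statistics and a
    Gaussian correlation function by spectral filtering of white noise."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    ky = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    # Gaussian correlation function -> Gaussian power spectral density
    psd = np.exp(-(KX**2 + KY**2) * corr_length**2 / 4.0)
    surface = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)))
    surface *= rms_height / surface.std()   # enforce the desired RMS height
    return surface

z = gaussian_rough_surface()
print(z.std())  # ~1.0 (the requested RMS height)
```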

6.
Six-sigma (i.e., 6 standard deviations) is a parameter that is used in statistical models of the quality of manufactured goods (including computer hardware). It also serves as a slogan that suggests high quality. Some attempts have been made in the past to apply 6-sigma to software quality measurement. Software engineers often look to hardware analogies to suggest techniques that are useful in building, maintaining or evaluating software systems. The author explains why the 6-sigma approach to hardware quality simply does not work when applied to software quality.
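For concreteness, the conventional hardware interpretation of six-sigma (with the customary 1.5-sigma long-term mean shift) corresponds to roughly 3.4 defects per million opportunities; the sketch below reproduces that arithmetic. The 1.5-sigma shift and the one-sided tail are conventions of the manufacturing usage, not something taken from this paper.

```python
from scipy.stats import norm

def defects_per_million(sigma_level, mean_shift=1.5):
    """One-sided normal tail probability beyond the spec limit, per million
    opportunities, using the conventional 1.5-sigma long-term mean shift."""
    return norm.sf(sigma_level - mean_shift) * 1e6

for k in (3, 4.5, 6):
    print(f"{k}-sigma: {defects_per_million(k):.2f} DPMO")
# 3-sigma: ~66807, 4.5-sigma: ~1350, 6-sigma: ~3.4 defects per million opportunities
```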

7.
Applications in evolutionary programming have suggested the use of further stable probability distributions, such as Cauchy and Lévy, in the random process associated with the mutations, as an alternative to the traditional, also stable, normal distribution. This work builds on the encouraging results obtained with these distributions by extending them in a self-adaptive way, with algorithms that are in tune with the standard lineage of evolutionary programming. Evaluations relying upon standard analytical benchmark functions, and comparative performance tests between them, were carried out with respect to the baseline defined by the standard evolutionary programming algorithm, which relies on the normal distribution. Additional comparative studies were made with respect to various self-adaptive approaches, also proposed herein, and a method drawn from the literature. The results show numerical and statistical superiority of the more general stable-distribution-based approach when compared with the baseline, and are inconclusive with regard to the method drawn from the literature, possibly due to distinct implementation details.
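The mutation scheme at the core of such algorithms can be sketched as below: log-normal self-adaptation of per-dimension step sizes followed by a Cauchy-distributed perturbation. This is a generic sketch under common EP conventions; the paper's stable-distribution variants and parameter settings will differ.

```python
import numpy as np

def self_adaptive_cauchy_mutation(x, eta, rng):
    """One EP-style mutation: log-normal self-adaptation of the step sizes eta,
    then a Cauchy-distributed perturbation of the object variables x."""
    n = len(x)
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    tau_prime = 1.0 / np.sqrt(2.0 * n)
    eta_new = eta * np.exp(tau_prime * rng.standard_normal()
                           + tau * rng.standard_normal(n))
    x_new = x + eta_new * rng.standard_cauchy(n)
    return x_new, eta_new

rng = np.random.default_rng(0)
x, eta = np.zeros(10), np.ones(10)          # hypothetical 10-dimensional individual
x, eta = self_adaptive_cauchy_mutation(x, eta, rng)
```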

8.
Work productivity is typically associated with production standard times. Harder production standards generally result in higher work productivity. However, tasks become more repetitive under harder production standard times, and workers may be exposed to higher rates of acute responses, which lead to higher risks of contracting work-related musculoskeletal disorders (WMSDs). Hence, this paper investigates the relationship between work productivity and acute responses at different levels of production standard times. Twenty industrial workers performed repetitive tasks at three different levels of production standard time (PS), corresponding to "normal (PSN)", "hard (PSH)" and "very hard (PSVH)". Work productivity and muscle activity were recorded during these experimental tasks. The work productivity target was not attainable for the hard and very hard production standard times. This can be attributed to the manifestations of acute responses (muscle activity, muscle fatigue, and perceived muscle fatigue), which increase as the production standard time becomes harder. There is a strong correlation between muscle activity, perceived muscle fatigue and work productivity at different levels of production standard time, and the relationship among these variables is found to be significantly linear (R = 0.784, p < 0.01). The findings of this study are beneficial for assessing the existing work productivity of workers and serve as a reference for future work productivity planning in order to minimize the risk of contracting WMSDs.

9.
This paper presents a new method for distributed vision-aided cooperative localization and navigation for multiple inter-communicating autonomous vehicles based on three-view geometry constraints. Each vehicle is equipped with a standard inertial navigation system and an on-board camera only. In contrast to the traditional approach to cooperative localization, which is based on relative pose measurements, the proposed method formulates a measurement whenever the same scene is observed by different vehicles. Each such measurement comprises three images, which are not necessarily captured at the same time. The captured images, to which some navigation parameters are attached, are stored in repositories by some of the vehicles in the group. A graph-based approach is applied for calculating the correlation terms between the navigation parameters associated with images participating in the same measurement. The proposed method is examined using a statistical simulation and is further validated in an experiment that involved two vehicles in a holding pattern scenario. The experiments show that the cooperative three-view-based vision-aided navigation may considerably improve the performance of an inferior INS.

10.
The importance of reasonably accurate (within ±10%) indirect labor standards is recognized by the vast majority of practicing industrial engineers. Traditional work measurement methodologies, including stopwatch study, standard data, and fundamental motion data, can do the job, but these techniques are often not cost effective because of the time required to develop fair standards in advance of the work being done. Slotting methods, such as those used in the technique referred to as "Universal Indirect Labor Standards", allow the relatively rapid assignment of standards in a very short time. A method of developing Universal Indirect Labor Standards that will give satisfactory results is to slot a sample of benchmark jobs in the form of the gamma distribution. The value of each slot is computed by calculating the expected value (mean) of the gamma distribution that characterizes each slot. The computer is an effective tool to make these laborious computations and arrive at good universal standard values for each slot in the distribution characterized by the benchmark jobs.
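One way to compute such slot values is to take the conditional mean of the fitted gamma distribution within each slot's boundaries, which has a closed form via the identity x·f_k(x) = kθ·f_{k+1}(x). The sketch below does this; the shape, scale, and slot boundaries are hypothetical illustration values, not the paper's benchmark data.

```python
from scipy.stats import gamma

def slot_expected_values(shape, scale, boundaries):
    """Expected (mean) time within each slot [a, b) of a gamma distribution,
    i.e. E[X | a <= X < b], using x * f_shape(x) = shape * scale * f_{shape+1}(x)."""
    values = []
    for a, b in zip(boundaries[:-1], boundaries[1:]):
        prob = gamma.cdf(b, shape, scale=scale) - gamma.cdf(a, shape, scale=scale)
        partial_mean = shape * scale * (gamma.cdf(b, shape + 1, scale=scale)
                                        - gamma.cdf(a, shape + 1, scale=scale))
        values.append(partial_mean / prob)
    return values

# Hypothetical example: four slots over a gamma-distributed population of benchmark job times
print(slot_expected_values(shape=2.0, scale=1.5, boundaries=[0, 2, 4, 6, 12]))
```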

11.
To address the large number of outliers present in dense point cloud models produced by monocular-vision 3D reconstruction of small-scale scenes, an improved point cloud filtering algorithm is proposed. The Patch-based Multi-View Stereo (PMVS) dense reconstruction algorithm is combined with statistical analysis: the dense point cloud obtained with PMVS is analyzed statistically, a standard distance is set, the average distance from each point to all of its neighboring points is computed, and points whose average distance exceeds the standard distance are removed. Experimental results show that the combined point cloud filtering algorithm not only removes a large number of outliers but also eliminates redundant feature points to a certain extent while preserving the detailed features of the target object, improving both the fidelity and the accuracy of the reconstructed surface and providing a reliable model for subsequent measurement and assembly work.
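The filtering step described above is essentially statistical outlier removal; a minimal sketch follows, assuming a k-nearest-neighbor neighborhood and a standard distance of the form mean + std_ratio·std of the per-point average distances (the paper's exact neighborhood definition and threshold may differ).

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=20, std_ratio=1.0):
    """Remove points whose mean distance to their k nearest neighbors exceeds
    a 'standard distance' (global mean + std_ratio * global std of those means)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)     # first neighbor is the point itself
    mean_dists = dists[:, 1:].mean(axis=1)
    standard_distance = mean_dists.mean() + std_ratio * mean_dists.std()
    keep = mean_dists <= standard_distance
    return points[keep]

cloud = np.random.rand(1000, 3)                # stand-in for a PMVS dense point cloud
filtered = statistical_outlier_removal(cloud)
```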

12.
SAX (symbolic aggregate approximation) is a symbolic similarity measure for time series. When segmenting a time series, SAX uses the mean-based partitioning of the PAA algorithm, but segment means cannot effectively describe changes in the shape of a series; consequently, when the corresponding segment means of two series are similar, SAX cannot effectively distinguish the similarity between them. Building on SAX, an improved key-point-based SAX algorithm (KP_SAX) is proposed, whose similarity measure describes both the statistical regularities of a time series' numeric values and its shape changes. Experimental results show that, although KP_SAX somewhat increases the algorithm's complexity, it can effectively compute the similarity distances between series in cases where SAX cannot, achieving the intended improvement.
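For orientation, plain SAX works as sketched below: z-normalize the series, reduce it to segment means with PAA, and map each mean to a symbol via equiprobable Gaussian breakpoints. This is the standard algorithm only; the key-point extension of KP_SAX is not reproduced here, and the segment count and alphabet size are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sax_symbols(series, n_segments=8, alphabet_size=4):
    """Standard SAX: z-normalize, reduce with PAA (segment means), then map each
    mean to a symbol using equiprobable Gaussian breakpoints."""
    x = (series - series.mean()) / series.std()
    paa = x.reshape(n_segments, -1).mean(axis=1)          # assumes len divisible by n_segments
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return np.searchsorted(breakpoints, paa)               # integer symbols 0..alphabet_size-1

ts = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.random.randn(64)
print(sax_symbols(ts))
```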

13.
Two types of redundancy are contained in images: statistical redundancy and psychovisual redundancy. Image representation techniques for image coding should remove both redundancies in order to obtain good results. In order to establish an appropriate representation, the standard approach to transform coding only considers the statistical redundancy, whereas the psychovisual factors are introduced after the selection of the representation as a simple scalar weighting in the transform domain. In this work, we take the psychovisual factors into account in the definition of the representation together with the statistical factors, by means of the perceptual metric and the covariance matrix, respectively. In general the ellipsoids described by these matrices are not aligned. Therefore, the optimal basis for image representation should simultaneously diagonalize both matrices. This approach to the basis selection problem has several advantages in the particular application of image coding. As the transform domain is Euclidean (by definition), the quantizer design is highly simplified and, at the same time, the use of scalar quantizers is truly justified. The proposed representation is compared to covariance-based representations such as the DCT and the KLT or PCA using standard JPEG-like and Max-Lloyd quantizers.
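A basis that simultaneously diagonalizes a covariance matrix and a symmetric positive-definite perceptual metric can be obtained from the corresponding generalized eigenproblem, as sketched below. This is a generic linear-algebra sketch; the matrices here are random placeholders, not the perceptual metric or image statistics used in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def joint_diagonalizing_basis(cov, perceptual_metric):
    """Basis that simultaneously diagonalizes a covariance matrix and a
    symmetric positive-definite perceptual metric, via the generalized
    eigenproblem  cov @ v = lam * perceptual_metric @ v."""
    eigvals, basis = eigh(cov, perceptual_metric)
    # basis.T @ perceptual_metric @ basis == I,  basis.T @ cov @ basis == diag(eigvals)
    return eigvals, basis

# Hypothetical 4x4 example with random symmetric positive-definite matrices
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
cov = A @ A.T + np.eye(4)
metric = B @ B.T + np.eye(4)
eigvals, basis = joint_diagonalizing_basis(cov, metric)
print(np.round(basis.T @ metric @ basis, 6))   # identity
print(np.round(basis.T @ cov @ basis, 6))      # diagonal
```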

14.
The completion of this project will move doctoral admissions online, greatly facilitating candidates' registration, fee payment, information entry, admission ticket printing, examination room scheduling, score queries, and admission notification. It will save the university's doctoral admissions office a great deal of time, effort, and cost in tasks such as compiling statistics on applications, assigning and printing admission ticket numbers, and arranging examination rooms, and it will help advance the university's digital campus and contribute to its "211 Project" construction evaluation.

15.
Based on a simple and effective calculation method, two free-space measurement setups are employed to investigate the dielectric properties of various materials at terahertz (THz) frequencies. One setup involves THz time-domain spectroscopy (THz-TDS) at a frequency range of 0.4 to 1 THz. The other setup comprises a vector network analyzer (VNA) with pairs of VNA extenders (VNAXs) and diagonal standard gain horns (SGHs) at a frequency range of 0.22 to 1.1 THz. The calculation method is verified for the THz-TDS system and employed in the VNA system for the first time. Dielectric properties, including refractive indices, power absorption coefficients, relative permittivities, and loss tangents, are calculated from measured transmission data. Several materials, including printed circuit boards and 3D printing materials, are characterized to verify the calculation method and compare the measurement setups.

16.
OBJECTIVES: Biometrical comparison procedures for cardiac imaging methods with continuous outcome are reviewed, concentrating mainly on the assessment and design of adequate comparisons of accuracy and precision. Univariate graphical and numerical representation of the corresponding deviations is outlined to derive a 'check list' of the minimum information necessary to compare the measurement methods. DATA: The methods reviewed here are illustrated by the comparison of standard 2DE bidimensional cardiac volumetry versus assessment using TDE colour imaging in 28 normal probands. SOURCES: The paired t-test and the corresponding confidence interval approach are used to assess deviations in location between the two imaging methods; the test procedures of Maloney and Rastogi, Hahn and Nelson, and Grubbs are surveyed as proposals for the comparison of precisions in paired data. The Krippendorff coefficient and the Bradley/Blackwood test are illustrated as surrogate measures of method concordance. CONCLUSIONS: Since these methods can be performed by simple modification of standard options available in most statistics software packages, this review intends to enable cardiologists to choose appropriate methods for statistical data analysis and representation on their own.
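A minimal sketch of such a paired comparison is given below: a paired t-test with confidence interval for the location difference, and the correlation between sums and differences as a Pitman/Morgan-style check of equal precision, used here only as a stand-in for the cited procedures. The data are simulated placeholders (28 hypothetical probands), not the study's measurements.

```python
import numpy as np
from scipy import stats

def compare_methods(x, y, alpha=0.05):
    """Paired comparison of two measurement methods on the same subjects:
    - paired t-test and confidence interval for a systematic difference in location;
    - correlation between sums and differences as a check of equal precision."""
    d = x - y
    t, p_loc = stats.ttest_rel(x, y)
    se = d.std(ddof=1) / np.sqrt(len(d))
    ci = stats.t.interval(1 - alpha, len(d) - 1, loc=d.mean(), scale=se)
    r, p_prec = stats.pearsonr(x + y, x - y)
    return {"mean_diff": d.mean(), "ci": ci, "p_location": p_loc,
            "r_sum_diff": r, "p_precision": p_prec}

rng = np.random.default_rng(1)
truth = rng.normal(100, 15, 28)            # hypothetical volumes for 28 probands
m1 = truth + rng.normal(0, 4, 28)          # method A
m2 = truth + rng.normal(1, 6, 28)          # method B: small bias, lower precision
print(compare_methods(m1, m2))
```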

17.
This paper describes the design ideas and main techniques of a software quality measurement model and evaluation process model based on Chinese national standards (the JT-SQE model). The measurement model is a four-level tree structure, and the evaluation process consists of three steps: requirements definition, measurement, and scoring. The model provides a feasible evaluation framework for assessing software quality. The structure and functions of a software quality evaluation tool based on this model are also briefly introduced.

18.
In the present study, a biomedical application was developed to classify data belonging to normal and abnormal samples generated by Doppler ultrasound. This study consists of raw data acquisition and pre-processing, feature extraction, and classification steps. In the pre-processing step, a high-pass filter, white de-noising and normalization were used. During the feature extraction step, wavelet entropy was computed by means of the wavelet transform and the short-time Fourier transform. The obtained features were classified by a fuzzy discrete hidden Markov model (FDHMM). For this purpose, an FDHMM combining Sugeno and Choquet integrals with the λ fuzzy measure was defined to relax the statistical independence assumptions, increase performance, and provide better flexibility. Moreover, the Sugeno integral was used together with triangular norms that are mentioned frequently in the literature in order to increase performance further. Experimental results show that the recognition rate obtained by the Sugeno fuzzy integral with a triangular norm is higher than the recognition rates obtained by the standard discrete HMM (DHMM) and by the Choquet-integral-based FDHMM. In addition, it is shown in this study that the performance of the Sugeno-integral-based method is better than the performances of the artificial neural network (ANN) and HMM based classification systems that were used in previous studies by the authors.
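The wavelet entropy feature mentioned above is commonly computed as the Shannon entropy of the relative wavelet energy across decomposition levels; a minimal sketch follows. The wavelet family, decomposition level, and the random stand-in signal are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4", level=5):
    """Shannon entropy of the relative wavelet energy across decomposition levels."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))

x = np.random.randn(1024)      # stand-in for a pre-processed Doppler ultrasound frame
print(wavelet_entropy(x))
```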

19.
This work focuses on fast nearest neighbor (NN) search algorithms that can work in any metric space (not just the Euclidean distance) and where the distance computation is very time consuming. One of the most well known methods in this field is the AESA algorithm, used as a baseline for performance measurement for over twenty years. The AESA works in two steps that repeat: first it selects a promising NN candidate and computes its distance (approximation step); next it eliminates all the unsuitable NN candidates in view of the new information acquired in the previous calculation (elimination step). This work introduces the PiAESA algorithm. This algorithm improves the performance of the AESA algorithm by splitting the approximation criterion: in the first iterations, when there is not enough information to find good NN candidates, it uses a list of pivots (objects in the database) to obtain a cheap approximation of the distance function. Once a good approximation is obtained it switches to the usual AESA behavior. As the pivot list is built at preprocessing time, the run time of PiAESA is almost the same as that of AESA. In this work, we report experiments comparing PiAESA with some competing methods. Our empirical results show that this new approach obtains a significant reduction in distance computations with no execution time penalty.
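For reference, the AESA approximation/elimination loop can be sketched as below, using a precomputed pairwise distance matrix and triangle-inequality lower bounds; the PiAESA pivot-list phase is deliberately omitted, and the data, metric, and sizes are illustrative.

```python
import numpy as np

def aesa_nn(query, data, dist, pair_dist):
    """AESA-style NN search sketch: the precomputed pair_dist matrix and the
    triangle inequality give lower bounds |d(q,c) - d(c,i)| <= d(q,i) that are
    used both to pick the next candidate and to eliminate objects."""
    n = len(data)
    alive = set(range(n))
    lower = np.zeros(n)                             # lower bounds on d(query, i)
    best_i, best_d = None, np.inf
    while alive:
        c = min(alive, key=lambda i: lower[i])      # approximation step
        d_c = dist(query, data[c])                  # one real distance computation
        alive.discard(c)
        if d_c < best_d:
            best_i, best_d = c, d_c
        for i in list(alive):                       # elimination step
            lower[i] = max(lower[i], abs(d_c - pair_dist[c, i]))
            if lower[i] >= best_d:
                alive.discard(i)
    return best_i, best_d

data = np.random.rand(200, 8)
pair = np.linalg.norm(data[:, None] - data[None, :], axis=-1)
q = np.random.rand(8)
print(aesa_nn(q, data, lambda a, b: np.linalg.norm(a - b), pair))
```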

20.
The standard continuous-time state space model with stochastic disturbances contains the mathematical abstraction of continuous-time white noise. To work with well defined, discrete-time observations, it is necessary to sample the model with care. The basic issues are well known and have been discussed in the literature. However, the consequences have not quite penetrated the practice of estimation and identification. One example is that the standard model of an observation, being a snapshot of the current state plus noise independent of the state, cannot be reconciled with this picture. Another is that estimation and identification of continuous-time models require a more careful treatment of the sampling formulas. We discuss and illustrate these issues in the current contribution. An application of particular practical importance is the estimation of models based on irregularly sampled observations.
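As a reference, the standard sampling relations for a linear continuous-time state-space model read as follows (textbook form, with zero-order-hold input over a sampling interval T; the paper's notation and the irregular-sampling case may differ):

\[
\dot{x}(t) = A\,x(t) + B\,u(t) + w(t), \qquad \mathrm{E}\big[w(t)\,w(s)^{\top}\big] = Q\,\delta(t-s),
\]
\[
x_{k+1} = A_d\,x_k + B_d\,u_k + w_k, \qquad A_d = e^{AT}, \quad B_d = \int_0^{T} e^{A\tau} B \, d\tau, \quad \mathrm{E}\big[w_k w_k^{\top}\big] = Q_d = \int_0^{T} e^{A\tau} Q\, e^{A^{\top}\tau}\, d\tau .
\]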
