Similar Documents
1.
Regarding computer vision as optimal decision making under uncertainty, a new optimization paradigm is introduced: maximizing the product of the likelihood function and the posterior distribution on scene hypotheses, given the results of feature extraction. Essentially, this approach is a Bayesian formulation of hypothesis generation and verification. The approach is illustrated for model-based object recognition in range imagery, showing how segmentation results can be optimally incorporated into model matching. Several new match criteria for model-based object recognition in range imagery are deduced from the theory. The text was submitted by the author in English. Walter Armbruster (born 1948 in Linz, Austria) graduated with a degree in mathematics from the University of Heidelberg in 1975, where he also received his Ph.D. (Dr. rer. nat.) in 1980. Subsequently, he worked in the Department of Mathematics, publishing research in several journals, including Econometrica. Since 1985 he has been a research scientist at the FOM, where he presently heads the project group “target recognition with laser radar.” A member of the NATO RTO, he has published several dozen articles in the fields of target tracking, helicopter obstacle avoidance, autonomous navigation, and 3D object recognition; some of these articles, however, are not publicly distributed.

2.
This paper reports on conceptual developments in applying neural networks to data mining and knowledge discovery. Hypothesis generation is one of the significant ways in which data mining differs from statistical analysis. Nonlinear pattern hypothesis generation is a major task of data mining and knowledge discovery, yet few methods for it are available.

This paper proposes a model of data mining to support nonlinear pattern hypothesis generation. The model integrates a linear regression model, Kohonen's self-organizing maps, an algorithm for convex polytopes, and back-propagation neural networks.
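The staged pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual system: a k-means-style partition stands in for a full Kohonen self-organizing map, and per-region linear fits stand in for back-propagation networks; all names are illustrative, and inputs are assumed one-dimensional.

```python
import numpy as np

def mine_nonlinear_pattern(X, y, n_regions=2, seed=0):
    """Sketch: try a global linear fit first; if residuals stay large,
    partition the input space (a crude stand-in for a Kohonen SOM) and
    fit a local linear model per region (stand-in for local networks)."""
    rng = np.random.default_rng(seed)
    Xb = np.c_[np.ones(len(X)), X]                  # add intercept column
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)   # global linear fit
    resid = y - Xb @ beta
    if np.std(resid) < 1e-6 * (np.std(y) + 1e-12):
        return [("global", beta)]                   # linear pattern suffices
    # Partition the 1-D input with a few k-means-style update rounds.
    centers = X[rng.choice(len(X), n_regions, replace=False)]
    for _ in range(10):
        labels = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
        for k in range(n_regions):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean()
    # One local linear model per region captures the nonlinear pattern.
    models = []
    for k in range(n_regions):
        mask = labels == k
        Xk = np.c_[np.ones(mask.sum()), X[mask]]
        bk, *_ = np.linalg.lstsq(Xk, y[mask], rcond=None)
        models.append((centers[k], bk))
    return models
```

On piecewise-linear data such as y = |x|, the global fit fails but the two local models recover the two slopes.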


3.
Research on Neural Networks and Nonlinear-Pattern Data Mining
邓乾罡  孟波 《计算机工程与设计》2004,25(10):1667-1668,1694
This paper discusses theoretical advances in applying artificial-intelligence techniques to data mining. Rule extraction from nonlinear patterns is a major task in data mining, yet few effective methods currently exist. The paper focuses on a model dedicated to mining nonlinear-pattern data and presents a brief algorithm together with an example.

4.
Random hypothesis generation is integral to many robust geometric model fitting techniques. Unfortunately, it is also computationally expensive, especially for higher order geometric models and heavily contaminated data. We propose a fundamentally new approach to accelerate hypothesis sampling by guiding it with information derived from residual sorting. We show that residual sorting innately encodes the probability of two points having arisen from the same model, and is obtained without recourse to domain knowledge (e.g., keypoint matching scores) typically used in previous sampling enhancement methods. More crucially, our approach encourages sampling within coherent structures and thus can very rapidly generate all-inlier minimal subsets that maximize the robust criterion. Sampling within coherent structures also affords a natural ability to handle multistructure data, a condition that is usually detrimental to other methods. The result is a sampling scheme that offers substantial speed-ups on common computer vision tasks such as homography and fundamental matrix estimation. On many computer vision datasets, especially those with multiple structures, we show that ours is the only method capable of retrieving satisfactory results within realistic time budgets.
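The core idea of residual-sorting guidance can be sketched on a toy 2-D line-fitting problem. This is a hedged simplification, not the authors' algorithm (which targets homographies and fundamental matrices): each point's top-ranked hypotheses form a preference set, and sampling mates with large preference overlap favors all-inlier pairs. Function names and thresholds are illustrative.

```python
import numpy as np

def guided_line_sampling(pts, n_init=50, top_k=10, n_guided=30, seed=0):
    """Sketch of residual-sorting guidance for 2-D line fitting: points
    whose residual rankings over an initial hypothesis set agree are
    likely to lie on the same structure, so sample pairs accordingly."""
    rng = np.random.default_rng(seed)
    n = len(pts)
    # 1. Random initial hypotheses: lines through random point pairs.
    hyps = []
    for _ in range(n_init):
        i, j = rng.choice(n, 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2   # ax + by + c = 0
        norm = np.hypot(a, b) + 1e-12
        hyps.append((a / norm, b / norm, c / norm))
    H = np.array(hyps)
    # 2. Residual matrix and per-point preference sets (top_k hypotheses).
    res = np.abs(pts @ H[:, :2].T + H[:, 2])            # shape (n, n_init)
    pref_sets = [set(p) for p in np.argsort(res, axis=1)[:, :top_k]]
    # 3. Guided sampling: pick a seed point, then a mate with probability
    #    proportional to preference overlap (shared top-ranked hypotheses).
    best, best_inliers = None, -1
    for _ in range(n_guided):
        i = int(rng.integers(n))
        w = np.array([len(pref_sets[i] & pref_sets[j]) if j != i else 0.0
                      for j in range(n)], dtype=float)
        if w.sum() == 0:
            continue
        j = int(rng.choice(n, p=w / w.sum()))
        (x1, y1), (x2, y2) = pts[i], pts[j]
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = np.hypot(a, b) + 1e-12
        d = np.abs(pts @ np.array([a, b]) / norm + c / norm)
        inliers = int((d < 0.05).sum())
        if inliers > best_inliers:
            best, best_inliers = (a / norm, b / norm, c / norm), inliers
    return best, best_inliers
```

On data that is half inliers on a line and half uniform outliers, the preference-overlap weighting concentrates sampling on the coherent structure, so an all-inlier pair is found in few trials.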

5.
The main principles of the JSM method for automatic hypothesis generation are considered, along with the problems of its development. A formalization of J.S. Mill's joint method of agreement and difference is proposed, and the concept of a JSM strategy is defined. The article also considers two possible directions for the development of artificial intelligence and its connection to cognitive research.

6.
Efficient hypothesis generation plays an important role in robust model fitting. In this study, based on the combination of residual sorting and local constraints, we propose an efficient guided hypothesis generation method, called Rapid Hypothesis Generation (RHG). By exploiting the local constraints to guide the hypothesis generation process, RHG raises the probability of generating promising hypotheses and reduces the computational cost of hypothesis generation. Experimental results on homography and fundamental matrix estimation show that RHG can effectively guide the hypothesis generation process and rapidly generate promising hypotheses for heavily contaminated multi-structure data.

7.
8.
In an effort to make object recognition efficient and accurate enough for real applications, we have developed three probabilistic techniques (sensor modeling, probabilistic hypothesis generation, and robust localization) that form the basis of a promising paradigm for object recognition. Our techniques effectively exploit prior knowledge to reduce the number of hypotheses that must be tested during recognition. Our recognition approach utilizes statistical constraints on the matches between image and model features. These statistical constraints are computed using a model of the entire sensing process, resulting in more realistic and tighter constraints on matches. The candidate hypotheses are pruned by probabilistic constraint satisfaction to select likely matches based on the image evidence and prior statistical constraints. The resulting hypotheses are ordered most-likely first for verification, thus minimizing unnecessary verifications. The reliability of the verification decision is significantly increased by the use of a robust localization algorithm.
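The most-likely-first ordering described above can be sketched as follows. This is a hedged, minimal illustration assuming an isotropic Gaussian sensor-noise model and uniform priors by default; the function name, feature representation, and scoring are illustrative, not the authors' implementation.

```python
import numpy as np

def rank_hypotheses(model_feats, image_feats, sigma=1.0, priors=None):
    """Sketch of most-likely-first hypothesis ordering: score each
    candidate model/image feature pairing by a Gaussian sensor-noise
    likelihood times a prior, then sort so verification tries the most
    probable matches first."""
    m, n = len(model_feats), len(image_feats)
    priors = np.full((m, n), 1.0 / n) if priors is None else priors
    scores = []
    for i, mf in enumerate(model_feats):
        for j, imf in enumerate(image_feats):
            d2 = np.sum((np.asarray(mf) - np.asarray(imf)) ** 2)
            lik = np.exp(-d2 / (2 * sigma ** 2))   # sensor-noise likelihood
            scores.append(((i, j), lik * priors[i, j]))
    scores.sort(key=lambda s: -s[1])               # most likely first
    return scores
```

A verifier would then walk this list from the top and stop at the first hypothesis that passes, so unlikely matches are rarely examined.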

9.
10.
At the main cryptography conference, CRYPTO, in 1989, Quisquater and colleagues published a paper showing how to explain the complex notion of zero-knowledge proof in a way simple enough for children to understand. In the same line of work, this article presents simple and intuitive explanations of various modern security concepts and technologies, including symmetric encryption, public-key encryption, homomorphic encryption, intruder models (CPA, CCA1, CCA2), and security properties (OW, IND, NM). The explanations given in this article may also help demystify such complex security notions for non-expert adults.

11.
In this article, I stress the importance of focusing on sexualities in critical analyses of computer technologies. Using the example of late nineteenth and early twentieth century vibrators, I demonstrate that by studying historically remote, predigital technologies, students can develop the language and analytical skills needed to interrogate the mutual construction of sexualities and computer technologies. Furthermore, I argue that examining the intersections of sexualities and computer technologies is especially important in networked computer classrooms where students’ sexual identities and concepts of sexuality not only shape interactions with peers and with technologies but can determine the quality of the educational experience for all.

12.
Due to its advantages such as ubiquity and immediacy, mobile banking has attracted traditional banks’ interest. However, a survey report showed that user adoption of mobile banking was much lower than that of other mobile services. The extant research focuses on explaining user adoption from technology perceptions such as perceived usefulness, perceived ease of use, interactivity, and relative advantage. However, users’ adoption is determined not only by their perception of the technology but also by the task technology fit. In other words, even though a technology may be perceived as being advanced, if it does not fit users’ task requirements, they may not adopt it. By integrating the task technology fit (TTF) model and the unified theory of acceptance and use of technology (UTAUT), this research proposes a mobile banking user adoption model. We found that performance expectancy, task technology fit, social influence, and facilitating conditions have significant effects on user adoption. In addition, we also found a significant effect of task technology fit on performance expectancy.

13.
14.
Using paths to measure, explain, and enhance program behavior
Ball, T.; Larus, J. R. Computer, 2000, 33(7): 57-65
What happens when a computer program runs? The answer can be frustratingly elusive, as anyone who has debugged or tuned a program knows. As it runs, a program overwrites its previous state, which might have provided a clue as to how the program got to the point at which it computed the wrong answer or otherwise failed. This all-too-common experience is symptomatic of a more general problem: the difficulty of accurately and efficiently capturing and analyzing the sequence of events that occurs when a program executes. Program paths offer an insight into a program's dynamic behavior that is difficult to achieve any other way. Unlike simpler measures such as program profiles, which aggregate information to reduce the cost of collecting or storing data, paths capture some of the usually invisible dynamic sequencing of statements. The article exploits the insight that program statements do not execute in isolation but are typically correlated with the behavior of previously executed code.
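The distinction between aggregate profiles and paths can be sketched with a toy profiler. This is a hedged illustration of the idea only: real Ball-Larus path profiling assigns compact integer path IDs via edge increments in the control-flow graph, whereas this sketch simply records branch outcomes per execution; all names are illustrative.

```python
from collections import Counter

def profile_paths(fn, inputs):
    """Sketch of intra-procedural path profiling: the instrumented
    function reports the branch decisions it took as a tuple, and the
    profiler counts how often each distinct path executes, preserving
    the sequencing that flat edge profiles would aggregate away."""
    counts = Counter()
    for x in inputs:
        _, path = fn(x)
        counts[path] += 1
    return counts

def classify(x):
    """Toy instrumented function: two branches, hence up to four paths."""
    path = []
    if x < 0:
        path.append("neg"); y = -x
    else:
        path.append("nonneg"); y = x
    if y > 10:
        path.append("big"); y = 10
    else:
        path.append("small")
    return y, tuple(path)
```

An edge profile would only report that each branch went each way twice; the path counts reveal which combinations of branches actually co-occurred.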

15.
Ergonomics, 2012, 55(2): 292-302

The objectives of the study were threefold: (1) to develop factor-score-based models to predict maximum mass on a box-lifting task using multiple regression; (2) to compare the predictive and explanatory powers of factor-score-based models to models derived from data-level variables; and (3) to apply these findings to ergonomic research and practical problem-solving situations. Forty-eight volunteers (25 women and 23 men) completed a maximal box-lifting task and a maximal isoinertial lifting test on an Incremental Lifting Machine (ILM). Dynamic data collected during isoinertial testing were summarized into 32 lift parameters and then subjected to principal components analyses using the ‘FACTOR PROCEDURE’ from the Statistical Analysis System (SAS). Factor scores were calculated for each participant on each of the four factors comprising the final solution, and multiple regression equations for men, women, and combined data were generated using the ‘GENERAL LINEAR MODELS’ procedure from SAS. Results revealed that prediction of box-lifting performance was optimized when regression equations were developed using numerous data-level variables as predictors, i.e., all 32 lift parameters and ILM mass. In comparison, explanation was enhanced but predictive capability was reduced when linear models were formed using ILM mass and the factor scores derived from analyses of isoinertial lifting. The use of variables loading on the factors gave slightly greater predictive power than the factor-score-based models did. Similar trends in predictive and explanatory powers appeared when the data were analysed by gender. Ergonomic applications of factor-score-based models were discussed with regard to ongoing research as well as practical problem-solving situations. It was concluded that the advantages and usefulness of factor-score-based models warranted their inclusion in future investigations of lifting performance.
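The two modelling routes compared in the study can be sketched on synthetic data. This is a hedged illustration only: PCA via SVD stands in for the SAS FACTOR procedure, ordinary least squares stands in for the GENERAL LINEAR MODELS procedure, and the data are synthetic rather than the study's lift parameters.

```python
import numpy as np

def factor_score_regression(X, y, n_factors=2):
    """Sketch of the comparison: regress the criterion on PCA-style
    factor scores versus on the raw (standardized) data-level variables,
    and report R^2 for each route."""
    Xc = X - X.mean(axis=0)
    Xs = Xc / (Xc.std(axis=0) + 1e-12)             # standardize variables
    # Principal components via SVD; scores = projections on top factors.
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = Xs @ Vt[:n_factors].T
    def r2(design):
        D = np.c_[np.ones(len(design)), design]    # add intercept
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        resid = y - D @ beta
        return 1 - resid.var() / y.var()
    return r2(scores), r2(Xs)   # (factor-score model, data-level model)
```

Because the factor scores span only a subspace of the data-level variables, the data-level model's R^2 can never be lower, which mirrors the study's finding that prediction favored data-level predictors.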

16.
This empirical work aims to shed some light on the governance choice for information technology (IT) outsourcing decisions. By combining transaction-cost and resource-based arguments, we explain the role that certain economic and strategic factors, as well as their relationships, may play. Hypotheses are tested for the implementation of an HR software application with primary data collected from large Spanish firms. Findings appear to provide more support for resource-based arguments than for transaction-cost propositions. Thus, our results suggest that cumulative knowledge, whether from coordination and interaction between internal units or from experience in IT outsourcing, is not a significant factor unless the organization is able to develop a strategic capability. Unlike technology specificity, behavioral uncertainty and strategic contribution showed no significant effects.

17.
This paper presents the results of computer experiments on fact bases of different subject fields, viz., pharmacology and medical diagnostics. The procedures of the JSM method for automatic hypothesis generation, including the simple similarity method, prohibition of (±)-contrary instances, singularity of (+) causes, difference method, joint similarity-difference method, and the method of residues, are applied to the data. The comparative analysis of different strategies is carried out. The paper also demonstrates an important cause-effect dependence that was revealed using oncological data, viz., the relationship between the S100 protein and the lifespan of patients with melanoma.

18.
19.
The paper is concerned with the problem of function approximation from a finite training sample. The generalization ability of an approximation method is characterized by the probability of a large deviation of the test-sample error from the training-sample error. We obtain upper bounds on this probability based on combinatorial inclusion-exclusion techniques and metric properties of the set A of binary error vectors induced by a given approximating family of functions on a finite population. We introduce the notion of a connectivity-splitting profile of A; accounting for a connectivity degree q in the generalization bounds reduces the bound by a factor exponential in q.
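The deviation probability described above can be written, in generic form, as follows. This is a hedged reconstruction of the standard quantity that such combinatorial bounds control; the notation is illustrative and the paper's own refined bound is not reproduced here.

```latex
% Generic uniform large-deviation functional: the probability, over a
% random split of a finite population into a training sample X and a
% test sample \bar{X}, that some function a in the family A has a test
% error exceeding its training error by at least \varepsilon:
Q_\varepsilon(A) \;=\; \Pr\bigl[\,\exists\, a \in A :\;
    \nu(a,\bar X) - \nu(a,X) \ge \varepsilon \,\bigr].
```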

20.
The TUBA system consists of a set of integrated tools for the generation of business-oriented applications. Tools and applications have a modular structure, represented by class objects. The article describes the architecture of the environments for file processing, screen handling and report writing.
