Similar Documents
20 similar documents found (search time: 437 ms)
1.
Bayesian models of human learning are becoming increasingly popular in cognitive science. We argue that their purported confirmation largely relies on a methodology whose premises are inconsistent with the claim that people are Bayesian about learning and inference. Bayesian models in cognitive science derive their appeal from the normative claim that the modeled inference is in some sense rational. Standard accounts of the rationality of Bayesian inference imply the prediction that an agent selects the option that maximizes the posterior expected utility. Experimental confirmation of the models, however, has been claimed on the basis of groups of agents that "probability match" the posterior. Probability matching constitutes support for the Bayesian claim only if additional unobvious and untested (but testable) assumptions are invoked. The alternative strategy of weakening the underlying notion of rationality no longer distinguishes the Bayesian model uniquely. A new account of rationality, whether for inference or for decision-making, is required to successfully confirm Bayesian models in cognitive science.
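The contrast at issue can be made concrete with a small sketch (illustrative only; the three-hypothesis posterior is invented): an agent that maximizes posterior expected utility picks the modal hypothesis on every trial, while a probability-matching population reproduces the posterior as choice frequencies.

    import numpy as np

    rng = np.random.default_rng(0)
    posterior = np.array([0.6, 0.3, 0.1])   # posterior over three hypotheses

    # Expected-utility maximizer (with 0/1 utility): always choose the mode.
    map_choice = np.argmax(posterior)        # -> 0, on every trial

    # Probability-matching population: each agent samples a hypothesis
    # with frequency proportional to its posterior probability.
    matched = rng.choice(len(posterior), size=1000, p=posterior)
    print(np.bincount(matched) / 1000)       # ~ [0.6, 0.3, 0.1]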

2.
The latest Deep Learning (DL) models for detection and classification have achieved unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience such as end-users or domain experts. In contrast, symbolic AI systems that convert concepts into rules or symbols (such as knowledge graphs) are easier to explain, but they offer lower generalization and scaling capabilities. A very important challenge is therefore to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without discarding domain expert knowledge. In this paper, we tackle this problem by assuming that the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric that assesses how well machine explanations align with those of human experts. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so that it serves as a sound basis for explainability. In particular, the X-NeSyL methodology involves two notions of explanation, at inference and at training time respectively: (1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional convolutional neural network that makes use of symbolic representations, and (2) SHAP-Backprop, an explainable-AI-informed training procedure that corrects and guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification and demonstrate that our approach can improve explainability and performance at the same time.
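The alignment idea can be sketched in a few lines (everything here is invented for illustration: the part names, the attribution values, and the penalty form; the paper's actual procedure is SHAP-Backprop): attribution mass that a prediction places on parts the knowledge graph does not support is turned into an extra loss term.

    # Per-part attribution scores for one prediction (e.g., SHAP-style values)
    # and the parts the expert knowledge graph links to the predicted class.
    attributions = {"arch": 0.7, "column": 0.2, "window": -0.4}
    expected_parts = {"arch", "column"}          # from the knowledge graph

    # Penalize attribution mass on unsupported parts; fed back during training.
    penalty = sum(abs(v) for k, v in attributions.items()
                  if k not in expected_parts)
    loss = 1.23 + 0.1 * penalty                  # task_loss + lambda * penalty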

3.
Achieving high performance for concurrent applications on modern multiprocessors remains challenging. Many programmers avoid locking to improve performance, while others replace locks with non-blocking synchronization to protect against deadlock, priority inversion, and convoying. In both cases, dynamic data structures that avoid locking require a memory reclamation scheme that reclaims elements once they are no longer in use. The performance of existing memory reclamation schemes has not been thoroughly evaluated. We conduct the first fair and comprehensive comparison of three recent schemes (quiescent-state-based reclamation, epoch-based reclamation, and hazard-pointer-based reclamation) using a flexible microbenchmark. Our results show that there is no globally optimal scheme. When evaluating lockless synchronization, programmers and algorithm designers should therefore carefully consider the data structure, the workload, and the execution environment, each of which can dramatically affect memory reclamation performance. We discuss the consequences of our results for programmers and algorithm designers. Finally, we describe the use of one scheme, quiescent-state-based reclamation, in the context of an OS kernel, an execution environment that is well suited to this scheme.
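As a rough illustration of one of the three schemes, here is a toy epoch-based reclaimer (Python is garbage-collected, so this only models the bookkeeping; the class and its API are invented): a node retired in epoch e may be freed once the global epoch has advanced twice past e, since no reader announced in an older epoch can still hold a reference.

    import threading

    class ToyEpochReclaimer:
        """Bookkeeping of epoch-based reclamation (illustrative only)."""
        def __init__(self):
            self.global_epoch = 0
            self.local_epoch = {}            # thread id -> announced epoch
            self.retired = []                # (retirement epoch, node)
            self.lock = threading.Lock()

        def enter_critical(self):
            with self.lock:
                self.local_epoch[threading.get_ident()] = self.global_epoch

        def retire(self, node):
            with self.lock:
                self.retired.append((self.global_epoch, node))

        def try_reclaim(self):
            with self.lock:
                # Advance only once every announced epoch has caught up.
                if all(e == self.global_epoch
                       for e in self.local_epoch.values()):
                    self.global_epoch += 1
                # Nodes retired two epochs ago can no longer be reachable.
                limit = self.global_epoch - 2
                freed = [n for e, n in self.retired if e <= limit]
                self.retired = [(e, n) for e, n in self.retired if e > limit]
                return freed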

4.
Region-Based Memory Management
This paper describes a memory management discipline for programs that perform dynamic memory allocation and de-allocation. At runtime, all values are put into regions. The store consists of a stack of regions. All points of region allocation and de-allocation are inferred automatically, using a type- and effect-based program analysis. The scheme does not assume the presence of a garbage collector. The scheme was first presented in 1994 (M. Tofte and J.-P. Talpin, in "Proceedings of the 21st ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages," pp. 188–201); subsequently, it has been tested in the ML Kit with Regions, a region-based, garbage-collection-free implementation of the Standard ML Core language, which includes recursive datatypes, higher-order functions, and updatable references (L. Birkedal, M. Tofte, and M. Vejlstrup, in "Proceedings of the 23rd ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages," 1996, pp. 171–183). This paper defines a region-based dynamic semantics for a skeletal programming language extracted from Standard ML. We present the inference system that specifies where regions can be allocated and de-allocated, and a detailed proof that the system is sound with respect to a standard semantics. We conclude by giving some advice on how to write programs that run well on a stack of regions, based on practical experience with the ML Kit.
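A minimal sketch of the runtime discipline (a Python stand-in for the letregion construct; the handle-based API is invented): values live in regions, regions live on a stack, and popping a region frees everything allocated in it at once, with no garbage collector involved.

    class RegionStack:
        """Toy region stack: deallocation is wholesale, per region."""
        def __init__(self):
            self.regions = []

        def push_region(self):               # 'letregion r in ...'
            self.regions.append([])
            return len(self.regions) - 1     # region handle

        def alloc(self, r, value):           # 'value at r'
            self.regions[r].append(value)
            return value

        def pop_region(self):                # '... end': free the whole region
            self.regions.pop()

    rs = RegionStack()
    r = rs.push_region()
    xs = rs.alloc(r, [1, 2, 3])
    rs.pop_region()                          # xs's storage goes with region r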

5.
The saturation algorithm for symbolic state-space exploration
We present various algorithms for generating the state space of an asynchronous system, based on the use of multiway decision diagrams to encode sets and of Kronecker operators on boolean matrices to encode the next-state function. The Kronecker encoding allows us to recognize and exploit the "locality of effect" that events may have on state variables. In turn, locality information suggests better iteration strategies aimed at minimizing peak memory consumption. In particular, we focus on the saturation strategy, which is completely different from traditional breadth-first symbolic approaches, and extend its applicability to models where the possible values of the state variables are not known a priori. The resulting algorithm merges on-the-fly explicit state-space generation of each submodel with symbolic state-space generation of the overall model. Each algorithm we present is implemented in our tool SmArT, which allows us to run fair and detailed comparisons between them on a suite of representative models. Saturation, in particular, is shown to be many orders of magnitude more efficient in terms of memory and time than traditional methods.
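For orientation, the traditional baseline that saturation departs from is a breadth-first reachability fixed point; a sketch (plain Python sets stand in for the decision diagrams, and the example system is invented):

    # Breadth-first reachability as a fixed point. Saturation instead fires
    # events exhaustively per decision-diagram level rather than per frontier.
    def reachable(initial, next_states):
        seen = set(initial)
        frontier = set(initial)
        while frontier:
            new = {t for s in frontier for t in next_states(s)} - seen
            seen |= new
            frontier = new
        return seen

    # Example: a trivial 8-state cycle.
    print(sorted(reachable({0}, lambda s: {(s + 1) % 8})))   # [0, 1, ..., 7]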

6.
We investigate the potential of analysing noisy non-stationary time series by quantising them into streams of discrete symbols and applying finite-memory symbolic predictors. Careful quantisation can reduce the noise in the time series and make model estimation more amenable. We apply the quantisation strategy in a realistic setting involving financial forecasting and trading. In particular, using historical data, we simulate the trading of straddles on the financial indexes DAX and FTSE 100 on a daily basis, based on predictions of the daily volatility differences in the underlying indexes. We propose a parametric, data-driven quantisation scheme which transforms temporal patterns in the series of daily volatility changes into grammatical and statistical patterns in the corresponding symbolic streams. As symbolic predictors operating on the quantised streams, we use classical fixed-order Markov models, variable memory length Markov models, and a novel variation of fractal-based predictors, introduced in its original form in Tiňo and Dorffner [1]. The fractal-based predictors are designed to use deep memory efficiently. We compare the symbolic models with continuous techniques such as time-delay neural networks with continuous and categorical outputs, and GARCH models. Our experiments strongly suggest that the robust information reduction achieved by quantising the real-valued time series is highly beneficial. To deal with non-stationarity in financial daily time series, we propose two techniques that combine 'sophisticated' models fitted on the training data with a fixed set of simple-minded symbolic predictors that do not use older (and potentially misleading) data in the training set. Experimental results show that by quantising the volatility differences and then using symbolic predictive models, market makers can sometimes generate a statistically significant excess profit. We also mention some interesting observations regarding the memory structure in the series of daily volatility differences studied.
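A minimal sketch of the pipeline (a quantile-based quantiser and fixed-order Markov counts; the paper's quantisation scheme is parametric and data-driven, and the data here are synthetic):

    import numpy as np

    def quantise(series, n_symbols=4):
        """Map a real-valued series to symbols via quantile bins (one simple
        choice, not the paper's scheme)."""
        edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(series, edges)

    def markov_predict(symbols, order=2, alphabet=4):
        """Fit a fixed-order Markov model by counting context transitions
        and return the predictive distribution of the next symbol."""
        counts = {}
        for i in range(order, len(symbols)):
            ctx = tuple(symbols[i - order:i])
            counts.setdefault(ctx, np.zeros(alphabet))[symbols[i]] += 1
        dist = counts.get(tuple(symbols[-order:]), np.ones(alphabet))
        return dist / dist.sum()

    rng = np.random.default_rng(1)
    vol_diff = rng.standard_normal(500)   # stand-in for daily volatility changes
    probs = markov_predict(quantise(vol_diff))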

7.
In this paper we present a new approach to the symbolic treatment of quantified statements of the form "Q A's are B's", where A and B are labels denoting sets and Q is a linguistic quantifier interpreted as a proportion evaluated in a qualitative way. Our model can be viewed as a symbolic generalization of statistical conditional probability notions as well as a symbolic generalization of the classical probabilistic operators. The approach is founded on a symbolic finite M-valued logic in which the graduation scale of M symbolic quantifiers is translated into truth degrees. Moreover, we propose symbolic inference rules that allow us to manage quantified statements.
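For intuition, a toy qualitative scale mapping the proportion of A's that are B's to a symbolic quantifier (the labels, cut-points, and M = 5 are invented, not the paper's scale):

    SCALE = [(0.00, "none"), (0.05, "few"), (0.35, "about half"),
             (0.65, "most"), (0.95, "all")]

    def quantifier(a_and_b, a_total):
        """Symbolic quantifier for the proportion of A's that are B's."""
        p = a_and_b / a_total
        label = SCALE[0][1]
        for cut, name in SCALE:
            if p >= cut:
                label = name
        return label

    print(quantifier(18, 20))   # 0.9 -> "most" (below the 0.95 "all" cut-point)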

8.
Description logics with aggregates and concrete domains

9.
Continuous superpositions of Ornstein–Uhlenbeck processes are proposed as a model for asset return volatility. An interesting class of continuous superpositions is defined by a Gamma mixing distribution, which can define long-memory processes; in contrast, the previously studied discrete superpositions cannot generate this behaviour. Efficient Markov chain Monte Carlo methods for Bayesian inference are developed which allow the estimation of such models with leverage effects. The continuous superposition model is applied to both stock index and exchange rate data, and is compared with a two-component superposition on the daily Standard & Poor's 500 index from 1980 to 2000.
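As a rough illustration of the building block, here is a Euler–Maruyama simulation of Gaussian OU paths with mean-reversion rates drawn from a Gamma mixing distribution (every parameter is invented, and the Gaussian paths plus the discrete average are only stand-ins to show mean reversion and the mixing idea, not the paper's continuous superposition model):

    import numpy as np

    def ou_path(theta, mu, sigma, x0, n, dt, rng):
        """Euler-Maruyama simulation of one Ornstein-Uhlenbeck path."""
        x = np.empty(n)
        x[0] = x0
        for t in range(1, n):
            x[t] = x[t-1] + theta * (mu - x[t-1]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
        return x

    rng = np.random.default_rng(2)
    # Discrete stand-in for a superposition: average OU components whose
    # mean-reversion rates come from a Gamma mixing distribution.
    rates = rng.gamma(shape=2.0, scale=1.0, size=10)
    vol_proxy = np.mean(
        [ou_path(th, 0.2, 0.1, 0.2, 1000, 1/252, rng) for th in rates], axis=0)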

10.
We present an adaptive tessellation scheme for surfaces consisting of parametric patches. The resulting tessellations are topologically uniform, yet consistent and watertight across boundaries of patches with different tessellation levels. Our scheme is simple to implement, requires little memory, and is well suited for instancing, a feature available on current graphics processing units that allows a substantial performance increase. We describe how the scheme can be implemented efficiently and give performance benchmarks comparing it to some other approaches.

11.
Convolutional neural networks are among the key technologies behind image recognition and processing applications in artificial intelligence, and their widespread use makes research on their robustness increasingly important. Previous studies of convolutional network robustness have been fairly general and have focused mostly on adversarial robustness; this makes it difficult to probe the mechanisms behind robustness failures in depth and no longer matches the needs of AI development. Drawing on research in neuroscience, we introduce the concept of visual robustness and, by studying the similarity between neural network models and the human visual system, expose an inherent weakness in neural network robustness. We review recent work on neural network robustness and analyze why models lack it. The lack of robustness manifests as sensitivity to small perturbations: networks tend to rely, for computation and inference, on high-frequency information that humans can barely perceive, and because this high-frequency information is easily destroyed by perturbations, the model ends up making wrong judgments. Traditional robustness research mostly concerns the mathematical properties of models and cannot overcome this natural limitation of neural networks. Visual robustness extends the traditional notion. Traditional robustness measures a model's ability to recognize distorted or deformed image samples, requiring a robust model to produce the correct output on both the distorted sample and the original clean one; visual robustness instead measures how consistent the model's judgments are with human judgments. This requires combining the methods and findings of neuroscience and psychology with artificial intelligence. We review the development of neuroscience in the visual domain and discuss how the research methods of cognitive psychology can be applied to the study of neural network robustness. The human visual system has the advantage in learning and abstraction, while neural network models surpass humans in computation and memory speed. The difference between the physiological structure of the human brain and the logical structure of neural network models is a key factor behind the robustness problem. Studying visual robustness requires a deeper understanding of the human visual system: revealing the differences in cognitive mechanisms between the human visual system and neural network models, and improving algorithms accordingly, is the main direction for the development of neural network robustness and of AI algorithms more broadly.
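The high-frequency argument can be probed with a simple experiment (a sketch; `model` is a hypothetical classifier, not anything from the text): low-pass filter an image and compare predictions on the clean and filtered versions. A human typically still recognizes the filtered image, while a network that leans on high-frequency components may change its answer.

    import numpy as np

    def low_pass(image, keep_frac=0.1):
        """Remove high spatial frequencies from a grayscale image via FFT."""
        f = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        mask = np.zeros_like(f, dtype=bool)
        kh, kw = int(h * keep_frac), int(w * keep_frac)
        mask[h//2 - kh:h//2 + kh, w//2 - kw:w//2 + kw] = True
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    # Hypothetical comparison (model is assumed, not defined here):
    # pred_clean = model(img); pred_lp = model(low_pass(img))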

12.
13.
Bias/variance analysis is a useful tool for investigating the performance of machine learning algorithms. Conventional analysis decomposes loss into errors due to aspects of the learning process, but in relational domains, the inference process used for prediction introduces an additional source of error. Collective inference techniques introduce such error both through the use of approximate inference algorithms and through variation in the availability of test-set information. To date, the impact of inference error on model performance has not been investigated. We propose a new bias/variance framework that decomposes loss into errors due to both the learning and inference processes. We evaluate the performance of three relational models on both synthetic and real-world datasets and show that (1) inference can be a significant source of error, and (2) the models exhibit different types of errors as data characteristics are varied.
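For reference, the conventional decomposition that the paper extends, checked numerically for squared loss on a noise-free target (all numbers invented):

    import numpy as np

    rng = np.random.default_rng(3)
    true_y = 2.0
    # Predictions of a learner retrained on 10,000 resampled training sets:
    # systematically off by 0.5 (bias) with spread 0.3 (variance).
    preds = true_y + 0.5 + 0.3 * rng.standard_normal(10_000)

    expected_loss = np.mean((preds - true_y) ** 2)
    bias_sq = (np.mean(preds) - true_y) ** 2
    variance = np.var(preds)
    # Squared loss on a noise-free target: E[(f - y)^2] = bias^2 + variance.
    assert np.isclose(expected_loss, bias_sq + variance)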

14.
Real-time Animation of Dressed Virtual Humans
In this paper, we describe a method for cloth animation in real time. The algorithm works in a hybrid manner, exploiting the merits of both physics-based and geometric deformations. It makes use of predetermined conditions between the cloth and the body model, avoiding complex collision detection and physical deformation wherever possible. Garments are segmented into pieces that are simulated by various algorithms, depending on how they are laid on the body surface and whether they stick to or flow over it. Tests show that the method is well suited to fully dressed virtual human models, achieving real-time performance compared with ordinary cloth simulations.

15.
The lattice-Boltzmann method is well suited for implementation in single-instruction multiple-data (SIMD) environments provided by general-purpose graphics processing units (GPGPUs). This paper discusses the integration of these GPGPU programs with OpenMP to create lattice-Boltzmann applications for multi-GPU clusters. In addition to the standard single-phase, single-component lattice-Boltzmann method, the performance of more complex multiphase, multicomponent models is also examined. The contributions of various GPU lattice-Boltzmann parameters to the performance are examined and quantified with a statistical model of the performance using analysis of variance (ANOVA). By examining single- and multi-GPU lattice-Boltzmann simulations with ANOVA, we show that all the lattice-Boltzmann simulations primarily depend on effects corresponding to simulation geometry and decomposition, and not on the architectural aspects of the GPU. Additionally, using ANOVA we confirm that the metrics of Efficiency and Utilization are not suitable for memory-bandwidth-dependent codes.

16.
Voxel-based approaches are today's standard for encoding volume data. Recently, directed acyclic graphs (DAGs) were successfully used for compressing sparse voxel scenes as well, but they are restricted to a single bit of (geometry) information per voxel. We present a method to compress arbitrary data, such as colors, normals, or reflectance information. By decoupling geometry and voxel data via a novel mapping scheme, we are able to apply the DAG principle to encode the topology, while using a palette-based compression for the voxel attributes, leading to a drastic memory reduction. Our method outperforms existing state-of-the-art techniques and is well suited for GPU architectures. We achieve real-time performance on commodity hardware for colored scenes with up to 17 hierarchical levels (a 128K³ voxel resolution), which are stored fully in core.
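A toy version of the palette idea for voxel attributes (four voxels and two colors, all invented; the paper's contribution is the mapping scheme that decouples these attributes from the geometry DAG): each distinct color is stored once and every voxel keeps only a small integer index.

    import numpy as np

    # Four voxels' RGB colors; three are duplicates (values invented).
    colors = np.array([[255, 0, 0], [255, 0, 0], [0, 255, 0], [255, 0, 0]])
    palette, indices = np.unique(colors, axis=0, return_inverse=True)

    bits_raw = colors.size * 8                                # 96 bits total
    bits_idx = len(indices) * int(np.ceil(np.log2(len(palette))))  # 4 bits
    print(palette, indices, bits_raw, bits_idx)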

17.
18.
We present a system that can separate and recognize the simultaneous speech of two people recorded in a single channel. Applied to the monaural speech separation and recognition challenge, the system outperformed all other participants, including human listeners, with an overall recognition error rate of 21.6%, compared to the human error rate of 22.3%. The system consists of a speaker recognizer, a model-based speech separation module, and a speech recognizer. For the separation models we explored a range of speech models that incorporate different levels of constraints on temporal dynamics to help infer the source speech signals. The system achieves its best performance when the model of temporal dynamics closely captures the grammatical constraints of the task. For inference, we compare a 2-D Viterbi algorithm and two loopy belief-propagation algorithms. We show how belief propagation reduces the complexity of temporal inference from exponential to linear in the number of sources and the size of the language model. The best belief-propagation method results in nearly the same recognition error rate as exact inference.
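The complexity claim can be made concrete with a back-of-the-envelope count (K and S are invented and constants are ignored): exact joint temporal inference, such as a Viterbi search over the product state space of the sources, scales exponentially in the number of sources, while loopy belief propagation passes messages per source.

    # Exact joint inference vs. loopy belief propagation over S simultaneous
    # sources, each with K temporal-model states (numbers illustrative).
    K, S = 1000, 2
    joint_states = K ** S    # exact: 1,000,000 joint states per frame
    bp_messages = S * K      # loopy BP: ~2,000 per-source messages per frame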

19.
This paper provides a rationale for the possible development of software interfaces whose purpose is to give an improvising musician the opportunity to construct a performance that, on the one hand, allows them to engage with sonic and visual material produced by generative algorithms and, on the other, challenges the conscious and subconscious cognitive processes that govern their normal performance practice. Both cognitive psychology and communication theory offer great insight into the evolution of human cognisance and point to models with which the activity of musical improvisation can be interpreted. In the course of this paper I have tried to relate academic concepts and theories to material gleaned from improvising musicians, giving credence to their opinions and drawing inferences from their experiences.

20.