Similar Literature
1.
Unsupervised topological ordering, similar to Kohonen's (1982, Biological Cybernetics, 43: 59-69) self-organizing feature map, was achieved in a connectionist module for competitive learning (a CALM Map) by internally regulating the learning rate and the size of the active neighbourhood on the basis of input novelty. In this module, winner-take-all competition and the 'activity bubble' are due to graded lateral inhibition between units. It tends to separate representations as far apart as possible, which leads to interpolation abilities and an absence of catastrophic interference when the interfering set of patterns forms an interpolated set of the initial data set. More than the Kohonen maps, these maps provide an opportunity for building psychologically and neurophysiologically motivated multimodular connectionist models. As an example, the dual pathway connectionist model for fear conditioning by Armony et al. (1997, Trends in Cognitive Science, 1: 28-34) was rebuilt and extended with CALM Maps. If the detection of novelty enhances memory encoding in a canonical circuit, such as the CALM Map, this could explain the finding of large distributed networks for novelty detection (e.g. Knight and Scabini, 1998, Journal of Clinical Neurophysiology, 15: 3-13) in the brain.
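
The novelty-regulated learning described above can be illustrated with a minimal self-organizing-map-style sketch in Python. It is not the authors' CALM Map (which obtains the winner and the 'activity bubble' through graded lateral inhibition); the novelty measure and the way it scales the learning rate and neighbourhood are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 25, 2                                   # a 5x5 map over 2-D inputs
weights = rng.uniform(0, 1, (n_units, dim))
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)

def novelty(x):
    """Hypothetical novelty measure: distance from x to its best-matching unit."""
    return np.min(np.linalg.norm(weights - x, axis=1))

for step in range(2000):
    x = rng.uniform(0, 1, dim)
    nov = novelty(x)                                   # large for unfamiliar inputs
    lr = 0.5 * nov                                     # learning rate grows with novelty
    sigma = 0.5 + 2.0 * nov                            # neighbourhood widens with novelty
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    d = np.linalg.norm(grid - grid[bmu], axis=1)       # distance on the map lattice
    h = np.exp(-d**2 / (2 * sigma**2))                 # 'activity bubble' around the winner
    weights += lr * h[:, None] * (x - weights)

print(np.round(weights[:5], 2))                        # weights spread out over the input space
```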

2.
There exist many ideas and assumptions about the development and meaning of modularity in biological and technical neural systems. We empirically study the evolution of connectionist models in the context of modular problems. For this purpose, we define quantitative measures for the degree of modularity and monitor them during evolutionary processes under different constraints. It turns out that the modularity of the problem is reflected by the architecture of adapted systems, although learning can counterbalance some imperfection of the architecture. The demand for fast learning systems increases the selective pressure towards modularity.
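
To make 'quantitative measures for the degree of modularity' concrete, here is one possible measure over a weight matrix, sketched in Python: the share of connection weight that stays inside prescribed modules, corrected for what a uniformly wired network of the same size would yield. The paper defines its own measures; this function and the toy networks below are illustrative assumptions.

```python
import numpy as np

def modularity_degree(W, modules):
    """Fraction of total absolute weight inside the modules, minus the chance level
    expected for a uniformly wired network of the same size (hypothetical measure)."""
    A = np.abs(W)
    n = A.shape[0]
    within = sum(A[np.ix_(m, m)].sum() for m in modules) / A.sum()
    chance = sum((len(m) / n) ** 2 for m in modules)
    return within - chance

rng = np.random.default_rng(0)
modules = [list(range(0, 5)), list(range(5, 10))]

modular = np.zeros((10, 10))
for m in modules:
    modular[np.ix_(m, m)] = rng.uniform(0.5, 1.0, (5, 5))   # connections only within modules
uniform = rng.uniform(0.5, 1.0, (10, 10))                    # connections everywhere

print(modularity_degree(modular, modules))   # close to 0.5: strongly modular
print(modularity_degree(uniform, modules))   # close to 0.0: no modular structure
```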

3.
The difference between integral and separable interaction of dimensions is a classic problem in cognitive psychology (Garner 1970, American Psychologist, 25: 350-358; Shepard 1964, Journal of Mathematical Psychology, 1: 54-87) and remains an essential component of most current experimental and theoretical analyses of category learning (e.g. Ashby and Maddox 1994, Journal of Mathematical Psychology, 38: 423-466; Goldstone 1994, Journal of Experimental Psychology: General, 123: 178-200; Kruschke 1993, Connection Science, 5: 3-36; Melara et al. 1993, Journal of Experimental Psychology: Human Perception & Performance, 19: 1082-1104; Nosofsky 1992, Multidimensional Models of Perception and Cognition, Hillsdale, NJ: Lawrence Erlbaum). So far the problem has been addressed through post hoc analysis in which empirical evidence of integral and separable processing is used to fit human data, showing how the impact of a pair of dimensions interacting in an integral or a separable manner enters into later learning processes. In this paper, we argue that a mechanistic connectionist explanation for variations in dimensional interactions can provide a new perspective through exploration of how similarities between stimuli are transformed from physical to psychological space when learning to identify, discriminate and categorize them. We substantiate this claim by demonstrating how even a standard backpropagation network combined with a simple image-processing Gabor filter component provides limited but clear potential to process monochromatic stimuli that are composed of integral pairs of dimensions differently from monochromatic stimuli that are composed of separable pairs of dimensions. Interestingly, the responses from Gabor filters are shown already to capture most of the dimensional interaction, which in turn can be operated upon by the neural network during a given learning task. In addition, we introduce a basic attention mechanism to backpropagation that gives it the ability to attend selectively to relevant dimensions and illustrate how this serves the model in solving a filtration versus condensation task (Kruschke 1993, Connection Science, 5: 3-36). The model may serve as a starting point in characterizing the general properties of the human perceptual system that cause some pairs of physical dimensions to be treated as integrally interacting and other pairs as separable. An improved understanding of these properties will aid studies in perceptual and category learning, selective attention effects and influences of higher cognitive processes on initial perceptual representations.
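
The front end sketched above (Gabor filter responses feeding a standard backpropagation network) might look roughly as follows in Python. The filter-bank parameters, the pooling of rectified responses into a feature vector, and the random test stimulus are assumptions for illustration, not the authors' exact preprocessing.

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """A standard 2-D Gabor filter: cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   freqs=(0.1, 0.25), size=11, sigma=3.0):
    """Pool the rectified response of each filter into one value per (orientation, frequency)."""
    h, w = image.shape
    feats = []
    for theta in thetas:
        for freq in freqs:
            k = gabor_kernel(size, theta, freq, sigma)
            resp = np.array([[np.sum(image[i:i + size, j:j + size] * k)   # naive valid-mode filtering
                              for j in range(w - size + 1)]
                             for i in range(h - size + 1)])
            feats.append(np.abs(resp).mean())
    return np.array(feats)

# Hypothetical monochromatic stimulus; the resulting feature vector would then be the
# input to a standard backpropagation network for identification or categorization.
stimulus = np.random.default_rng(1).uniform(0, 1, (32, 32))
print(np.round(gabor_features(stimulus), 3))
```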

4.
Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1–2), 3–71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358–379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent – not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.
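
The Vector Symbolic Architecture operations referred to above can be illustrated with a Holographic Reduced Representation sketch: roles and fillers are random high-dimensional vectors, binding is circular convolution, superposition is addition, and retrieval is unbinding followed by a nearest-neighbour clean-up. The role and filler names are hypothetical; this shows the generic memory scheme, not the authors' trained recurrent network.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024                                   # dimensionality of the distributed codes

def vec():
    """Random holographic vector (an assumption: HRR-style codes)."""
    return rng.normal(0, 1 / np.sqrt(D), D)

def bind(a, b):
    """Circular convolution, the binding operation of Holographic Reduced Representations."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    """Bind with the approximate inverse (involution) of the cue to recover the filler."""
    inv = np.concatenate(([cue[0]], cue[:0:-1]))
    return bind(trace, inv)

# Toy 'scene': AGENT=dog, ACTION=chases, PATIENT=cat, superposed into a single trace.
roles = {r: vec() for r in ("AGENT", "ACTION", "PATIENT")}
fillers = {f: vec() for f in ("dog", "chases", "cat", "bird", "sleeps")}
trace = sum(bind(roles[r], fillers[f])
            for r, f in (("AGENT", "dog"), ("ACTION", "chases"), ("PATIENT", "cat")))

# Query the trace: which filler occupies the PATIENT role?
noisy = unbind(trace, roles["PATIENT"])
best = max(fillers, key=lambda f: np.dot(noisy, fillers[f]))   # clean-up by nearest neighbour
print(best)   # expected: 'cat'
```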

5.
In this two-part series, we explore how a perceptually based foundation for natural language semantics might be acquired, via association of sensory/motor experiences with verbal utterances describing those experiences. In Part 1, we introduce a novel neural network architecture, termed Katamic memory, that is inspired by the neurocircuitry of the cerebellum and that exhibits (a) rapid/robust sequence learning/recognition and (b) integrated learning and performance. These capabilities are due to novel neural elements, which model dendritic structure and function in greater detail than in standard connectionist models. In Part 2, we describe the DETE system, a massively parallel procedural/neural hybrid model that utilizes over 50 Katamic memory modules to perform two associative learning tasks: (a) verbal-to-visual/motor association—given a verbal sequence, DETE learns to regenerate a neural representation of the visual sequence being described and/or to carry out motor commands; and (b) visual/motor-to-verbal association—given a visual/motor sequence, DETE learns to produce a verbal sequence describing the visual input. DETE can learn verbal sequences describing spatial relations and motions of 2D 'blob-like' objects; in addition, the system can generalize to novel inputs. DETE has been tested successfully on small, restricted subsets of English and Spanish—languages that differ in inflectional properties, word order and how they categorize perceptual reality.

6.
This paper deals with the integration of neural and symbolic approaches. It focuses on associative memories where a connectionist architecture tries to provide a storage and retrieval component for the symbolic level. In this light, the classic model for associative memory, the Hopfield network, is briefly reviewed. Then, a new model for associative memory, the hybrid Hopfield-clique network, is presented in detail. Its application to a typically symbolic task, the post-processing of the output of an optical character recognizer, is also described. In the author's view, the hybrid Hopfield-clique network constitutes an example of a successful integration of the two approaches. It uses a symbolic learning scheme to train a connectionist network, and through this integration, it can provide perfect storage and recall. As a conclusion, an analysis of what can be learned from this specific architecture is attempted. In the case of this model, a guarantee for perfect storage and recall can only be given because it was possible to analyze the problem using the well-defined symbolic formalism of graph theory. In general, we think that finding an adequate formalism for a given problem is an important step towards solving it.
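
For reference, the classical Hopfield model reviewed here stores patterns by Hebbian (outer-product) learning and recalls them by iterated thresholding; a minimal sketch follows. It illustrates the standard network only, not the hybrid Hopfield-clique model or its graph-theoretic training scheme.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product storage for the classical Hopfield network."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, sweeps=10, seed=0):
    """Asynchronous updates until the state settles on a stored attractor."""
    state = state.copy()
    order = np.random.default_rng(seed)
    for _ in range(sweeps):
        for i in order.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
probe = patterns[0].copy()
probe[0] *= -1                     # corrupt one bit of the first stored pattern
print(recall(W, probe))            # should reproduce patterns[0] exactly
```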

7.
Berkeley et al. (1995, Connection Science, 7: 167–186) introduced a novel technique for analysing the hidden units of connectionist networks that had been trained using the backpropagation learning procedure. The literature concerning banding analysis is equivocal with respect to the kinds of processing units this technique can be used on. In this paper, it will be shown that, contrary to the claims in some published sources, banding analysis can be conducted on networks that use standard processing units that have a sigmoid activation function. The analytic process is then illustrated and the potential benefits of this kind of technique are discussed.
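
The kind of inspection banding analysis performs can be sketched on a small backpropagation network with sigmoid hidden units: train on a toy task, then examine each hidden unit's activation over the whole training set and look for discrete bands of values. The 4-bit majority task and the reduction of the analysis to printing sorted activations are simplifying assumptions; the published procedure works with jittered scatterplots of the activations.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy task: 4-bit majority, learned by a standard backpropagation network
# with sigmoid hidden and output units.
X = np.array([[(i >> b) & 1 for b in range(4)] for i in range(16)], dtype=float)
y = (X.sum(axis=1, keepdims=True) >= 2).astype(float)

W1, b1 = rng.normal(0, 1, (4, 6)), np.zeros(6)
W2, b2 = rng.normal(0, 1, (6, 1)), np.zeros(1)
lr = 0.1
for _ in range(20000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

# 'Banding analysis' step (simplified): inspect each hidden unit's activation
# across all training patterns and look for discrete bands of values.
h = sig(X @ W1 + b1)
for unit in range(h.shape[1]):
    print(f"unit {unit}:", np.round(np.sort(h[:, unit]), 2))
```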

8.
A connectionist architecture is developed that can be used for modeling choice probabilities and reaction times in identification tasks. The architecture consists of a feedforward network and a decoding module, and learning is by mean-variance back-propagation, an extension of the standard back-propagation learning algorithm. We suggest that the new learning procedure leads to a better model of human learning in simple identification tasks than does standard back-propagation. Choice probabilities are modeled by the input-output relations of the network and reaction times are modeled by the time taken for the network, particularly the decoding module, to achieve a stable state. In this paper, the model is applied to the identification of unidimensional stimuli; applications to the identification of multidimensional stimuli—visual displays and words—are mentioned and presented in more detail in other papers. The strengths and weaknesses of this connectionist approach vis-à-vis other approaches are discussed.

9.
The paper discusses a connectionist implementation of knowledge engineering concepts and concepts related to production systems in particular. Production systems are one of the most used artificial intelligence techniques as well as a widely explored model of cognition. The use of neural networks for building connectionist production systems opens the door for developing production systems with partial match and approximate reasoning. An architecture of a neural production system (NPS) and its third realization—NPS3, designed to facilitate approximate reasoning—are presented in the paper. NPS3 facilitates partial match between facts and rules, variable binding, different conflict resolution strategies and chain inference. Facts are represented in a working memory by so-called certainty degrees. Different inference control parameters are attached to every production rule. Some of them are known neuronal parameters, receiving an engineering meaning here. Others, which have their context in knowledge engineering, have been implemented in a connectionist way. The partial match implemented in NPS3 is demonstrated on the same test production system as used by other authors. The ability of NPS3 for approximate reasoning is illustrated by reasoning over a set of simple diagnostic productions and a set of decision support fuzzy rules.
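
The flavour of partial match with certainty degrees can be conveyed in a few lines of Python: condition certainties from working memory are combined, compared against a sensitivity threshold, and propagated to the conclusion weighted by the rule strength. The rule set, thresholds and combination functions below are assumptions for illustration, not NPS3's actual connectionist realization.

```python
# Partial match with certainty degrees, in the spirit of a neural production system:
# condition certainties in working memory are combined (here by min), compared with a
# sensitivity threshold, and propagated to the conclusion weighted by the rule strength.
working_memory = {"fever": 0.8, "cough": 0.6, "rash": 0.1}

rules = [
    # (conditions, conclusion, rule strength, firing threshold)
    (["fever", "cough"], "flu", 0.9, 0.5),
    (["rash"], "allergy", 0.8, 0.5),
]

def match_degree(conditions):
    """Degree to which a rule's conditions are satisfied by the working memory."""
    return min(working_memory.get(c, 0.0) for c in conditions)

for conditions, conclusion, strength, threshold in rules:
    m = match_degree(conditions)
    if m >= threshold:                                   # partial match above threshold: fire
        certainty = m * strength                         # certainty of the inferred fact
        working_memory[conclusion] = max(working_memory.get(conclusion, 0.0), certainty)

print(working_memory)   # 'flu' is asserted with certainty 0.54; the 'allergy' rule does not fire
```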

10.
This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and, possibly, nodes, subject to brain-like topological constraints. The information processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.

11.
We introduce a new connectionist paradigm which views neural networks as implementations of syntactic pattern recognition algorithms. Thus, learning is seen as a process of grammatical inference and recognition as a process of parsing. Naturally, the possible realizations of this theme are diverse; in this paper we present some initial explorations of the case where the pattern grammar is context-free, inferred (from examples) by a separate procedure, and then mapped onto a connectionist network. Unlike most neural networks for which structure is pre-defined, the resulting network has as many levels as are necessary and arbitrary connections between levels. Furthermore, by the addition of a delay element, the network becomes capable of dealing with time-varying patterns in a simple and efficient manner. Since grammatical inference algorithms are notoriously expensive computationally, we place an important restriction on the type of context-free grammars which can be inferred. This dramatically reduces complexity. The resulting grammars are called 'strictly-hierarchical' and map straightforwardly onto a temporal connectionist parser (TCP) using a relatively small number of neurons. The new paradigm is applicable to a variety of pattern-processing tasks such as speech recognition and character recognition. We concentrate here on hand-written character recognition; performance in other problem domains will be reported in future publications. Results are presented to illustrate the performance of the system with respect to a number of parameters, namely, the inherent variability of the data, the nature of the learning (supervised or unsupervised) and the details of the clustering procedure used to limit the number of non-terminals inferred. In each of these cases (eight in total), we contrast the performance of a stochastic and a non-stochastic TCP. The stochastic TCP does have greater powers of discrimination, but in many cases the results were very similar. If this result holds in practical situations, it is important, because the non-stochastic version has a straightforward implementation in silicon.

12.
This paper introduces a new type of artificial neural network (GasNets) and shows that it is possible to use evolutionary computing techniques to find robot controllers based on them. The controllers are built from networks inspired by the modulatory effects of freely diffusing gases, especially nitric oxide, in real neuronal networks. Evolutionary robotics techniques were used to develop control networks and visual morphologies to enable a robot to achieve a target discrimination task under very noisy lighting conditions. A series of evolutionary runs with and without the gas modulation active demonstrated that networks incorporating modulation by diffusing gases evolved to produce successful controllers considerably faster than networks without this mechanism. GasNets also consistently achieved evolutionary success in far fewer evaluations than were needed when using more conventional connectionist style networks.
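
The gist of gas modulation can be sketched as follows: an emitting node builds up a 'gas' concentration that falls off with distance on a 2-D plane and decays over time, and the local concentration scales the gain of each node's transfer function. The geometry, decay constants and gain rule are assumptions, not the original GasNet equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
pos = rng.uniform(0, 1, (n, 2))            # nodes placed on a 2-D plane
W = rng.normal(0, 1, (n, n))               # ordinary synaptic connections
emitter = 0                                # node 0 emits the diffusing gas
act = np.zeros(n)
gas = np.zeros(n)                          # gas concentration at each node's location

for t in range(50):
    dist = np.linalg.norm(pos - pos[emitter], axis=1)
    gas += 0.3 * max(act[emitter], 0.0) * np.exp(-dist / 0.3)   # emission falls off with distance
    gas *= 0.9                                                   # and decays over time
    gain = 1.0 + gas                       # modulation: gas steepens the transfer function
    inp = W @ act
    inp[1] += np.sin(0.3 * t)              # external drive on node 1
    act = np.tanh(gain * inp)

print(np.round(act, 2))
```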

13.
Online self-learning tracking control based on neural networks and its application in servo systems
To address the problems of traditional adaptive and self-tuning control, an online self-learning control method based on neural networks is proposed. It achieves both online identification of the plant model and online design of the controller, while avoiding the real-time control difficulties that neural network control methods usually face, making online learning control of complex systems possible. Simulations show that the method has good robustness and control accuracy.
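
A minimal sketch of the idea (identify the plant online and redesign the controller online at every step) is given below, with a single linear neuron updated by LMS standing in for the neural network identifier. The first-order plant, gains, clipping and reference signal are assumptions for illustration, not the method or the servo system of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5        # unknown first-order plant: y[k+1] = a*y[k] + b*u[k]
a_hat, b_hat = 0.0, 0.1          # online model (a single linear 'neuron')
y, lr = 0.0, 0.05

for k in range(200):
    ref = 1.0 if k < 100 else -1.0                 # reference to be tracked
    u = np.clip((ref - a_hat * y) / b_hat, -3, 3)  # controller designed from the current model
    y_next = a_true * y + b_true * u + 0.01 * rng.normal()
    err = y_next - (a_hat * y + b_hat * u)         # identification (prediction) error
    a_hat += lr * err * y                          # online LMS update of the model
    b_hat += lr * err * u
    y = y_next

print(round(y, 2))   # should sit near the current reference (-1); the model need not be exact
```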

14.
Fodor and Pylyshyn argued that connectionist models could not be used to exhibit and explain a phenomenon that they termed systematicity, and which they explained by possession of a combinatorial syntax and semantics for mental representations and structure sensitivity of mental processes. This inability of connectionist models, they argued, was particularly serious since it meant that these models could not be used as alternative models to classical symbolic models to explain cognition. In this paper, a connectionist model is used to identify some properties which collectively show that connectionist networks supply means for accomplishing a stronger version of systematicity than Fodor and Pylyshyn opted for. It is argued that 'context-dependent systematicity' is achievable within a connectionist framework. The arguments put forward rest on a particular formulation of content and context of connectionist representation, firmly and technically based on connectionist primitives in a learning environment. The perspective is motivated by the fundamental differences between the connectionist and classical architectures, in terms of prerequisites, lower-level functionality and inherent constraints. The claim is supported by a set of experiments using a connectionist architecture that demonstrates both an ability to enforce what Fodor and Pylyshyn term systematic and nonsystematic processing using a single mechanism, and how novel items can be handled without prior classification. The claim relies on extended learning feedback which enforces representational context dependence.

15.
While retroactive interference (RI) is a well-known phenomenon in humans, the differential effect of the structure of the learning material has only seldom been addressed. Mirman and Spivey (2001, Connection Science, 13: 257–275) reported behavioural results that show more RI for subjects exposed to 'Structured' items than for those exposed to 'Unstructured' items. These authors claimed that two complementary memory systems functioning on radically different neural mechanisms are required to account for the behavioural results they reported. Using the same paradigm but controlling for proactive interference, we found the opposite pattern of results, that is, more RI for subjects exposed to 'Unstructured' items than for those exposed to 'Structured' items (experiment 1). Two additional experiments showed that this structure effect on RI is a genuine one. Experiment 2 confirmed that the design of experiment 1 forced the subjects from the 'Structured' condition to learn the items at the exemplar level, thus allowing for a close match between the two to-be-compared conditions (as 'Unstructured' condition items can be learned only at the exemplar level). Experiment 3 verified that the subjects from the 'Structured' condition could generalize to novel items. Simulations conducted with a three-layer neural network, that is, a single-memory system, produced a pattern of results that mirrors the structure effect reported here. By construction, Mirman and Spivey's architecture cannot simulate this behavioural structure effect. The results are discussed within the framework of catastrophic interference in distributed neural networks, with an emphasis on the relevance of these networks to the modelling of human memory.
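
The general logic of such simulations can be sketched as follows: train a three-layer backpropagation network on a first list, then on a second list without rehearsal, and measure retroactive interference as the drop in performance on the first list. The random binary item sets stand in for the experimental material; the 'Structured'/'Unstructured' manipulation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def train(net, X, y, epochs=5000, lr=0.3):
    """Plain backpropagation on a three-layer network (weights updated in place)."""
    W1, b1, W2, b2 = net
    for _ in range(epochs):
        h = sig(X @ W1 + b1); out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

def accuracy(net, X, y):
    W1, b1, W2, b2 = net
    return np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5) == (y > 0.5))

# Hypothetical item lists A and B: random 10-bit patterns with random binary responses.
A_X, B_X = rng.integers(0, 2, (8, 10)).astype(float), rng.integers(0, 2, (8, 10)).astype(float)
A_y, B_y = rng.integers(0, 2, (8, 1)).astype(float), rng.integers(0, 2, (8, 1)).astype(float)

net = [rng.normal(0, 0.5, (10, 12)), np.zeros(12), rng.normal(0, 0.5, (12, 1)), np.zeros(1)]
train(net, A_X, A_y)
before = accuracy(net, A_X, A_y)   # performance on list A right after learning it
train(net, B_X, B_y)               # sequential learning of list B, no rehearsal of A
after = accuracy(net, A_X, A_y)    # RI = drop in performance on A caused by learning B
print(before, after)
```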

16.
Weight-perturbation (WP) algorithms for supervised and/or reinforcement learning offer improved biological plausibility over backpropagation because of their reduced circuitry requirements for realization in neural hardware. This paper explores the hypothesis that biological synaptic noise might serve as the substrate by which weight perturbation is implemented. We explore the basic synaptic noise hypothesis (BSNH), which embodies the weakest assumptions about the underlying neural circuitry required to implement WP algorithms. This paper identifies relevant biological constraints consistent with the BSNH, taxonomizes existing WP algorithms with regard to consistency with those constraints, and proposes a new WP algorithm that is fully consistent with the constraints. By comparing the learning effectiveness of these algorithms via simulation studies, it is found that all of the algorithms can support traditional neural network learning tasks and have similar generalization characteristics, although the results suggest a trade-off between learning efficiency and biological accuracy.
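
A generic weight-perturbation sketch is given below; it is not any one of the specific algorithms taxonomized in the paper. Gaussian 'synaptic noise' perturbs the weights, the resulting change in error is measured, and perturbations that lowered the error are reinforced. The toy regression task and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: a single linear unit has to match a target weight vector.
X = rng.normal(0, 1, (100, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

def loss(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(3)
lr, sigma = 0.05, 0.01            # sigma plays the role of 'synaptic noise' amplitude
for _ in range(5000):
    noise = rng.normal(0, sigma, 3)            # perturb every weight at once
    delta = loss(w + noise) - loss(w)          # change in error caused by the perturbation
    w -= lr * (delta / sigma**2) * noise       # reinforce perturbations that reduced the error

print(np.round(w, 2))   # approaches w_true without ever computing an explicit gradient
```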

17.
Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment. These algorithms are either not local in time or not local in space. Those algorithms which are local in both time and space usually cannot deal sensibly with 'hidden units'. In contrast, as far as we can judge, learning rules in biological systems with many 'hidden units' are local in both space and time. In this paper we propose a parallel on-line learning algorithm which performs local computations only, yet is still designed to deal with hidden units and with units whose past activations are 'hidden in time'. The approach is inspired by Holland's idea of the bucket brigade for classifier systems, which is transformed to run on a neural network with fixed topology. The result is a feedforward or recurrent 'neural' dissipative system which is consuming 'weight-substance' and permanently trying to distribute this substance onto its connections in an appropriate way. Simple experiments demonstrating the feasibility of the algorithm are reported.

18.
Continuous-valued recurrent neural networks can learn mechanisms for processing context-free languages. The dynamics of such networks is usually based on damped oscillation around fixed points in state space and requires that the dynamical components are arranged in certain ways. It is shown that qualitatively similar dynamics with similar constraints hold for a^n b^n c^n, a context-sensitive language. The additional difficulty with a^n b^n c^n, compared with the context-free language a^n b^n, consists of 'counting up' and 'counting down' letters simultaneously. The network solution is to oscillate in two principal dimensions, one for counting up and one for counting down. This study focuses on the dynamics employed by the sequential cascaded network, in contrast to the simple recurrent network, and the use of backpropagation through time. Found solutions generalize well beyond training data, however, learning is not reliable. The contribution of this study lies in demonstrating how the dynamics in recurrent neural networks that process context-free languages can also be employed in processing some context-sensitive languages (traditionally thought of as requiring additional computation resources). This continuity of mechanism between language classes contributes to our understanding of neural networks in modelling language learning and processing.
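
The counting requirement can be made concrete with a hand-wired (not learned) predictor: one counter is driven up by a's and down by b's, a second up by b's and down by c's, and the legal next symbols follow from the two counters alone. This illustrates the task the recurrent network has to solve, not the sequential cascaded network's oscillatory solution.

```python
def legal_next(prefix):
    """Legal continuations after a valid prefix of a^n b^n c^n (n >= 1)."""
    ab = bc = 0                    # the two counting dimensions
    seen_b = seen_c = False
    for ch in prefix:
        if ch == 'a':
            ab += 1                # count up over a's
        elif ch == 'b':
            ab -= 1; bc += 1       # count down against a's, up over b's
            seen_b = True
        elif ch == 'c':
            bc -= 1                # count down against b's
            seen_c = True
    if not seen_b:
        return {'a', 'b'} if ab > 0 else {'a'}
    if not seen_c:
        return {'b'} if ab > 0 else {'c'}
    return {'c'} if bc > 0 else {'end'}

for prefix in ['', 'a', 'aab', 'aabb', 'aabbc', 'aabbcc']:
    print(repr(prefix), legal_next(prefix))
```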

19.
The present paper describes the application of neural networks to obtain a model for estimating the stability of the gas metal arc welding (GMAW) process. A neural network has been developed to obtain and model the relationships between the acoustic emission (AE) signal parameters and the stability of the GMAW process. Statistical and temporal parameters of the AE signals have been used as inputs to the neural networks; a multilayer feedforward neural network has been used, trained with the backpropagation method using the Levenberg–Marquardt algorithm for different network architectures. Different welding conditions have been studied to analyse the influence of the process parameters on the acoustic signals. The AE signals have been processed using the wavelet transform and have been characterised statistically. Experimental results are provided to illustrate the proposed approach. Finally, a statistical analysis for the validation of the experimental results obtained is presented. As the main result of the study, the effectiveness of artificial neural networks for modelling stability analysis in welding processes has been demonstrated. The regression analysis demonstrates the validity of the neural networks for predicting the stability of the welding process from the calculated statistical characterisation of the AE signal parameters.
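
The preprocessing pipeline described (wavelet decomposition of the AE signal followed by statistical characterisation, with the resulting features used as network inputs) might be sketched as below. The Haar transform, the particular statistics (band RMS and kurtosis) and the synthetic burst are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Plain Haar wavelet decomposition (a stand-in for the wavelet transform used in the paper)."""
    approx, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        if len(approx) % 2:                        # pad to even length
            approx = np.append(approx, approx[-1])
        pairs = approx.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return approx, details

def ae_features(signal):
    """Statistical characterisation of an AE burst; the exact feature set is an assumption."""
    _, details = haar_dwt(signal)
    feats = []
    for d in details:
        feats += [np.sqrt(np.mean(d**2)),                               # RMS energy per band
                  np.mean((d - d.mean())**4) / (d.std()**4 + 1e-12)]    # kurtosis per band
    return np.array(feats)

# Hypothetical AE burst; the resulting feature vector would be the input layer of the
# multilayer feedforward network trained with Levenberg-Marquardt.
burst = np.random.default_rng(0).normal(0, 1, 1024) * np.hanning(1024)
print(np.round(ae_features(burst), 3))
```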

20.
This paper focuses on adaptive motor control in the kinematic domain. Several motor-learning strategies from the literature are adapted to kinematic problems: 'feedback-error learning', 'distal supervised learning', and 'direct inverse modelling' (DIM). One of these learning strategies, DIM, is significantly enhanced by combining it with abstract recurrent neural networks. Moreover, a newly developed learning strategy ('learning by averaging') is presented in detail. The performance of these learning strategies is compared on different learning tasks with two simulated robot setups (a robot-camera-head and a planar arm). The results indicate a general superiority of DIM when combined with abstract recurrent neural networks. Learning by averaging shows consistent success if the motor task is constrained by special requirements.
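
Direct inverse modelling in the kinematic domain can be sketched for a planar two-link arm: execute random joint commands, record the resulting end-effector positions, and learn the outcome-to-command mapping. Here a nearest-neighbour memory plays the role of the learned inverse model rather than the abstract recurrent networks of the paper; the link lengths and target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8                          # link lengths of a planar two-joint arm

def forward(theta):
    """Forward kinematics: joint angles -> end-effector position."""
    t1, t2 = theta[..., 0], theta[..., 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

# Direct inverse modelling: execute random motor commands, observe the outcomes,
# and learn the mapping outcome -> command. A nearest-neighbour memory stands in
# for the learned inverse model here.
commands = rng.uniform(-np.pi, np.pi, (5000, 2))
outcomes = forward(commands)

def inverse(target):
    idx = np.argmin(np.linalg.norm(outcomes - target, axis=1))
    return commands[idx]

target = np.array([1.2, 0.6])
theta = inverse(target)
print(target, np.round(forward(theta), 3))   # reached position should lie close to the target
```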
