Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Speech perception relies on the human ability to decode continuous, analogue sound pressure waves into discrete, symbolic labels (phonemes) with linguistic meaning. Aspects of this signal-to-symbol transformation have been intensively studied over many decades, using psychophysical procedures. The perception of (synthetic) syllable-initial stop consonants has been especially well studied, since these sounds display a marked categorization effect: they are typically dichotomised into voiced and unvoiced classes according to their voice onset time (VOT). In this case, the category boundary is found to have a systematic relation to the (simulated) place of articulation, but there is no currently accepted explanation of this phenomenon. Categorization effects have now been demonstrated in a variety of animal species as well as humans, indicating that their origins lie in general auditory and/or learning mechanisms, rather than in some phonetic module specialized to human speech processing. In recent work, we have demonstrated that appropriately trained computational learning systems (neural networks) also display the same systematic behaviour as human and animal listeners. Networks are trained on simulated patterns of auditory-nerve firings in response to synthetic continua of stop-consonant/vowel syllables varying in place of articulation and VOT. Unlike real listeners, such a software model is amenable to analysis aimed at extracting the phonetic knowledge acquired in training, thereby providing a putative explanation of the categorization phenomenon. Here, we study three learning systems: single-layer perceptrons, support vector machines and Fisher linear discriminants. We highlight similarities and differences between these approaches. We find that support vector machines, a modern inductive-inference technique designed for small sample sizes, give the most convincing results. Knowledge extracted from the trained machine indicates that the phonetic percept of voicing is easily and directly recoverable from auditory (but not acoustic) representations.
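As a rough illustration of the boundary-finding idea in this abstract, the sketch below fits a linear support vector machine to a toy VOT-by-place continuum and reads the category boundary off the separating hyperplane. All feature values and the labelling rule are invented for illustration; the paper's classifiers are trained on simulated auditory-nerve firing patterns, which this sketch does not reproduce (Python, scikit-learn assumed available).

```python
# Minimal sketch (not the authors' model): a linear SVM on a synthetic
# VOT/place-of-articulation continuum, recovering the voiced/unvoiced
# boundary per place of articulation from the separating hyperplane.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stimuli: VOT in ms (0-60) crossed with three places of
# articulation (0 = bilabial, 1 = alveolar, 2 = velar).
vot = rng.uniform(0, 60, 600)
place = rng.integers(0, 3, 600)
X = np.column_stack([vot, place])

# Toy labelling rule: the category boundary shifts upward with place
# (roughly as reported for human listeners), plus perceptual noise.
boundary = 20 + 5 * place
y = (vot + rng.normal(0, 3, 600) > boundary).astype(int)  # 1 = unvoiced

clf = SVC(kernel="linear").fit(X, y)

# Implied VOT boundary for each place from the hyperplane
# w0*vot + w1*place + b = 0  =>  vot = -(w1*place + b) / w0.
w, b = clf.coef_[0], clf.intercept_[0]
for p in range(3):
    print(f"place {p}: boundary ~ {-(w[1] * p + b) / w[0]:.1f} ms")
```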

2.
Experiment 1 explored the impact of physically touching a virtual object on how realistic the virtual environment (VE) seemed to the user. Subjects in a "no touch" group picked up a 3D virtual image of a kitchen plate in a VE, using a traditional 3D wand. "See and touch" subjects physically picked up a virtual plate possessing solidity and weight, using a technique called tactile augmentation. Afterwards, subjects made predictions about the properties of other virtual objects they saw but did not interact with in the VE. See and touch subjects predicted these objects would be more solid, heavier, and more likely to obey gravity than the no touch group. In Experiment 2 (a pilot study), subjects physically bit a chocolate bar in one condition, and imagined biting a chocolate bar in another condition. Subjects rated the event more fun and realistic when allowed to physically bite the chocolate bar. Results of the two experiments converge with a growing literature showing the value of adding physical qualities to virtual objects. This study is the first to empirically demonstrate the effectiveness of tactile augmentation as a simple, safe, inexpensive technique with large freedom of motion for adding physical texture, force feedback cues, smell and taste to virtual objects. Examples of practical applications are discussed. Based in part on "Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments" by Hunter Hoffman, which appeared in the Proceedings of the IEEE Virtual Reality Annual International Symposium '98, Atlanta GA, pp 59–63. © 1998 IEEE.

3.
Environmental protection activities in industry have increased rapidly in number over recent years. Additionally, surveys of environmental activities have identified a change in the kinds of approaches used for environmental problem solving. A new paradigm, "Clean Technology", has been developed, which gradually seems to be replacing the "Clean-up Technology" paradigm and the older "Dilute and Disperse" paradigm. The new Clean Technology paradigm brings with it not only a new way of looking at environmental protection, but also a range of rules guiding the application of technology and the design of technological systems. This paper presents a few case studies highlighting and evaluating Clean Technology activities.

4.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. The algorithm for implementing the obviousness decision procedure follows directly from the definition.

5.
In this paper we discuss a view of the Machine Learning technique called Explanation-Based Learning (EBL) or Explanation-Based Generalization (EBG) as a process for the interpretation of vague concepts in logic-based models of law. The open-textured nature of legal terms is a well-known open problem in the building of knowledge-based legal systems. EBG is a technique which creates generalizations of given examples on the basis of background domain knowledge. We relate these two topics by considering EBG's domain knowledge as corresponding to statute law rules, and EBG's training example as corresponding to a precedent case. By making the interpretation of vague predicates guided by precedent cases, we use EBG as an effective process capable of creating a link between predicates appearing as open-textured concepts in law rules, and predicates appearing as ordinary-language wording for stating the facts of a case. Standard EBG algorithms do not change the deductive closure of the domain theory. In the legal context, this is only adequate when concepts vaguely defined in some law rules can be reformulated in terms of other concepts more precisely defined in other rules. We call "theory reformulation" the process adopted in this situation of complete knowledge. In many cases, however, statutory law leaves some concepts completely undefined. We then propose extensions to the EBG standard that deal with this situation of incomplete knowledge, and call "theory revision" the extended process. In order to fill in knowledge gaps we consider precedent cases supplemented by additional heuristic information. The extensions proposed treat heuristics represented by abstraction hierarchies with constraints and exceptions. In the paper we also precisely characterize the distinction between theory reformulation and theory revision by stating formal definitions and results, in the context of Logic Programming theory. We offer this proposal as a possible contribution to cross-fertilization between machine learning and legal reasoning methods.

6.
Modular Control and Coordination of Discrete-Event Systems
In the supervisory control of discrete-event systems based on controllable languages, a standard way to handle state explosion in large systems is by modular supervision: either horizontal (decentralized) or vertical (hierarchical). However, unless all the relevant languages are prefix-closed, a well-known potential hazard with modularity is that of conflict. In decentralized control, modular supervisors that are individually nonblocking for the plant may nevertheless produce blocking, or even deadlock, when operating on-line concurrently. Similarly, a high-level hierarchical supervisor that predicts nonblocking at its aggregated level of abstraction may inadvertently admit blocking in a low-level implementation. In two previous papers, the authors showed that nonblocking hierarchical control can be guaranteed provided high-level aggregation is sufficiently fine; the appropriate conditions were formalized in terms of control structures and observers. In this paper we apply the same technique to decentralized control, when specifications are imposed on local models of the global process; in this way we remove the restriction in some earlier work that the plant and specification (marked) languages be prefix-closed. We then solve a more general problem of coordination: namely, how to determine a high-level coordinator that forestalls conflict in a decentralized architecture when it potentially arises, but is otherwise minimally intrusive on low-level control action. Coordination thus combines both vertical and horizontal modularity. The example of a simple production process is provided as a practical illustration. We conclude with an appraisal of the computational effort involved.
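The notion of conflict described in this abstract can be made concrete with a small sketch: compose two modules by synchronous product and report the reachable product states from which no marked state is reachable. This is a generic nonblocking check, not the paper's coordinator-synthesis method, and the automata at the bottom are invented toy data (Python).

```python
# Minimal sketch: conflict detection for two modular supervisors via
# synchronous product + nonblocking (coreachability) check.
from collections import deque

class Automaton:
    def __init__(self, events, trans, init, marked):
        self.events = events      # alphabet (set of event names)
        self.trans = trans        # dict: (state, event) -> next state
        self.init = init
        self.marked = marked      # set of marked states

def _step(aut, state, event):
    """Component move: stay put on foreign events, None if disabled."""
    if event not in aut.events:
        return state
    return aut.trans.get((state, event))

def blocking_states(a, b):
    """Reachable product states that cannot reach a marked product
    state; a nonempty result means the two modules conflict."""
    alphabet = a.events | b.events
    init = (a.init, b.init)
    rev, reached, queue = {}, {init}, deque([init])
    while queue:                              # forward reachability
        s = queue.popleft()
        for e in alphabet:
            t = (_step(a, s[0], e), _step(b, s[1], e))
            if None in t:
                continue                      # disabled in a component
            rev.setdefault(t, set()).add(s)   # record reverse edge
            if t not in reached:
                reached.add(t)
                queue.append(t)
    marked = {s for s in reached if s[0] in a.marked and s[1] in b.marked}
    coreach, queue = set(marked), deque(marked)
    while queue:                              # backward from marked
        t = queue.popleft()
        for s in rev.get(t, ()):
            if s not in coreach:
                coreach.add(s)
                queue.append(s)
    return reached - coreach

# Toy conflict: after the shared event "go", each module waits for an
# event the other is not yet ready to execute -> unmarked deadlock.
A = Automaton({"go", "x", "y"}, {(0, "go"): 1, (1, "x"): 2, (2, "y"): 0}, 0, {0})
B = Automaton({"go", "x", "y"}, {(0, "go"): 1, (1, "y"): 2, (2, "x"): 0}, 0, {0})
print(blocking_states(A, B))   # {(1, 1)}: the composition blocks
```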

7.
When Physical Systems Realize Functions...
After briefly discussing the relevance of the notions "computation" and "implementation" for cognitive science, I summarize some of the problems that have been found in their most common interpretations. In particular, I argue that standard notions of computation together with a state-to-state correspondence view of implementation cannot overcome difficulties posed by Putnam's Realization Theorem and that, therefore, a different approach to implementation is required. The notion "realization of a function", developed out of physical theories, is then introduced as a replacement for the notional pair computation-implementation. After gradual refinement, taking practical constraints into account, this notion gives rise to the notion "digital system", which singles out physical systems that could actually be used, and possibly even built.

8.
Jacob L. Mey, AI & Society, 1996, 10(3-4): 226-232
Technology, in order to be human, needs to be informed by a reflection on what it is to be a tool in ways appropriate to humans. This involves both an instrumental, appropriating aspect ("I use this tool") and a limiting, appropriated one ("The tool uses me"). Cognitive Technology focuses on the ways the computer tool is used, and uses us. Using the tool on the world changes the way we think about the world, and the way the world appears to us; as an example, a simple technology (the leaf blower) and its effects on the human are discussed. Closing address at the First International Cognitive Technology Conference, Hong Kong, 24–29 August 1995.

9.
A quotation from Shakespeare's play King Lear, "I will teach you differences", encapsulates the spirit of this paper. A distinction is introduced between three different categories of knowledge: i) propositional knowledge, ii) skill or practical knowledge and iii) knowledge of familiarity. In the present debate on the Information Society, there is a clear tendency to overemphasise theoretical knowledge at the expense of practical knowledge, thereby completely ignoring the knowledge of familiarity. It is argued that different forms of theoretical knowledge are required for the design of current computer technology and the study of the practice of computer usage. The concept of dialogue and the concept of "To Follow a Rule" are therefore fundamental to the understanding of the practice of computer usage.

10.
A Writing Support Tool with Multiple Views
This paper describes both SuperText, a computer program designed to support productive expository writing processes among students at a distance teaching university, and its theoretical justification. Being able to write well is an important communication skill, and the writing process can help to build and clarify the writer's knowledge. Computers can support this by providing a medium to externalise and record the writer's understanding. Representations appropriate to this externalisation are uninstantiated idea labels, instantiated text units, and a variety of relationships between these items. SuperText uses these representations to support a range of writing styles. It provides several independent Views that represent the structure of the evolving document through expanding hierarchies, each with a variety of Presentations. Allied to these Views is a text work space providing access to a database of continuous text nodes. Taken together, these provide an ability to represent global and intermediate structures of the document well beyond that of conventional editors. These aspects were all rated highly by students participating in a series of field trials of SuperText.

11.
This paper aims to provide a basis for renewed talk about "use" in computing. Four current discourse arenas are described. The different intentions manifest in each arena are linked to failures in translation, as different terminologies cross disciplinary and national boundaries non-reflexively. Analysis of transnational use-discourse dynamics shows much miscommunication. Conflicts like that between the Scandinavian System Development School and the usability approach have less current salience. Renewing our talk about use is essential to a participatory politics of information technology and will lead to a clearer perception of the implications of letting new systems become primary media of social interaction.

12.
Summary. Recently, propositional modal logic of programs, called propositional dynamic logic, has been developed by many authors, following the ideas of Fischer and Ladner [1] and Pratt [12]. The main purpose of this paper is to present a Gentzen-type sequential formulation of this logic and to establish its semantical completeness with due regard to the sequential formulation as such. In a sense our sequential formulation might be regarded as a powerful tool for establishing the completeness theorem of already familiar axiomatizations of propositional dynamic logic such as those seen in Harel [4], Parikh [11] or Segerberg [15]. Indeed our method is powerful enough in the completeness proof to yield the desired structure directly, without making a detour through such intermediate constructs as a pseudomodel or a nonstandard structure, as can be seen in Parikh [11]. We also show that our sequential system of propositional dynamic logic does not enjoy the so-called cut-elimination theorem.
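For orientation, the familiar Hilbert-style axiomatizations the abstract refers to (Segerberg-style) are usually given by schemas like the following; the paper's Gentzen-type sequent rules, which are not reproduced here, reformulate these (the schema labels are conventional, not the paper's).

```latex
% Standard PDL axiom schemas (Segerberg-style), for reference only.
\begin{align*}
&\text{(K)}     && [\alpha](\varphi \to \psi) \to ([\alpha]\varphi \to [\alpha]\psi)\\
&\text{(;)}     && [\alpha;\beta]\varphi \leftrightarrow [\alpha][\beta]\varphi\\
&(\cup)         && [\alpha \cup \beta]\varphi \leftrightarrow [\alpha]\varphi \land [\beta]\varphi\\
&\text{(?)}     && [\psi?]\varphi \leftrightarrow (\psi \to \varphi)\\
&\text{(mix)}   && [\alpha^{*}]\varphi \leftrightarrow \varphi \land [\alpha][\alpha^{*}]\varphi\\
&\text{(ind)}   && \varphi \land [\alpha^{*}](\varphi \to [\alpha]\varphi) \to [\alpha^{*}]\varphi
\end{align*}
```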

13.
On the Morality of Artificial Agents
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on mind-less morality we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the Method of Abstraction for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The Method of Abstraction is explained in terms of an interface or set of features or observables at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the transition rules by which state is changed) at a given LoA. Morality may be thought of as a threshold defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary cost of this facility is the extension of the class of agents and moral agents to embrace AAs.
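The agenthood guidelines and the threshold view of morality lend themselves to a toy rendering. The sketch below is not Floridi and Sanders' formalism: the single observable, the threshold value and the transition rules are all invented, and it merely illustrates interactivity, autonomy, adaptability and a threshold check at one fixed LoA (Python).

```python
# Toy sketch of the three agenthood criteria plus a moral threshold on
# a single numeric observable, observed at one fixed LoA. All values
# and rules are invented for illustration.
class Agent:
    def __init__(self, level, rule):
        self.level = level          # the one observable at this LoA
        self.rule = rule            # transition rule: (level, stimulus) -> level

    def step(self, stimulus=None):
        self.level = self.rule(self.level, stimulus)
        return self.level

    def adapt(self, new_rule):      # adaptability: change the transition rule
        self.rule = new_rule

THRESHOLD = 0.0                      # morality as a threshold on the observable

def morally_good(trace):
    """Good if every observed state respects the threshold,
    evil if some state violates it."""
    return all(level >= THRESHOLD for level in trace)

# Interactivity: a stimulus changes the state.
# Autonomy: with no stimulus, the state still drifts.
rule = lambda level, stim: level + (stim if stim is not None else -0.5)
a = Agent(1.0, rule)
trace = [a.step(2.0), a.step(), a.step()]   # [3.0, 2.5, 2.0]
print(morally_good(trace))                   # True
a.adapt(lambda level, stim: level - 5)       # adaptation to a harmful rule
trace.append(a.step())                       # 2.0 - 5 = -3.0
print(morally_good(trace))                   # False: threshold violated
```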

14.
Reaching an agreement or understanding through argumentation is an important aspect of decision making in a virtual society, as it is in our real society. In this paper, we consider compromise (Aufheben) and concession (weaker Aufheben) as a simple form of Hegelian dialectical reasoning, which we take to be desiderata for deliberative or cognitive agents. We then propose an argument-based agent system that allows for issue modification among the agents concerned during argumentation, and for reaching an agreement or understanding through argumentation with this dialectical reasoning capability. We illustrate its potential usefulness by showing applications to seller and buyer agents and traveling salesman agents in e-commerce.

15.
Unification algorithms have been constructed for semigroups and for commutative semigroups. This paper considers the intermediate case of partially commutative semigroups. We introduce two classes of such semigroups and justify their use. We present an equation-solving algorithm for any member of the first class. This algorithm is relative to having an algorithm to determine all non-negative solutions of a certain class of Diophantine equations of degree 2. The difficulties arising when attempting to solve equations in members of the second class are discussed, and we present arguments that strongly suggest that unification in these semigroups is undecidable.

16.
In terms of Groenendijk and Stokhof's (1984) formalization of exhaustive interpretation, many conversational implicatures can be accounted for. In this paper we justify and generalize this approach. Our justification proceeds by relating their account, via Halpern and Moses' (1984) non-monotonic theory of "only knowing", to the Gricean maxim of Quality and the first sub-maxim of Quantity. The approach of Groenendijk and Stokhof (1984) is generalized such that it can also account for implicatures that are triggered in subclauses not entailed by the whole complex sentence.

17.
If one interprets the ecology of technology as the study of technology in relation to its environment, there are two important levels at which this study can be made. It is possible to consider the different environments in Europe, Japan and the USA, and look for the different technological influences which accompany them. At a more general level, one can look at those factors which are common to all three environments, and which are associated with generic similarities in the technology of all three areas. The paper considers both aspects as they have been experienced in Europe in some attempts to develop a human-centred technology. Paper given at the Conference on the Ecology of Science and Technology, Japan Science Foundation, Tokyo, 1992.

18.
Summary. Three elegant proofs and an efficient algorithm are derived. The derivations evolve smoothly from the choice to apply mathematical induction, the pattern of reasoning that has been chosen as the Leitmotiv for this small collection. The last proof is a by-product of the algorithm.

19.
In this paper we present a fragment of (positive) relevant logic which can be computed by a straightforward extension to SLD resolution while allowing full nesting of implications. These two requirements lead quite naturally to a fragment in which the major feature is an ambiguous user-level conjunction which is interpreted intensionally in query positions and extensionally in assertion positions. These restrictions allow a simple and efficient extension to SLD resolution (and more particularly, the PROLOG evaluation scheme) with quite minor loss in expressive power.

20.
A dialectical model of assessing conflicting arguments in legal reasoning
Inspired by legal reasoning, this paper presents a formal framework for assessing conflicting arguments. Its use is illustrated with applications to realistic legal examples, and the potential for implementation is discussed. The framework has the form of a logical system for defeasible argumentation. Its language, which is of a logic-programming-like nature, has both weak and explicit negation, and conflicts between arguments are decided with the help of priorities on the rules. An important feature of the system is that these priorities are not fixed, but are themselves defeasibly derived as conclusions within the system. Thus debates on the choice between conflicting arguments can also be modelled. The proof theory of the system is stated in dialectical style, where a proof takes the form of a dialogue between a proponent and an opponent of an argument. An argument is shown to be justified if the proponent can make the opponent run out of moves in whatever way the opponent attacks. Despite this dialectical form, the system reflects a declarative, or relational, approach to modelling legal argument. A basic assumption of this paper is that this approach complements two other lines of research in AI and Law: investigations of precedent-based reasoning and the development of procedural, or dialectical, models of legal argument. Supported by a research fellowship of the Royal Netherlands Academy of Arts and Sciences, and by Esprit WG 8319 Modelage.
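The dialogue-game reading of justification in this abstract ("the proponent can make the opponent run out of moves") can be illustrated for a bare attack relation. This is a drastic simplification of the paper's system: it ignores rule priorities and weak negation, assumes an acyclic attack graph so the recursion terminates, and the arguments and attacks are invented (Python).

```python
# Minimal sketch of the dialogue-game idea: an argument is justified
# iff every opponent attack can be answered by a justified
# counterattack. Assumes an acyclic attack relation.
ATTACKS = {                 # attacker -> set of arguments it attacks
    "B": {"A"},             # opponent move B attacks A
    "C": {"B"},             # proponent counterattack C defeats B
    "D": {"C"},
    "E": {"D"},
}

def attackers(arg):
    return {a for a, targets in ATTACKS.items() if arg in targets}

def justified(arg):
    """arg is justified iff every attacker is itself defeated,
    i.e. has at least one justified attacker."""
    return all(any(justified(c) for c in attackers(b))
               for b in attackers(arg))

for x in "ABCDE":
    print(x, justified(x))
# Following the chain: E is justified (no attackers), so D is defeated,
# so C is justified, so B is defeated, so A is justified.
```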
