Similar Documents
20 similar documents found (search time: 362 ms)
1.
In this paper we use a free-fall approach to develop a high-level control/command strategy for a bipedal robot called BIPMAN, based on a multi-chain mechanical model with a general control architecture. The strategy is composed of three levels: the Legs and Arms level, the Coordinator level, and the Supervisor level. The Coordinator level is devoted to controlling leg movements and to ensuring the stability of the whole biped. Perturbation effects threaten the robot's equilibrium and can only be compensated for using a dynamic control strategy, one based on dynamic stability studies with center-of-mass acceleration control and a force distribution over each leg and arm. Free fall in the gravity field is assumed to be deeply involved in human locomotor control. From studies of this specific motion through a direct dynamic model, the notion of equilibrium classes is introduced; these classes define time intervals in which the biped is able to maintain its posture. This notion is used to define a reconfigurable high-level control of the robot.
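The ideas of static equilibrium and of a free-fall time budget lend themselves to a small numerical illustration. The sketch below is not the BIPMAN controller; all masses, link geometry, foot-print coordinates, and the 5 cm "drop budget" are invented values. It checks whether the ground projection of the center of mass lies inside the support polygon, and estimates how long a freely falling center of mass leaves for corrective action, in the spirit of the paper's equilibrium classes.

```python
import numpy as np

def com_ground_projection(masses, positions):
    """Ground (x, y) projection and height of the center of mass.
    masses: (n,) link masses; positions: (n, 3) link CoM coordinates."""
    com = np.average(positions, axis=0, weights=masses)
    return com[:2], com[2]

def in_support_polygon(p, poly):
    """True if 2-D point p lies inside the convex support polygon (CCW vertices)."""
    for i in range(len(poly)):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % len(poly)]
        # point must be on the left of every edge of a CCW polygon
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < 0:
            return False
    return True

def reaction_window(drop_budget, g=9.81):
    """Time before a freely falling CoM drops by drop_budget meters."""
    return (2.0 * drop_budget / g) ** 0.5

# Toy three-link biped: torso plus two legs (made-up values)
masses = np.array([20.0, 15.0, 15.0])
positions = np.array([[0.0, 0.0, 0.9], [0.05, 0.1, 0.45], [0.05, -0.1, 0.45]])
xy, z = com_ground_projection(masses, positions)
support = [(-0.10, -0.15), (0.15, -0.15), (0.15, 0.15), (-0.10, 0.15)]
print("statically stable:", in_support_polygon(xy, support))
print("reaction window for a 5 cm drop: %.3f s" % reaction_window(0.05))
```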

2.
Unification algorithms have been constructed for semigroups and for commutative semigroups. This paper considers the intermediate case of partially commutative semigroups. We introduce two classes of such semigroups, one of them denoted N, and justify their use. We present an equation-solving algorithm for any member of the class N. This algorithm is relative to having an algorithm that determines all non-negative solutions of a certain class of Diophantine equations of degree 2. The difficulties arising when attempting to solve equations in members of the second class are discussed, and we present arguments that strongly suggest that unification in these semigroups is undecidable.
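The algorithm is stated relative to a solver for nonnegative integer solutions of certain degree-2 Diophantine equations. As a point of reference only, here is a brute-force bounded enumeration of such solutions; the dedicated solver the paper presupposes would exploit the structure of its equation class, and `poly`, `bound`, and the sample equation below are illustrative.

```python
from itertools import product

def nonneg_solutions(poly, n_vars, bound):
    """All nonnegative integer solutions of poly(x) = 0 with each x_i <= bound,
    found by exhaustive search -- only practical for small bounds."""
    return [x for x in product(range(bound + 1), repeat=n_vars) if poly(x) == 0]

# Example degree-2 equation: x*y - 2*x - 3*y = 0
print(nonneg_solutions(lambda v: v[0] * v[1] - 2 * v[0] - 3 * v[1], 2, 20))
# -> [(0, 0), (4, 8), (5, 5), (6, 4), (9, 3)]
```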

3.
A nonlinear stochastic integral equation of the Hammerstein type, in the form x(t; ω) = h(t, x(t; ω)) + ∫_S k(t, s; ω) f(s, x(s; ω); ω) dμ(s), is studied, where t ∈ S, a measure space with certain properties, ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P), and the integral is a Bochner integral. A random solution of the equation is defined to be an almost surely continuous m-dimensional vector-valued stochastic process on S which is bounded with probability one for each t ∈ S and which satisfies the equation almost surely. Several theorems are proved giving conditions under which a unique random solution exists. AMS (MOS) subject classifications (1970): Primary: 60H20, 45G99. Secondary: 60G99.
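For readers who want to experiment, the existence proofs of this kind typically rest on a contraction argument, which suggests Picard (successive-approximation) iteration. Below is a minimal sketch for a deterministic analogue of the Hammerstein equation; the kernels h, k, f are invented and chosen so the fixed-point map is a contraction, and the stochastic version would additionally sample ω.

```python
import numpy as np

# Picard iteration for a deterministic Hammerstein equation
#   x(t) = h(t, x(t)) + integral_0^1 k(t, s) f(s, x(s)) ds
h = lambda t, x: 0.1 * np.cos(t) + 0.1 * x
k = lambda t, s: 0.5 * np.exp(-abs(t - s))
f = lambda s, x: np.sin(x)

ts = np.linspace(0.0, 1.0, 201)
w = np.gradient(ts)                      # crude quadrature weights (~grid spacing)
K = k(ts[:, None], ts[None, :])

x = np.zeros_like(ts)
for _ in range(50):
    x_new = h(ts, x) + K @ (w * f(ts, x))
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new
print("fixed point residual:",
      np.max(np.abs(x - (h(ts, x) + K @ (w * f(ts, x))))))
```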

4.
Pushing Convertible Constraints in Frequent Itemset Mining
Recent work has highlighted the importance of the constraint-based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. Constraint-pushing techniques have been developed for mining frequent patterns and associations with antimonotonic, monotonic, and succinct constraints. In this paper, we study constraints which cannot be handled with existing theory and techniques in frequent pattern mining. For example, avg(S) θ v, median(S) θ v, and sum(S) θ v, where S can contain items of arbitrary values, θ ∈ {<, ≤, ≥, >}, and v is a real number, are customarily regarded as tough constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed.
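To see why a constraint like avg(S) ≥ v is "convertible", order items by descending value: along that order the running average of prefixes only decreases, so the constraint behaves anti-monotonically and pruning becomes safe. A minimal sketch of just this pruning idea (toy item values, not the paper's FP-growth integration):

```python
def longest_prefix_satisfying_avg(values, v):
    """Order items by descending value; the running average of prefixes is then
    non-increasing, so avg(S) >= v acts anti-monotonically along this order and
    we may stop at the first violating prefix."""
    order = sorted(values, key=lambda i: -values[i])
    kept, total = [], 0.0
    for n, item in enumerate(order, start=1):
        total += values[item]
        if total / n < v:
            break          # every longer prefix also violates avg >= v
        kept.append(item)
    return kept

values = {"a": 9.0, "b": 7.0, "c": 4.0, "d": 1.0}
print(longest_prefix_satisfying_avg(values, v=6.0))   # ['a', 'b', 'c']
```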

5.
In terms of Groenendijk and Stokhof's (1984) formalization of exhaustive interpretation, many conversational implicatures can be accounted for. In this paper we justify and generalize this approach. Our justification proceeds by relating their account, via Halpern and Moses' (1984) non-monotonic theory of "only knowing", to the Gricean maxim of Quality and the first sub-maxim of Quantity. The approach of Groenendijk and Stokhof (1984) is generalized so that it can also account for implicatures that are triggered in subclauses not entailed by the whole complex sentence.
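Exhaustive interpretation in this style can be emulated over a finite domain by keeping only the minimal extensions of the answered predicate. A toy sketch under that reading (domain and individuals are invented; the paper's generalization to non-entailed subclauses is not modeled here):

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def exhaustify(mentioned, domain):
    """Extensions of the predicate compatible with the answer are the supersets
    of the mentioned individuals; the exhaustive reading keeps the minimal ones."""
    models = [set(s) for s in powerset(domain) if set(mentioned) <= set(s)]
    return [m for m in models if not any(n < m for n in models)]

# "Who came?" -- "John."  Exhaustively interpreted: only John came.
print(exhaustify({"john"}, {"john", "mary", "bill"}))   # [{'john'}]
```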

6.
Experiment 1 explored the impact of physically touching a virtual object on how realistic the virtual environment (VE) seemed to the user. Subjects in a "no touch" group picked up a 3D virtual image of a kitchen plate in a VE, using a traditional 3D wand. "See and touch" subjects physically picked up a virtual plate possessing solidity and weight, using a technique called tactile augmentation. Afterwards, subjects made predictions about the properties of other virtual objects they saw but did not interact with in the VE. "See and touch" subjects predicted these objects would be more solid, heavier, and more likely to obey gravity than did the "no touch" group. In Experiment 2 (a pilot study), subjects physically bit a chocolate bar in one condition and imagined biting a chocolate bar in another condition. Subjects rated the event as more fun and realistic when allowed to physically bite the chocolate bar. Results of the two experiments converge with a growing literature showing the value of adding physical qualities to virtual objects. This study is the first to empirically demonstrate the effectiveness of tactile augmentation as a simple, safe, inexpensive technique, with a large freedom of motion, for adding physical texture, force-feedback cues, smell, and taste to virtual objects. Examples of practical applications are discussed. Based in part on "Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments" by Hunter Hoffman, which appeared in the Proceedings of the IEEE Virtual Reality Annual International Symposium '98, Atlanta, GA, pp. 59–63. © 1998 IEEE.

7.
This study demonstrates an objective method used to evaluate the enhanceability of commercial software. It examines the relationship between enhancement and repair, and suggests that enhancement be considered when developing formal models of defect cause. Another definition of defect-prone software is presented that concentrates attention on software that requires unusually high repair considering the magnitude of planned enhancement.

8.
We study the approximation of the smallest eigenvalue of a Sturm–Liouville problem in the classical and quantum settings. We consider a univariate Sturm–Liouville eigenvalue problem with a nonnegative function q from the class C²([0,1]) and study the minimal number n(ε) of function evaluations or queries that are necessary to compute an ε-approximation of the smallest eigenvalue. We prove that n(ε) = Θ(ε^(-1/2)) in the (deterministic) worst-case setting, and n(ε) = Θ(ε^(-2/5)) in the randomized setting. The quantum setting offers a polynomial speedup with bit queries and an exponential speedup with power queries. Bit queries are similar to the oracle calls used in Grover's algorithm, appropriately extended to real-valued functions. Power queries are used for a number of problems, including phase estimation. They are obtained by considering the propagator of the discretized system at a number of different time moments. They allow us to use powers of the unitary matrix exp((1/2)iM), where M is an n×n matrix obtained from the standard discretization of the Sturm–Liouville differential operator. The quantum implementation of power queries by a number of elementary quantum gates that is polylog in n is an open issue. In particular, we show how to compute an ε-approximation with probability 3/4 using n(ε) = Θ(ε^(-1/3)) bit queries. For power queries, we use the phase estimation algorithm as a basic tool and present an algorithm that solves the problem using n(ε) = Θ(log ε^(-1)) power queries, log² ε^(-1) quantum operations, and (3/2) log ε^(-1) quantum bits. We also prove that the minimal number of qubits needed for this problem (regardless of the kind of queries used) is at least roughly (1/2) log ε^(-1). The lower bound on the number of quantum queries is proven in Bessen (in preparation). We derive a formula that relates the Sturm–Liouville eigenvalue problem to a weighted integration problem. Many computational problems may be recast as this weighted integration problem, which allows us to solve them with a polylog number of power queries. Examples include Grover's search, the approximation of the Boolean mean, NP-complete problems, and many multivariate integration problems. In this paper we only provide the relationship formula; the implications are covered in a forthcoming paper (in preparation). PACS: 03.67.Lx, 02.60.-x.
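The classical side of the problem is easy to reproduce numerically: discretize -u'' + q(x)u = λu on [0,1] with zero boundary conditions into the standard tridiagonal matrix (the M of the abstract) and take its smallest eigenvalue. A sketch along those lines (grid size n is an arbitrary choice; the quantum query models are not simulated here):

```python
import numpy as np

def smallest_eigenvalue(q, n=500):
    """Smallest eigenvalue of -u'' + q(x) u = lambda * u on [0,1], u(0)=u(1)=0,
    via the standard second-order finite-difference discretization."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    M = (np.diag(np.full(n, 2.0 / h**2) + q(x))
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.eigvalsh(M)[0]    # eigenvalues in ascending order

# q = 0 gives the known smallest eigenvalue pi^2 ~ 9.8696
print(smallest_eigenvalue(lambda x: np.zeros_like(x)))
```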

9.
This paper explores some aspects of the algebraic theory of mathematical morphology from the viewpoints of minimax algebra and translation-invariant systems and extends them to a more general algebraic structure that includes generalized Minkowski operators and lattice fuzzy image operators. This algebraic structure is based on signal spaces that combine the sup-inf lattice structure with a scalar semi-ring arithmetic that possesses generalized additions and ⋆-multiplications. A unified analysis is developed for: (i) representations of translation-invariant operators compatible with these generalized algebraic structures as nonlinear sup-⋆ convolutions, and (ii) kernel representations of increasing translation-invariant operators as suprema of erosion-like nonlinear convolutions by kernel elements. The theoretical results of this paper develop foundations for unifying large classes of nonlinear translation-invariant image and signal processing systems of the max or min type. The envisioned applications lie in the broad intersection of mathematical morphology, minimax signal algebra and fuzzy logic. Petros Maragos received the Diploma degree in electrical engineering from the National Technical University of Athens in 1980, and the M.Sc.E.E. and Ph.D. degrees from Georgia Tech, Atlanta, USA, in 1982 and 1985. In 1985 he joined the faculty of the Division of Applied Sciences at Harvard University, Cambridge, Massachusetts, where he worked for 8 years as professor of electrical engineering, affiliated with the interdisciplinary Harvard Robotics Lab. He has also been a consultant to several industry research groups, including Xerox's research on document image analysis. In 1993, he joined the faculty of the School of Electrical and Computer Engineering at Georgia Tech. During parts of 1996-98 he was on academic leave working as a senior researcher at the Institute for Language and Speech Processing in Athens. In 1998, he joined the faculty of the National Technical University of Athens, where he is currently working as professor of electrical and computer engineering. His current research and teaching interests include the general areas of signal processing, systems theory, pattern recognition, and their applications to image processing and computer vision, and computer speech processing and recognition. He has served as associate editor for the IEEE Trans. on Acoustics, Speech, and Signal Processing, editorial board member for the Journal of Visual Communications and Image Representation, and guest editor for the IEEE Trans. on Image Processing; general chairman for the 1992 SPIE Conference on Visual Communications and Image Processing, and co-chairman for the 1996 International Symposium on Mathematical Morphology; member of two IEEE DSP committees; and president of the International Society for Mathematical Morphology. Dr. Maragos' research work has received several awards, including: a 1987 US National Science Foundation Presidential Young Investigator Award; the 1988 IEEE Signal Processing Society's Paper Award for the paper "Morphological Filters"; the 1994 IEEE Signal Processing Society's Senior Award and the 1995 IEEE Baker Award for the paper "Energy Separation in Signal Modulations with Application to Speech Analysis"; and the 1996 Pattern Recognition Society's Honorable Mention Award for the paper "Min-Max Classifiers". In 1995, he was elected Fellow of IEEE for his contributions to the theory and applications of nonlinear signal processing systems.
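A concrete instance of the generalized sup-⋆ convolutions discussed here is the max-plus case, where ⋆ is ordinary addition and a supremum replaces the sum; for finite signals this is grayscale morphological dilation. A small sketch for 1-D signals (signal and structuring-function values are made up):

```python
import numpy as np

def sup_plus_convolution(f, g):
    """(f # g)(x) = sup_y [ f(y) + g(x - y) ]: the max-plus instance of a
    sup-* convolution, for finite 1-D signals f and g."""
    n, m = len(f), len(g)
    out = np.full(n + m - 1, -np.inf)
    for x in range(n + m - 1):
        for y in range(max(0, x - m + 1), min(n, x + 1)):
            out[x] = max(out[x], f[y] + g[x - y])
    return out

f = np.array([0.0, 2.0, 1.0, 3.0])
g = np.array([0.0, 1.0, 0.0])          # a small structuring function
print(sup_plus_convolution(f, g))      # grayscale dilation of f by g
```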

10.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrence or of equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty difference set (upper approximation minus lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably capped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^h(A) log[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut of A, defined by the Lebesgue integral of its characteristic function.
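The U-uncertainty formula at the end is straightforward to evaluate numerically. The sketch below does so for a triangular fuzzy number by approximating α-cut measures on a grid; the fuzzy set, its support, the grid resolution, and the base-2 logarithm (a common convention for U-uncertainty) are illustrative choices.

```python
import numpy as np

def u_uncertainty(membership, support, levels=1000):
    """U(A) = (1/h(A)) * integral_0^h(A) log2(1 + mu(A_alpha)) d(alpha),
    with mu(A_alpha) the measure (length) of the alpha-cut, estimated on a
    fine grid over `support` and a midpoint rule over alpha."""
    xs = np.linspace(*support, 20001)
    mu = membership(xs)
    h = mu.max()
    dx = xs[1] - xs[0]
    alphas = np.linspace(0.0, h, levels, endpoint=False) + h / (2 * levels)
    cut_lengths = np.array([(mu >= a).sum() * dx for a in alphas])
    return np.mean(np.log2(1.0 + cut_lengths))   # mean over alpha = (1/h)*integral

# Triangular fuzzy number "about 5" on [3, 7], peak membership 1 at x = 5
tri = lambda x: np.clip(1.0 - np.abs(x - 5.0) / 2.0, 0.0, 1.0)
print(u_uncertainty(tri, (3.0, 7.0)))   # ~1.46 for this example
```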

11.
This paper aims to provide a basis for renewed talk about use in computing. Four current discourse arenas are described. Different intentions manifested in each arena are linked to failures in translation, as different terminologies cross disciplinary and national boundaries non-reflexively. Analysis of transnational use-discourse dynamics shows much miscommunication. Conflicts like that between the Scandinavian System Development School and the usability approach have less current salience. Renewing our talk about use is essential to a participatory politics of information technology and will lead to a clearer perception of the implications of letting new systems become primary media of social interaction.

12.
In this paper we deepen Mundici's analysis of the reducibility of the decision problem from infinite-valued Łukasiewicz logic Ł to a suitable m-valued Łukasiewicz logic Łm, where m only depends on the length of the formulas to be proved. Using geometrical arguments we find a better upper bound for the least integer m such that a formula is valid in Ł if and only if it is also valid in Łm. We also reduce the notion of logical consequence in Ł to the same notion in a suitable finite set of finite-valued Łukasiewicz logics. Finally, we define an analytic and internal sequent calculus for infinite-valued Łukasiewicz logic.
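The finite-valued reduction is easy to experiment with: the Łukasiewicz connectives are truth-functional over [0,1], and validity in Łm is a finite check over the truth values k/(m-1). A brute-force sketch (the sample formula is a standard Łukasiewicz validity, not one of the paper's bounds):

```python
from itertools import product
from fractions import Fraction

# Lukasiewicz implication: x -> y = min(1, 1 - x + y)
imp = lambda x, y: min(Fraction(1), 1 - x + y)

def valid_in_Lm(formula, n_vars, m):
    """Brute-force validity in the m-valued logic Lm, truth values k/(m-1)."""
    vals = [Fraction(k, m - 1) for k in range(m)]
    return all(formula(*v) == 1 for v in product(vals, repeat=n_vars))

# ((p -> q) -> q) equals max(p, q) in Lukasiewicz logic, so this is valid:
f = lambda p, q: imp(imp(imp(p, q), q), max(p, q))
for m in (2, 3, 5, 11):
    print(m, valid_in_Lm(f, 2, m))     # True at every m for this tautology
```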

13.
This position paper argues that extending the CSP model to a richer set of tasks, such as constraint optimization, probabilistic inference, and decision-theoretic tasks, can be done within a unifying framework called bucket elimination. The framework allows uniform hybrids that combine elimination and conditioning, guided by the problem's structure, and makes explicit the tradeoffs between space and time and between time and accuracy.
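A minimal generic routine makes the unification claim concrete: the same elimination loop answers a probabilistic query with sum-marginalization and an optimization query with max-marginalization. The sketch below is a toy version (the factor representation and the two-variable example are invented; real bucket elimination orders buckets along a fixed variable ordering and exploits structure):

```python
from itertools import product

def bucket_elimination(factors, domains, order, marginalize=sum):
    """Generic bucket elimination.  A factor is (scope, table): scope is a tuple
    of variable names, table maps assignment tuples (in scope order) to numbers.
    marginalize=sum -> probabilistic inference; marginalize=max -> optimization."""
    pool = list(factors)
    for var in order:
        bucket = [f for f in pool if var in f[0]]          # factors mentioning var
        pool = [f for f in pool if var not in f[0]]
        scope = tuple(sorted({v for s, _ in bucket for v in s if v != var}))
        table = {}
        for assign in product(*(domains[v] for v in scope)):
            ctx = dict(zip(scope, assign))
            vals = []
            for x in domains[var]:
                ctx[var] = x
                w = 1.0
                for s, t in bucket:
                    w *= t[tuple(ctx[v] for v in s)]
                vals.append(w)
            table[assign] = marginalize(vals)              # eliminate var
        pool.append((scope, table))
    result = 1.0
    for _, t in pool:                                      # only constants remain
        result *= t[()]
    return result

domains = {"A": [0, 1], "B": [0, 1]}
fA = (("A",), {(0,): 0.4, (1,): 0.6})
gAB = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(bucket_elimination([fA, gAB], domains, ["B", "A"]))                   # 1.0
print(bucket_elimination([fA, gAB], domains, ["B", "A"], marginalize=max))  # 0.48
```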

14.
We formalize natural deduction for first-order logic in the proof assistant Coq, using de Bruijn indices for variable binding. The main judgment we model is of the form Γ ⊢ d : φ, stating that d is a proof term of formula φ under hypotheses Γ; it can be viewed as a typing relation by the Curry–Howard isomorphism. This relation is proved sound with respect to Coq's native logic and is amenable to the manipulation of formulas and of derivations. As an illustration, we define a reduction relation on proof terms with permutative conversions and prove the property of subject reduction.
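De Bruijn indices trade named binders for arithmetic on indices; everything hinges on two operations, shifting (lifting free indices) and substitution. A Python sketch on untyped lambda terms (the Coq development formalizes proof terms and formulas, but the index bookkeeping is the same in spirit):

```python
# De Bruijn terms as nested tuples: ("var", k) | ("lam", body) | ("app", f, a)

def shift(t, d, cutoff=0):
    """Add d to every free variable index (>= cutoff) in t."""
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t (capture-avoiding by construction)."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """One beta step: (lam body) a -> body[0 := a], with index bookkeeping."""
    _, (_, body), a = app
    return shift(subst(body, 0, shift(a, 1)), -1)

# (\x. \y. x) z  ->  \y. z   (z's index is lifted under the remaining binder)
z = ("var", 0)
k = ("lam", ("lam", ("var", 1)))
print(beta(("app", k, z)))   # ("lam", ("var", 1))
```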

15.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. The algorithm for implementing the obviousness decision procedure follows directly from the definition.
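The paper's point is that "obvious" can be pinned down as a restricted, efficiently decidable class of inferences. As a loose illustration only (this is not the paper's definition), one could take an inference to be obvious when unit propagation alone refutes the negated conclusion, giving a search-free decision procedure in the propositional case:

```python
def obvious(premises, conclusion):
    """Toy stand-in for an obviousness check (propositional clauses only):
    'obvious' = unit propagation alone derives the conclusion.
    Clauses are sets of signed literals, e.g. {"p", "-q"}."""
    neg = lambda l: l[1:] if l.startswith("-") else "-" + l
    clauses = {frozenset(c) for c in premises} | {frozenset({neg(conclusion)})}
    units = {next(iter(c)) for c in clauses if len(c) == 1}
    changed = True
    while changed:
        changed = False
        for c in list(clauses):
            reduced = frozenset(l for l in c if neg(l) not in units)
            if not reduced:
                return True                    # refuted the negated conclusion
            if len(reduced) == 1 and next(iter(reduced)) not in units:
                units.add(next(iter(reduced)))
                changed = True
    return False

# p, p -> q (clause {-p, q}) |- q counts as obvious; p |- q does not
print(obvious([{"p"}, {"-p", "q"}], "q"))   # True
print(obvious([{"p"}], "q"))                # False
```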

16.
In this paper we discuss a view of the Machine Learning technique called Explanation-Based Learning (EBL) or Explanation-Based Generalization (EBG) as a process for the interpretation of vague concepts in logic-based models of law. The open-textured nature of legal terms is a well-known open problem in the building of knowledge-based legal systems. EBG is a technique which creates generalizations of given examples on the basis of background domain knowledge. We relate these two topics by considering EBG's domain knowledge as corresponding to statute law rules, and EBG's training example as corresponding to a precedent case. By letting the interpretation of vague predicates be guided by precedent cases, we use EBG as an effective process capable of creating a link between predicates appearing as open-textured concepts in law rules and predicates appearing as ordinary-language wording for stating the facts of a case. Standard EBG algorithms do not change the deductive closure of the domain theory. In the legal context, this is only adequate when concepts vaguely defined in some law rules can be reformulated in terms of other concepts more precisely defined in other rules. We call the process adopted in this situation of complete knowledge "theory reformulation". In many cases, however, statutory law leaves some concepts completely undefined. We then propose extensions to the EBG standard that deal with this situation of incomplete knowledge, and call the extended process "theory revision". In order to fill in knowledge gaps we consider precedent cases supplemented by additional heuristic information. The proposed extensions treat heuristics represented by abstraction hierarchies with constraints and exceptions. In the paper we also precisely characterize the distinction between theory reformulation and theory revision by stating formal definitions and results in the context of logic programming theory. We offer this proposal as a possible contribution to cross-fertilization between machine learning and legal reasoning methods.

17.
Optimal shape design problems for an elastic body made from a physically nonlinear material are presented. Sensitivity analysis is done by differentiating the discrete equations of equilibrium. Numerical examples are included.

Notation:
- U_ad: set of admissible continuous design parameters
- U_h^ad: set of admissible discrete design parameters
- α: function from U_ad defining the shape of the body
- α_h: function from U_h^ad defining the approximated shape of the body
- vector of nodal values of α_h
- {α_n}: sequence of functions tending to α
- Ω(α): domain defined by α
- K: bulk modulus
- μ: shear modulus
- penalty parameter for the contact condition
- V(α): space of virtual displacements in Ω(α)
- V_h(α_h): finite element approximation of V(α)
- J: cost functional
- J_h: discretized cost functional
- algebraic form of J_h
- σ(u): stress tensor
- e(u): strain tensor
- K: stiffness matrix
- f: force vector
- b(q): term arising from nonlinear boundary conditions
- q: vector of nodal degrees of freedom
- p: vector of adjoint state variables
- J: Jacobian of the isoparametric mapping
- |J|: determinant of J
- N: vector of shape function values on the parent element
- L: matrix of shape function derivatives on the parent element
- G: matrix of Cartesian derivatives of shape functions
- X: matrix of nodal coordinates of an element
- D: matrix of elastic coefficients
- B: strain-displacement matrix
- Γ_P: part of the boundary where tractions are prescribed
- Γ_u: part of the boundary where displacements are prescribed
- variable part of the boundary
- strain invariant

18.
In the world of OTIS, an online Internet School for occupational therapists, students from four European countries were encouraged to work collaboratively through problem-based learning by interacting with each other in a virtual semi-immersive environment. This paper describes, often in their own words, the experience of European occupational therapy students working together across national and cultural boundaries. Collaboration and teamwork were facilitated exclusively through an online environment, since the students never met each other physically during the OTIS pilot course. The aim of the paper is to explore the observations that (1) there was little interaction between students from different tutorial groups and (2) virtual teamwork developed in each of the cross-cultural tutorial groups. Synchronous data from the students was captured during tutorial sessions and peer-booked meetings and analyzed using the qualitative constructs of immersion, presence and reflection in learning. The findings indicate that immersion was experienced only to a certain extent. However, students found both presence and shared presence, within their tutorial groups, to help collaboration and teamwork. Other evidence suggests that communities of interest were established. Further study is proposed to support group work in an online learning environment. It is possible to conclude that collaborative systems can be designed which encourage students to build trust and teamwork in a cross-cultural online learning environment. This revised version was published online in March 2005 with corrections to the cover date. Funded by the European Union through the TEN-Telecom programme.

19.
This paper describes a unified variational theory for design sensitivity analysis of the nonlinear dynamic response of structural and mechanical systems, covering shape, nonshape, material and mechanical property selection, as well as control problems. The concept of an adjoint system, the principle of virtual work, and a Lagrangian–Eulerian formulation to describe the deformations and the design variations are used to develop a unified viewpoint. A general formula for design sensitivity analysis is derived and interpreted for the usual performance functionals. Analytical examples are utilized to demonstrate the use of the theory and to give insights for application to more complex problems that must be treated numerically.

Derivatives: the comma notation for partial derivatives is used, i.e. G,u = ∂G/∂u. An upper dot represents the material time derivative, i.e. ü = ∂²u/∂t². A prime implies a derivative with respect to the time measured in the reference time domain, i.e. u′ = du/dτ.
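A static, discrete analogue shows the adjoint-system idea in a few lines: for equilibrium r(q, b) = K(b)q - f = 0 and a cost J(q), solve the adjoint system (∂r/∂q)ᵀ p = ∂J/∂q once, then dJ/db = -pᵀ(∂r/∂b). The matrices below are made-up toy data; the paper's dynamic, variational setting adds time integration and the Lagrangian–Eulerian terms.

```python
import numpy as np

# Toy adjoint sensitivity: K(b) q = f, cost J(q) = q^T q, design scalar b.
def sensitivity(b):
    K = np.array([[2.0 * b, -b], [-b, b]])   # stiffness depends on design b
    f = np.array([0.0, 1.0])
    q = np.linalg.solve(K, f)                # state
    dJdq = 2.0 * q
    p = np.linalg.solve(K.T, dJdq)           # adjoint state
    dKdb = np.array([[2.0, -1.0], [-1.0, 1.0]])
    return q @ q, -p @ (dKdb @ q)            # J and dJ/db

J, g = sensitivity(b=1.5)
eps = 1e-6                                   # check against a finite difference
fd = (sensitivity(1.5 + eps)[0] - sensitivity(1.5 - eps)[0]) / (2 * eps)
print(g, fd)                                 # the two gradients should agree
```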

20.
This paper is devoted to the development of a knowledge-based system (KBS) called Artificial Memory. The goal of this KBS is to solve multicriteria job-shop scheduling problems. Since job-shop scheduling problems are NP-hard, it is extremely difficult to obtain optimal solutions for industrial problems. Thus, a host of heuristic algorithms, most of which are based on priority rules, have been proposed in the literature. The efficiency of these algorithms strongly depends on the criteria to be optimized as well as on the values of the parameters associated with the particular instance of the scheduling problem. The basic hypothesis of the artificial-memory approach is a continuity assumption: we assume that identical decisions applied to similar instances lead to similar values of the criteria. This assumption is fundamental to validating this knowledge-based system. For each criterion, the artificial memory contains a synthesis of the performances of different algorithms on sets of similar instances. These performances are acquired using simulation. When the artificial memory is employed, the characteristic values of a new instance are computed and examined by the artificial-memory system. The performances of the different algorithms for the considered criterion are estimated for the new instance, and an appropriate algorithm is chosen accordingly. In order to build this KBS and to estimate the performance of algorithms on a new instance, we use a mathematical approach. Some difficulties arose in the development of this KBS and had to be overcome; the corresponding proposed solutions are developed. The paper also presents a number of numerical experimental applications.
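The priority-rule algorithms whose measured performance such a KBS would store are easy to sketch. Below, a toy SPT (shortest processing time) dispatcher returns the makespan of one job-shop instance; a system in the paper's spirit would run many such rules on simulated instances and remember which rule wins per criterion. The instance data are invented.

```python
def spt_makespan(jobs):
    """Greedy dispatch with the SPT rule: repeatedly schedule, among every job's
    next operation, the one with the shortest duration.
    jobs: {job: [(machine, duration), ...]} in technological order."""
    next_op = {j: 0 for j in jobs}
    job_ready = {j: 0.0 for j in jobs}
    machine_ready = {}
    makespan = 0.0
    while any(next_op[j] < len(ops) for j, ops in jobs.items()):
        j = min((j for j, ops in jobs.items() if next_op[j] < len(ops)),
                key=lambda j: jobs[j][next_op[j]][1])
        m, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(m, 0.0))
        finish = start + dur
        job_ready[j] = machine_ready[m] = finish
        makespan = max(makespan, finish)
        next_op[j] += 1
    return makespan

jobs = {"J1": [("M1", 3), ("M2", 2)],
        "J2": [("M2", 2), ("M1", 4)],
        "J3": [("M1", 2), ("M2", 1)]}
print(spt_makespan(jobs))
```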


