Microsystem Technologies - Contact resistance is the main parameter used for assessing the high cycling reliability of RF microelectromechanical (RF-MEMS) switches. In this paper the use of a...
This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It examines the possible reasons for this remarkable success, proposing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given of the ups and downs that have characterized neural network research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given in order to sieve out aspects pertaining to the marketing or sociology of research; the remaining aspects seem to certify a genuine value of deep learning that calls for explanation. The two factors most often alleged to propel deep learning, namely computing hardware performance and neuroscience findings, are scrutinized and judged relevant but insufficient for a comprehensive explanation. We review various attempts to provide mathematical foundations able to justify the efficiency of deep learning, and we deem this the most promising road to follow, even if the current achievements are scattered and relevant only to very limited classes of deep neural models. The authors’ take is that most of what explains why deep learning works at all, and indeed very well across so many domains of application, is still to be understood; further research addressing the theoretical foundations of artificial learning is very much needed.
Inclusion between XML types is important but expensive, and much more expensive when unordered types are considered. We prove here that inclusion for XML types with interleaving and counting can be decided in polynomial time in the presence of two important restrictions: no element appears twice in the same content model, and Kleene star is applied only to disjunctions of single elements.
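The paper's algorithm is not reproduced here, but a deliberately simplified sketch can illustrate why the two restrictions make inclusion tractable: a single-occurrence content model with interleaving and counting can be abstracted as per-element occurrence intervals, under which inclusion reduces to interval containment. The element names and interval encoding below are illustrative assumptions, not the paper's formalism.

```python
# Toy abstraction (NOT the paper's algorithm): each content model maps an
# element name to a (min, max) occurrence interval; elements absent from a
# model are forbidden, i.e. (0, 0).

def includes(sub, sup):
    """Return True if every word allowed by `sub` is also allowed by `sup`."""
    for elem in set(sub) | set(sup):
        lo_a, hi_a = sub.get(elem, (0, 0))
        lo_b, hi_b = sup.get(elem, (0, 0))
        # Inclusion fails if sub permits a count that sup forbids.
        if lo_a < lo_b or hi_a > hi_b:
            return False
    return True

# a: exactly one <title>, up to three <author> elements
a = {"title": (1, 1), "author": (0, 3)}
# b: one <title>, unbounded <author>, optional <date>
b = {"title": (1, 1), "author": (0, 10**9), "date": (0, 1)}

print(includes(a, b))  # True: every document valid under a is valid under b
print(includes(b, a))  # False: b allows <date>, which a forbids
```

The polynomial cost of this simplified check (linear in the number of element names) hints at why the restricted fragment avoids the blow-up of general unordered inclusion.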
Web-site evaluation methodologies and validation engines take the view that all accessibility guidelines must be met to gain compliance. Problems exist in this regard, as contradictions within the rule set may arise, and the type of impairment or its severity is not isolated. The Barrier Walkthrough (BW) method goes some way towards addressing these issues by enabling barrier types derived from guidelines to be applied to different user categories, such as motor or visual impairment. However, the problem remains of a combinatorial explosion of possibilities when one has to consider users with multiple disabilities. In this paper, a simple set-theory operation is used to create a validation scheme for older users by aggregating the barrier types specific to motor-impaired and low-vision users, thereby creating a new “older users” category from the result of this set union. To evaluate the feasibility and validity of this aggregation approach, two BW experiments were conducted. The first experiment evaluated the aggregated results by focusing on quality attributes and showed that aggregation generates data whose quality is comparable to that of the original. However, this first experiment could not test for validity, as the older-users category was not included. To remedy this deficiency, another BW experiment was conducted with expert judges who evaluated a web page in the context of older users. In this second experiment, no significant difference was found between the aggregated and the manually evaluated (by experts) barrier scores, and the same barriers were identified by experts and by aggregation, even though there are differences in how severity scores are distributed. From these results, it is concluded that the aggregation of barriers is a viable alternative to expert evaluation when the target of that aggregation cannot be evaluated manually or it would not be feasible to do so. It is also argued that aggregation is a technique that can be used in combination with other evaluation methods, such as user testing or subjective assessments.
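The aggregation step described above can be sketched as a plain set union. The barrier names below are invented placeholders for illustration; the actual BW barrier catalogue is not reproduced here.

```python
# Hedged sketch of the set-union aggregation: the hypothetical "older users"
# barrier set is the union of the motor-impaired and low-vision barrier sets.
# Barrier names are illustrative, not the BW method's actual catalogue.

motor_impaired = {"small click targets", "no keyboard alternative", "short timeouts"}
low_vision = {"low colour contrast", "fixed font sizes", "short timeouts"}

# Set union: a barrier shared by both categories appears only once.
older_users = motor_impaired | low_vision

print(sorted(older_users))
```

Note that the union naturally de-duplicates barriers relevant to both source categories ("short timeouts" above), which is exactly what keeps the combined category from growing combinatorially.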
Cognitive therapy and experiential dynamic therapy show many similarities, but they diverge in their initial approach to the patient (aiming respectively at cognitions and at emotions) and in their assumptions about core pathogenetic processes. According to cognitive therapy, patients suffer because of a negative, unrealistic inner representation of self and world, whereas for experiential dynamic therapy problems arise from the conflictual experience and expression of healthy feelings and needs. A synthetic model of the core pathogenetic process, embracing both a conflict about healthy needs and emotions and a negative self-image, is outlined and discussed. In particular, the model's congruence with new knowledge emerging from infant and attachment research, emotion theory, and the cognitive neurosciences is illustrated. Assuming an identity of their basic pathogenetic theory, the two therapies can be thought of as two initially different approaches, one focusing more on cognitions, the other on emotions, but converging toward the change of a common pathogenetic core.
Solid-state nanopores have been gaining popularity in nano-biotechnology for single-molecule detection, in particular for label-free high-throughput DNA sequencing. To address the resolution/speed trade-off critical in this application, here we present a new two-channel current amplifier tailored for solid-state nanopore devices with integrated tunneling electrodes. The simultaneous detection of ion and tunneling currents provides enhanced molecule-tracking capability. We describe the system design starting from a detailed noise analysis and device modeling, highlighting the detrimental role of the conductive silicon substrate and of all the stray capacitive couplings between the electrodes. Given the high input capacitance (0.1–1 nF), the input voltage noise has been carefully minimized by choosing a pair of matched discrete low-noise JFETs as the input stage, thus achieving an equivalent input noise of 1.5 nV/√Hz (corresponding to a current noise floor of 15 fA/√Hz at 10 kHz). Low-noise performance (11 pA rms noise integrated over a 75 kHz bandwidth) is preserved at wide bandwidth (300 kHz) and high gain (100 MΩ) thanks to the adoption of an improved integrator/differentiator cascade topology. Furthermore, along with biasing networks and selectable low-pass filters, an AC-coupled channel providing additional gain has been introduced in order to “zoom in” on the current signature during pore-blockade events. Together with an experimental characterization of the system (and a comparison with the noise performance of other instruments), the platform is validated by demonstrating the detection of λ-DNA with 20 nm pores.
In this paper, we address the problem of creating an objective benchmark for evaluating SLAM approaches. We propose a framework for analyzing the results of a SLAM approach based on a metric for measuring the error of the corrected trajectory. This metric uses only relative relations between poses and does not rely on a global reference frame. This overcomes serious shortcomings of approaches using a global reference frame to compute the error. Our method furthermore allows us to compare SLAM approaches that use different estimation techniques or different sensor modalities, since all computations are made based on the corrected trajectory of the robot.

We provide sets of relative relations needed to compute our metric for an extensive set of datasets frequently used in the robotics community. The relations have been obtained by manually matching laser-range observations to avoid the errors caused by matching algorithms. Our benchmark framework allows the user to easily analyze and objectively compare different SLAM approaches.