Article Search
  Subscription full text   316 articles
  Free   5 articles
Industrial technology   321 articles
  2023   1 article
  2022   7 articles
  2021   5 articles
  2020   5 articles
  2019   6 articles
  2018   9 articles
  2017   8 articles
  2016   19 articles
  2015   6 articles
  2014   12 articles
  2013   27 articles
  2012   15 articles
  2011   34 articles
  2010   14 articles
  2009   8 articles
  2008   18 articles
  2007   11 articles
  2006   15 articles
  2005   6 articles
  2004   11 articles
  2003   14 articles
  2002   9 articles
  2001   6 articles
  2000   5 articles
  1999   6 articles
  1998   2 articles
  1997   7 articles
  1996   5 articles
  1995   5 articles
  1994   2 articles
  1993   3 articles
  1992   3 articles
  1991   2 articles
  1990   5 articles
  1989   1 article
  1988   1 article
  1987   1 article
  1985   1 article
  1980   1 article
  1977   1 article
  1974   1 article
  1973   1 article
  1967   1 article
  1959   1 article
321 results found (search time: 15 ms)
1.
2.
3.
Designing controllers with diagnostic capabilities is important because, in a feedback control system, the detection and isolation of failures is generally affected by the particular control law used; a unified treatment of the control and failure-diagnosis problems therefore has significant merit. Controllers capable of performing failure diagnosis have additional diagnostic outputs for detecting and isolating sensor and actuator faults; in its linear form, such a controller is usually called a four-parameter controller. Neural networks have proved to be a very powerful tool in the control systems area, where they have been used for the modelling and control of dynamical systems. In this paper, a neural network model of a controller with diagnostic capabilities (CDC) is presented for the first time. This nonlinear neural controller is trained to operate as a traditional controller while, at the same time, reproducing any failure occurring at either the actuator or the sensor. The cases of actuator and sensor failure are studied independently, and the validity of the results is verified by extensive simulations. A version of this paper, under the title "The Four-Parameter Controller: A Neural Network Implementation", was presented at the IEEE Mediterranean Symposium on New Directions in Control Theory and Applications, Chania, Crete, Greece, June 21–23, 1993.
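As a rough illustration of the idea (not the authors' implementation), the sketch below shows a small feedforward network with a shared hidden layer, one control output, and two diagnostic outputs for actuator and sensor faults; it is trained to imitate an existing control law while regressing injected fault signals. The layer sizes, data, and training loop are illustrative assumptions.

```python
# Illustrative sketch only: a feedforward "controller with diagnostic capabilities"
# mapping (reference, measured output) to a control signal plus two diagnostic
# signals (actuator-fault and sensor-fault estimates). Architecture, data, and
# hyperparameters are assumptions, not the paper's design.
import torch
import torch.nn as nn

class NeuralCDC(nn.Module):
    def __init__(self, n_inputs=2, n_hidden=16):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Tanh())
        self.control_head = nn.Linear(n_hidden, 1)   # u(t): control action
        self.diag_head = nn.Linear(n_hidden, 2)      # [actuator fault, sensor fault]

    def forward(self, x):
        h = self.shared(x)
        return self.control_head(h), self.diag_head(h)

# Hypothetical training data: inputs are (reference, plant output); targets are the
# control law being imitated and the fault signals injected during simulation.
x = torch.randn(256, 2)
u_target = torch.randn(256, 1)
fault_target = torch.randn(256, 2)

model = NeuralCDC()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    u_pred, fault_pred = model(x)
    loss = (nn.functional.mse_loss(u_pred, u_target)
            + nn.functional.mse_loss(fault_pred, fault_target))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The paper studies the actuator and sensor fault cases independently; the sketch lumps them into one two-dimensional diagnostic head purely for brevity.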
4.
Today, the publication of microdata poses a privacy threat: anonymous personal records can be re-identified using third-party data sources. Past research has tried to develop a concept of privacy guarantee that an anonymized data set should satisfy before publication, culminating in the notion of t-closeness. To satisfy t-closeness, the records in a data set need to be grouped into Equivalence Classes (ECs), such that each EC contains records of indistinguishable quasi-identifier values, and its local distribution of sensitive attribute (SA) values conforms to the global table distribution of SA values. However, despite this progress, previous research has not offered an anonymization algorithm tailored for t-closeness. In this paper, we cover this gap with SABRE, a SA Bucketization and REdistribution framework for t-closeness. SABRE first greedily partitions a table into buckets of similar SA values and then redistributes the tuples of each bucket into dynamically determined ECs. This approach is facilitated by a property of the Earth Mover’s Distance (EMD) that we employ as a measure of distribution closeness: if the tuples in an EC are picked proportionally to the sizes of the buckets they hail from, then the EMD of that EC is tightly upper-bounded using localized upper bounds derived for each bucket. We prove that if the t-closeness constraint is properly obeyed during partitioning, then it is obeyed by the derived ECs too. We develop two instantiations of SABRE and extend it to a streaming environment. Our extensive experimental evaluation demonstrates that SABRE achieves information quality superior to schemes that merely apply algorithms designed for other privacy models to t-closeness, and can be much faster as well.
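A minimal sketch of the bucketize-then-redistribute idea follows, under simplifying assumptions (a single categorical SA, a fixed target EC size, naive proportional rounding); it is not either of the published SABRE instantiations and omits the EMD bookkeeping.

```python
# Sketch, not the published algorithm: bucketize tuples by sensitive-attribute (SA)
# value, then form equivalence classes (ECs) by drawing from each bucket roughly in
# proportion to its remaining size, so each EC's SA distribution tracks the global
# one. The fixed EC size and naive rounding are simplifying assumptions.
from collections import defaultdict

def sabre_like_ecs(records, sa_key, ec_size=4):
    buckets = defaultdict(list)
    for r in records:
        buckets[r[sa_key]].append(r)

    remaining = len(records)
    ecs = []
    while remaining > 0:
        ec = []
        # visit larger buckets first; draw proportionally to each bucket's share
        for _, bucket in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
            if not bucket:
                continue
            take = max(1, round(ec_size * len(bucket) / remaining))
            for _ in range(min(take, len(bucket))):
                ec.append(bucket.pop())
        ecs.append(ec)
        remaining = sum(len(b) for b in buckets.values())
    return ecs

rows = [{"zip": f"100{i % 7}", "disease": d}
        for i, d in enumerate(["flu", "flu", "flu", "hiv", "cold", "cold", "flu", "hiv"])]
for ec in sabre_like_ecs(rows, "disease"):
    print([r["disease"] for r in ec])   # each EC mirrors the global flu/hiv/cold mix
```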
5.
Market-based principles can be used to manage the risk of distributed peer-to-peer transactions. This is demonstrated by Ptrim, a system that builds a transaction default market on top of a main transaction processing system, within which peers offer to underwrite the transaction risk for a slight increase in the transaction cost. The insurance cost, determined through market mechanisms, serves to identify untrustworthy peers and perilous transactions. The risk of the transactions is contained, and at the same time members of the peer-to-peer network capitalise on their market knowledge by profiting as transaction insurers. We evaluated the approach through trials with the deployed Ptrim prototype, as well as through composite experiments involving real online transaction data and real subjects participating in the transaction default market. We examine the efficacy of our approach from both a theoretical and an experimental perspective. Our findings suggest that the Ptrim market layer functions efficiently and is able to support the transaction processing system through the insurance offers it produces, thus acting as an effective means of reducing the risk of peer-to-peer transactions. In our conclusions we discuss how a system like Ptrim assimilates properties of real-world markets, as well as its potential exposure to events such as those witnessed in the recent global financial turmoil, and possible countermeasures.
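The market intuition can be illustrated with a toy sketch (our own simplification, not Ptrim's protocol): insurers quote premiums for underwriting a transaction, the cheapest quote is taken, and a high market-clearing premium is read as a signal of a perilous transaction. The Offer structure, the threshold, and the quotes below are illustrative assumptions.

```python
# Toy sketch of the market intuition, not Ptrim's protocol: insurers quote a premium
# for underwriting a transaction, the buyer takes the cheapest quote, and a high
# market-clearing premium marks the transaction (or counterparty) as perilous.
# The Offer structure, the 10% threshold, and the quotes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Offer:
    insurer: str
    premium: float   # quoted insurance cost as a fraction of the transaction value

def cheapest_offer(offers):
    return min(offers, key=lambda o: o.premium)

def is_perilous(offers, threshold=0.10):
    # if even the best available premium is high, the market deems the deal risky
    return cheapest_offer(offers).premium > threshold

offers = [Offer("peerA", 0.03), Offer("peerB", 0.02), Offer("peerC", 0.15)]
best = cheapest_offer(offers)
print(f"insure with {best.insurer} at {best.premium:.0%}; perilous={is_perilous(offers)}")
```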
6.
The field of computer vision has experienced rapid growth over the past 50 years. Many computer vision problems have been solved using theory and ideas from algebraic projective geometry. In this paper, we look at a previously unsolved problem in object recognition, namely recognizing an object when the correspondences between the object and image data are not known a priori. We formulate this problem as a mixed-integer non-linear optimization problem in terms of the unknown projection relating the object and image, as well as the unknown assignments of object points and lines to those in the image. The global optimum of this problem recovers the correspondence between the object's points and lines and those in the image. When certain assumptions are enforced on the allowable projections mapping the object into the image, we provide a proof that permits one to solve the optimization problem via a simple decomposition. We illustrate this decomposition approach on some example scenarios.
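As a loose illustration of how fixing one set of unknowns simplifies the other, the sketch below alternates between assigning object points to image points (a linear assignment problem for a fixed projection) and refitting an affine projection by least squares (for a fixed assignment). This alternation is our own stand-in, not the decomposition proved in the paper, and the affine camera model and random demo data are assumptions.

```python
# Our own stand-in for the decomposition, not the paper's method: alternate between
# (a) the optimal point assignment for a fixed projection (a linear assignment
# problem) and (b) a least-squares refit of an affine projection for the fixed
# assignment. The affine camera model and the random demo data are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fit_affine(obj_pts, img_pts):
    # solve img ≈ [obj, 1] @ P in the least-squares sense; P is 4x2
    X = np.hstack([obj_pts, np.ones((len(obj_pts), 1))])
    P, *_ = np.linalg.lstsq(X, img_pts, rcond=None)
    return P

def alternate(obj_pts, img_pts, iters=20):
    P = fit_affine(obj_pts, img_pts)   # crude start: assume the given ordering
    for _ in range(iters):
        proj = np.hstack([obj_pts, np.ones((len(obj_pts), 1))]) @ P
        cost = np.linalg.norm(proj[:, None, :] - img_pts[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)          # best matching for this P
        P = fit_affine(obj_pts[rows], img_pts[cols])      # refit P for this matching
    return P, cols

obj = np.random.rand(6, 3)                                # 3D object points
true_P = np.random.rand(4, 2)                             # hidden affine projection
img = np.hstack([obj, np.ones((6, 1))]) @ true_P          # their 2D image points
shuffled = img[np.random.permutation(6)]                  # correspondences unknown
P_hat, assignment = alternate(obj, shuffled)
print("estimated object-to-image assignment:", assignment)
```

Such an alternation can stall in a local optimum without a good starting guess, which is precisely why a formulation with a global-optimality guarantee, as in the paper, is valuable.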
7.
In this paper, we present an innovative framework for efficiently monitoring Wireless Sensor Networks (WSNs). Our framework, named KSpot, utilizes a novel top-k query processing algorithm we developed, in conjunction with the concept of in-network views, in order to minimize the cost of query execution. For ease of exposition, consider a set of sensors acquiring data from their environment at a given time instant. The generated information can conceptually be thought of as a horizontally fragmented base relation R. Furthermore, the results of a user-defined query Q, registered at some sink point, can conceptually be thought of as a view V. Maintaining consistency between V and R is very expensive in terms of communication and energy. Thus, KSpot focuses on a subset V′ (⊆ V) that unveils only the k highest-ranked answers at the sink, for some user-defined parameter k. To illustrate the efficiency of our framework, we have implemented a real system in nesC, which combines the traditional advantages of declarative acquisition frameworks, like TinyDB, with the ideas presented in this work. Extensive real-world testing and experimentation with traces from UC-Berkeley, the University of Washington and Intel Research Berkeley show that KSpot provides up to 66% energy savings compared to TinyDB, minimizes both the size and number of packets transmitted over the network (by up to 77%), and significantly prolongs the longevity of a WSN deployment.
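The top-k view idea can be sketched as follows (a simplification, not KSpot's in-network protocol): the sink keeps only the k highest-ranked readings plus a threshold equal to the k-th value, and a new reading matters only if it beats that threshold. The class below, the choice of k, and the sample readings are illustrative assumptions.

```python
# Sketch of the top-k view idea, not KSpot's in-network protocol: the sink keeps the
# k highest readings plus a threshold (the k-th value); a new reading is admitted
# only if it beats that threshold. k and the readings are illustrative assumptions.
import heapq

class TopKView:
    def __init__(self, k):
        self.k = k
        self.heap = []   # min-heap of (value, node_id); the k-th best sits on top

    def threshold(self):
        return self.heap[0][0] if len(self.heap) >= self.k else float("-inf")

    def report(self, node_id, value):
        # drop any stale entry for this node, then admit the reading only if it
        # beats the current threshold (or the view is not yet full)
        self.heap = [(v, n) for v, n in self.heap if n != node_id]
        heapq.heapify(self.heap)
        if len(self.heap) < self.k or value > self.threshold():
            heapq.heappush(self.heap, (value, node_id))
            if len(self.heap) > self.k:
                heapq.heappop(self.heap)

view = TopKView(k=3)
for node, reading in [("s1", 21.0), ("s2", 30.5), ("s3", 19.2), ("s4", 25.1), ("s1", 35.0)]:
    view.report(node, reading)          # in KSpot the filtering happens in-network
print(sorted(view.heap, reverse=True))  # the k highest-ranked answers at the sink
```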
8.
9.
10.

Saliency prediction models provide a probabilistic map of the relative likelihood that an image or video region will attract the attention of the human visual system. Over the past decade, many computational saliency prediction models have been proposed for 2D images and videos. Considering that the human visual system has evolved in a natural 3D environment, it is only natural to want to design visual attention models for 3D content. Existing monocular saliency models are not able to accurately predict the attentive regions when applied to 3D image/video content, as they do not incorporate depth information. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes such as brightness, color, texture, orientation, motion, and depth, and high-level cues such as face, person, vehicle, animal, text, and horizon. Our model starts with a rough segmentation and quantifies several intuitive observations, such as the effects of visual discomfort level, depth abruptness, motion acceleration, elements of surprise, the size and compactness of the salient regions, and the tendency to emphasize only a few salient objects in a scene. A new fovea-based model of spatial distance between image regions is adopted for local and global feature calculations. To efficiently fuse the conspicuity maps generated by our method into a single saliency map that is highly correlated with the eye-fixation data, a random-forest-based algorithm is utilized. The performance of the proposed saliency model is evaluated against the results of an eye-tracking experiment involving 24 subjects and an in-house database of 61 captured stereoscopic videos. Our stereo video database as well as the eye-tracking data are publicly available along with this paper. Experimental results show that the proposed saliency prediction method achieves competitive performance compared to state-of-the-art approaches.
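A sketch of the fusion step only, under simplifying assumptions: each pixel's vector of conspicuity values acts as a feature vector, a random forest regressor is fit against an eye-fixation density map, and its predictions form the single fused saliency map. The feature count, image size, and random data below are placeholders, not the paper's pipeline.

```python
# Sketch of the fusion step only, under simplifying assumptions: per-pixel conspicuity
# values act as features, a random forest regressor is fit against an eye-fixation
# density map, and its predictions form the single fused saliency map. The feature
# count, image size, and random data are placeholders, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

h, w, n_maps = 36, 64, 6
conspicuity = np.random.rand(h, w, n_maps)   # stand-ins for the per-cue conspicuity maps
fixation_density = np.random.rand(h, w)      # stand-in for eye-tracking ground truth

X = conspicuity.reshape(-1, n_maps)          # one training sample per pixel
y = fixation_density.ravel()

forest = RandomForestRegressor(n_estimators=50, max_depth=8, random_state=0)
forest.fit(X, y)

fused = forest.predict(X).reshape(h, w)                             # fused saliency map
fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)  # normalise to [0, 1]
print(fused.shape, float(fused.max()))
```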
