51.
As environmental requirements tighten, small district-heating boilers in cities are being phased out and their loads connected to the municipal trunk network, so heating networks keep expanding. At the same time, heat is increasingly produced from diverse sources such as geothermal energy, solar energy, industrial waste heat, and electric heating, making district heating systems considerably more complex. Traditional manual calculation or idealized mechanistic modeling can hardly optimize the structural design and operation of such networks scientifically; effective optimization requires computer simulation modeling combined with identification of the network's hydraulic resistance characteristics from actual operating data. This paper studies a hydraulic balance analysis model for district heating networks that fuses data-driven methods with mechanistic models. Using SCADA operating data from the network, several machine learning algorithms are applied to learn and optimize the parameters of the prior-knowledge model, ultimately yielding a hydraulic analysis model that matches the real network. The approach can serve as a technical reference for heating utilities in network retrofitting and economical operation.
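For illustration only, a minimal sketch of the kind of parameter learning described above: fitting a per-pipe hydraulic resistance coefficient from SCADA flow and pressure-drop records, assuming the common quadratic resistance law Δp ≈ S·q·|q|. The data values and the single-parameter least-squares formulation are hypothetical; the paper's actual prior-knowledge model and learning algorithms are not reproduced here.

```python
import numpy as np

# Hypothetical SCADA records for one pipe segment: measured flow q (m^3/h)
# and pressure drop dp (kPa) at several operating points.
q = np.array([120.0, 150.0, 180.0, 210.0, 240.0])
dp = np.array([14.5, 22.8, 32.5, 44.7, 58.1])

# Quadratic resistance law dp ≈ S * q * |q|; estimate S by least squares.
phi = q * np.abs(q)                      # regressor
S = float(phi @ dp / (phi @ phi))        # closed-form one-parameter LS estimate

# The identified S replaces the design-manual value in the hydraulic model,
# so simulated pressure drops match the real network more closely.
dp_pred = S * q * np.abs(q)
rmse = np.sqrt(np.mean((dp - dp_pred) ** 2))
print(f"identified resistance S = {S:.3e}, RMSE = {rmse:.2f} kPa")
```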
52.
Effective upkeep of aging infrastructure systems with limited funding and resources calls for efficient bridge management systems. Although data-driven models have been extensively studied in the last decade for extracting knowledge from past experience to guide future maintenance decision making, their performance and usefulness have been limited by the level of detail and accuracy of currently available bridge condition databases. This paper leverages an untapped resource for bridge condition data and proposes a new method to extract condition information from it at a high level of detail. To that end, a natural language processing approach was developed to formalize structural condition knowledge by formulating a sequence labeling task and modeling inspection narratives as a combination of words representing defects, their severity and location, while accounting for the context of each word. The proposed framework employs a deep-learning-based approach and incorporates context-aware components including a bi-directional Long Short Term Memory (LSTM) neural network architecture and a Conditional Random Field (CRF) classifier to account for the context of words when assigning labels. A dependency-based word embedding model was also used to represent the raw text while incorporating both semantic and contextual information. The sequence labeling model was trained using bridge inspection reports collected from the Virginia Department of Transportation bridge inspection database and achieved an F1 score of 94.12% during testing. The proposed model also demonstrated improvements compared with baseline sequence labeling models, and was further used to demonstrate the capability of detecting condition changes with respect to previous inspection records. Results of this study show that the proposed method can be used to extract and create a condition information database that can further assist in developing data-driven bridge management and condition forecasting models, as well as automated bridge inspection systems.
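A minimal PyTorch sketch of the sequence-labeling backbone described above: a bi-directional LSTM that produces per-token scores over defect/severity/location tags. The vocabulary size, tag set, layer dimensions, and the greedy argmax decoder are illustrative placeholders; the paper additionally stacks a CRF layer and uses dependency-based word embeddings, both of which are omitted here.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Token-level tagger: embeddings -> BiLSTM -> per-token tag scores."""
    def __init__(self, vocab_size, tagset_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        h, _ = self.lstm(x)                    # (batch, seq_len, 2*hidden)
        return self.fc(h)                      # unnormalized per-token tag scores

# Toy usage with hypothetical sizes; a CRF layer would normally replace the
# greedy argmax below so that tag transitions are decoded jointly.
model = BiLSTMTagger(vocab_size=5000, tagset_size=9)
tokens = torch.randint(0, 5000, (1, 12))       # one 12-token inspection sentence
scores = model(tokens)
predicted_tags = scores.argmax(dim=-1)         # (1, 12) tag indices
print(predicted_tags)
```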
53.
With the rapid development and implementation of ICT, academics and industrial practitioners are widely applying robotic process automation (RPA) to enhance their business processes and operational efficiency. This paper addresses the value created by utilizing RPA under a cloud-based Cyber-Physical Systems (CPS) framework in a Robotic Mobile Fulfillment System (RMFS). Building on a TO-BE analysis of the RPA and cloud-based CPS framework, a data-driven approach is proposed for zone clustering and storage location assignment classification in the RMFS, with the aim of improving its operational efficiency. A modified A* algorithm is adopted to calculate the total traveling cost of each movable rack in the case company's layout. Nine common clustering algorithms are applied to the RMFS's zone clustering, and the resulting clusterings are treated as nine scenarios for data-driven order classification to solve the storage location assignment problem. Six common classification algorithms are then compared in detail on thousands of orders. The results reveal that K-means, Gaussian Mixture Models, and the Bayesian Gaussian Mixture Model work well with all six supervised classification algorithms, yielding an average accuracy of 95%, so that higher customer expectations can be met in the customer-driven e-commerce economy.
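A hedged scikit-learn sketch of the two-stage data-driven pipeline the abstract outlines: cluster rack-level travel-cost features into zones, then train a supervised classifier that assigns new racks or orders to a zone. The feature construction and sizes are invented for illustration, and K-means plus a random forest stand in for the nine clustering and six classification algorithms evaluated in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per movable rack: A*-derived travel cost to the
# picking stations plus historical order-frequency statistics.
X = rng.random((1000, 3))

# Stage 1: zone clustering of racks (here K-means with 4 zones).
zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Stage 2: supervised classification for storage location assignment,
# using the cluster labels as training targets for new racks/orders.
X_tr, X_te, y_tr, y_te = train_test_split(X, zones, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"zone-assignment accuracy: {clf.score(X_te, y_te):.2%}")
```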
54.
Identifying, and eventually eliminating, throughput bottlenecks is a key means to increase throughput and productivity in production systems. In the real world, however, eliminating throughput bottlenecks is challenging because of complex factory dynamics, with several hundred machines operating at any given time. Academic researchers have tried to develop tools to help identify and eliminate throughput bottlenecks. Historically, research efforts focused on analytical and discrete event simulation modelling approaches to identify throughput bottlenecks in production systems. With the rise of industrial digitalisation and artificial intelligence (AI), however, researchers have explored different ways in which AI might be used to eliminate throughput bottlenecks based on the vast amounts of digital shop floor data. By conducting a systematic literature review, this paper presents state-of-the-art research on the use of AI for throughput bottleneck analysis. To make the academic AI solutions more accessible to practitioners, the research efforts are classified into four categories inspired by real-world throughput bottleneck management practice: (1) identify, (2) diagnose, (3) predict and (4) prescribe. The identify and diagnose categories focus on analysing historical throughput bottlenecks, whereas predict and prescribe focus on analysing future ones. The paper also provides future research topics and practical recommendations that may help to further push the boundaries of the theoretical and practical use of AI in throughput bottleneck analysis.
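As a concrete example of the "identify" category, the sketch below applies one simple data-driven heuristic: flag the machine with the longest average uninterrupted active period as the momentary bottleneck. This is a well-known shop-floor-data method offered purely for illustration, not necessarily the approach of any paper covered by the review; the event log is synthetic.

```python
# Identify the throughput bottleneck as the machine with the longest
# average uninterrupted active (busy/blocked) period in the event log.
active_periods = {
    # hypothetical durations (minutes) of consecutive active periods
    "M1": [12, 15, 9, 14],
    "M2": [35, 41, 38],      # long active stretches -> likely bottleneck
    "M3": [8, 6, 10, 7, 9],
}

avg_active = {m: sum(p) / len(p) for m, p in active_periods.items()}
bottleneck = max(avg_active, key=avg_active.get)
print(f"average active periods: {avg_active}")
print(f"identified bottleneck: {bottleneck}")
```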
55.
Accurate cutting force prediction serves as an important reference for optimizing numerically controlled machining processes. Traditional cutting force models built on theoretical cutting mechanics struggle to predict actual machining processes accurately because of their limited modeling flexibility. Machine-learning-based approaches, on the other hand, demand large amounts of diversified labeled samples to achieve comparable prediction results, and collecting these samples can be tedious and costly because the cutter-workpiece engagement (CWE) keeps changing during actual machining. This paper presents a cutting force prediction model, named ForceNet, which incorporates elementary physical priors into structured neural networks to predict cutting forces for end-milling processes with complex CWE. The main idea is to use grayscale images to represent the CWE geometry, providing a universal input to ForceNet. Unlike traditional deep neural networks that act as unexplainable black boxes, the core of ForceNet is constructed as the vector summation of directional primitive cutting force elements, which are approximated using elementary neural networks. Preliminary results indicate that ForceNet outperformed existing methods, offering greater prediction accuracy in unseen cutting situations while needing less training data thanks to its inherent neuro-physical structure.
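A speculative PyTorch sketch of the neuro-physical idea outlined above: a small CNN encodes the grayscale CWE image into directional primitive force elements, which are then summed per axis to give predicted force components. The layer sizes, number of primitives, and summation layout are assumptions for illustration; the actual ForceNet architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class ForceNetSketch(nn.Module):
    """CWE grayscale image -> primitive force elements -> summed F_x, F_y, F_z."""
    def __init__(self, n_primitives=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One small head per axis produces primitive force elements whose sum
        # mimics the vector summation of directional cutting force elements.
        self.heads = nn.ModuleList(nn.Linear(16, n_primitives) for _ in range(3))

    def forward(self, cwe_img):                # (batch, 1, H, W) grayscale CWE
        z = self.encoder(cwe_img)
        # Sum of directional primitive elements per axis -> (batch, 3)
        return torch.stack([h(z).sum(dim=1) for h in self.heads], dim=1)

model = ForceNetSketch()
force = model(torch.rand(4, 1, 64, 64))        # four hypothetical CWE images
print(force.shape)                             # torch.Size([4, 3])
```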
56.
57.
Contraction theory is an analytical tool to study differential dynamics of a non-autonomous (i.e., time-varying) nonlinear system under a contraction metric defined with a uniformly positive definite matrix, the existence of which results in a necessary and sufficient characterization of incremental exponential stability of multiple solution trajectories with respect to each other. By using a squared differential length as a Lyapunov-like function, its nonlinear stability analysis boils down to finding a suitable contraction metric that satisfies a stability condition expressed as a linear matrix inequality, indicating that many parallels can be drawn between well-known linear systems theory and contraction theory for nonlinear systems. Furthermore, contraction theory takes advantage of a superior robustness property of exponential stability used in conjunction with the comparison lemma. This yields much-needed safety and stability guarantees for neural network-based control and estimation schemes, without resorting to a more involved method of using uniform asymptotic stability for input-to-state stability. Such distinctive features permit systematic construction of a contraction metric via convex optimization, thereby obtaining an explicit exponential bound on the distance between a time-varying target trajectory and solution trajectories perturbed externally due to disturbances and learning errors. The objective of this paper is therefore to present a tutorial overview of contraction theory and its advantages in nonlinear stability analysis of deterministic and stochastic systems, with an emphasis on deriving formal robustness and stability guarantees for various learning-based and data-driven automatic control methods. In particular, we provide a detailed review of techniques for finding contraction metrics and associated control and estimation laws using deep neural networks.
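For reference, the central condition the abstract alludes to can be stated compactly in the standard contraction-theory form (generic notation, not taken from the paper): for \(\dot{x} = f(x,t)\), suppose there exists a uniformly positive definite metric \(M(x,t)\) and a rate \(\alpha > 0\) such that
\[
\dot{M} + M\,\frac{\partial f}{\partial x} + \frac{\partial f}{\partial x}^{\top} M \;\preceq\; -2\alpha M .
\]
Then the squared differential length \(V = \delta x^{\top} M\,\delta x\) satisfies \(\dot{V} \le -2\alpha V\), so any two solution trajectories converge to each other exponentially with rate \(\alpha\), which is the incremental exponential stability referred to above.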
58.
赵彦钧, 王国胤, 胡峰. 《计算机科学》(Computer Science), 2008, 35(11): 174-177
The variable precision rough set (VPRS) model is an extension of classical rough set theory. By introducing a noise threshold β, it improves the theory's tolerance of noisy data. In practice, however, the threshold β is usually set manually, which requires a certain amount of prior knowledge. This paper proposes a method for the autonomous, data-driven acquisition of the noise threshold β. Simulation results show that the threshold obtained in this way improves the knowledge-acquisition performance of variable precision rough set theory.
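A small illustrative sketch (not the paper's acquisition method) of how the threshold β enters variable precision rough sets: an equivalence class is placed in the β-lower approximation of a concept X when its relative misclassification error does not exceed β. The example partition and concept are made up.

```python
# Variable precision rough sets: beta-lower approximation of a concept X.
# An equivalence class is accepted when its misclassification rate <= beta.

def beta_lower_approximation(classes, X, beta):
    """classes: list of equivalence classes (sets); X: target concept (set)."""
    lower = set()
    for cls in classes:
        error = 1.0 - len(cls & X) / len(cls)   # relative classification error
        if error <= beta:
            lower |= cls
    return lower

# Made-up universe partitioned into three equivalence classes.
classes = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9}]
X = {1, 2, 3, 5, 7}

print(beta_lower_approximation(classes, X, beta=0.0))   # classical lower approx.
print(beta_lower_approximation(classes, X, beta=0.3))   # more tolerant of noise
```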
59.
A constrained optimal iterative learning control (ILC) scheme is proposed for a class of nonlinear and non-affine systems, without requiring any explicit model information other than input and output data. To address the nonlinearities, an iterative dynamic linearization method that does not omit any information about the original plant is introduced along the iteration direction. The derived linearized data model is equivalent to the original nonlinear system and reflects the real-time dynamics of the controlled plant rather than a static approximation. By recasting all constraints on the system output, the control input, and the rate of change of the input signals as a linear matrix inequality, a novel constrained data-driven optimal ILC is developed by minimizing a predesigned objective function. The optimal learning gain is not fixed but is updated iteratively from the input and output measurements, which makes the scheme more flexible with respect to modifications and expansions of the controlled plant. The results are further extended to point-to-point control tasks in which exact tracking is required only at certain points, and a constrained data-driven optimal point-to-point ILC is proposed that utilizes the error measurements at the specified points only.
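To make the linearized data model concrete, a commonly used iteration-axis form from the data-driven ILC literature (generic notation, not copied from the paper) represents the plant as
\[
y_k(t+1) - y_{k-1}(t+1) = \phi_k(t)\,\bigl[u_k(t) - u_{k-1}(t)\bigr],
\]
where \(\phi_k(t)\) is an iteration- and time-varying pseudo partial derivative estimated from input-output data. The optimal input at each iteration then minimizes a criterion of the form
\[
J\bigl(u_k(t)\bigr) = \bigl|y_d(t+1) - y_k(t+1)\bigr|^{2} + \lambda\,\bigl|u_k(t) - u_{k-1}(t)\bigr|^{2},
\]
with the constraints on the output, the input, and the input change rate imposed through the linear matrix inequality mentioned above.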
60.
This paper discusses some limitations of the weighted recursive PCA algorithm (WARP) proposed by Portnoy, Melendez, Pinzon, and Sanjuan (2016), which is used for fault detection (FD) and is argued to reduce false alarms. These comments are motivated by the lack of a clear criterion in the WARP algorithm for distinguishing between process deviations and fault scenarios; as a consequence, the applicability of the algorithm is questionable from the FD point of view. Moreover, we address the absence of a formal justification of why the computational complexity of the WARP algorithm is lower than that of the methods discussed in the paper.