Full text (fee-based)   5187 articles
Free   377 articles
Free (domestic)   34 articles
Industrial Technology   5598 articles
Results by year: 2024 (15), 2023 (90), 2022 (184), 2021 (324), 2020 (274), 2019 (328), 2018 (389), 2017 (318), 2016 (360), 2015 (212), 2014 (313), 2013 (554), 2012 (322), 2011 (339), 2010 (252), 2009 (214), 2008 (148), 2007 (92), 2006 (109), 2005 (65), 2004 (67), 2003 (51), 2002 (57), 2001 (28), 2000 (29), 1999 (17), 1998 (64), 1997 (39), 1996 (43), 1995 (31), 1994 (27), 1993 (26), 1992 (25), 1991 (19), 1990 (14), 1989 (20), 1988 (13), 1987 (13), 1986 (6), 1985 (12), 1984 (7), 1983 (8), 1982 (6), 1981 (10), 1980 (11), 1979 (7), 1978 (6), 1977 (8), 1976 (13), 1971 (4)
Sort by:   A total of 5598 results found, search time 15 ms
81.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area has focused on proposing automated approaches to accurately identify duplicate reports. However, no studies have attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects, i.e., Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) more than 50 % of the duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developers' awareness of the duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable share of duplicate reports (over 50 %) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on helping developers identify effort-consuming duplicate issues.
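To make the "effort to identify a duplicate" notion concrete, the sketch below shows one way to measure it from an issue-tracker export: compute the delay between a report's creation and the moment it was marked as a duplicate, then check how many fall within half a day. The file name and column names (created_at, duplicate_marked_at, num_comments, num_participants) are hypothetical placeholders, not the paper's actual data schema.

```python
import pandas as pd

# Hypothetical export of duplicate issue reports; column names are assumptions.
dups = pd.read_csv("duplicate_reports.csv",
                   parse_dates=["created_at", "duplicate_marked_at"])

# Effort proxy: elapsed time until the report was marked as a duplicate.
dups["identification_hours"] = (
    (dups["duplicate_marked_at"] - dups["created_at"]).dt.total_seconds() / 3600
)

# Share of duplicates identified within half a day (12 hours).
within_half_day = (dups["identification_hours"] <= 12).mean()
print(f"Identified within half a day: {within_half_day:.1%}")

# How much discussion and how many people were involved before identification.
print(dups[["num_comments", "num_participants"]].describe())
```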
82.
83.
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Because this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are performed, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
84.
Applying model predictive control (MPC) to processes with complicated dynamics and/or rapid sampling leads to poorly conditioned numerical solutions and a heavy computational load. Furthermore, there is always a mismatch between a model and the real process it describes. Therefore, to overcome these difficulties, this paper designs a robust MPC using the Laguerre orthonormal basis, which speeds up convergence while lowering the computational load at the cost of one extra tuning parameter "a" in the MPC. In addition, the Kalman state estimator is included in the prediction model, so the MPC design depends on the Kalman estimator parameters as well as the estimation error, which helps the controller react faster to unmeasured disturbances. Tuning the parameters of both the Kalman estimator and the MPC is another contribution of this paper and guarantees robustness of the system against model mismatch and measurement noise. The MPC parameters are tuned by minimizing the sensitivity function at low frequency, since a lower sensitivity magnitude at low frequency yields better command tracking and disturbance rejection. The integral absolute error (IAE) and the peak of the sensitivity function are used as constraints in the optimization procedure to ensure the stability and robustness of the controlled process. The performance of the controller is examined by controlling the level of a tank process and a paper machine process.
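For context, the discrete-time Laguerre parameterization that this kind of MPC design typically builds on (a textbook form, not necessarily the exact formulation used in the paper) expresses the future control increments with a small number of coefficients, with the pole \(a\) acting as the extra tuning parameter:
\[
\Gamma_k(z) = \frac{\sqrt{1-a^2}}{1 - a z^{-1}}\left(\frac{z^{-1} - a}{1 - a z^{-1}}\right)^{k-1},
\qquad
\Delta u(k_i + m) = \sum_{k=1}^{N} c_k\, l_k(m),
\]
where \(l_k(m)\) is the inverse z-transform of \(\Gamma_k(z)\), \(0 \le a < 1\), and the \(c_k\) are the \(N\) coefficients to be optimized. Setting \(a = 0\) recovers the conventional MPC parameterization, while \(a > 0\) lets a small \(N\) cover a long control horizon, which is where the computational saving comes from.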
85.
A bipartite state is classical with respect to party A if and only if party A can perform nondisruptive local state identification (NDLID) by a projective measurement. Motivated by this we introduce a class of quantum correlation measures for an arbitrary bipartite state. The measures utilize the general Schatten p-norm to quantify the amount of departure from the necessary and sufficient condition of classicality of correlations provided by the concept of NDLID. We show that for the case of Hilbert–Schmidt norm, i.e., \(p=2\), a closed formula is available for an arbitrary bipartite state. The reliability of the proposed measures is checked from the information-theoretic perspective. Also, the monotonicity behavior of these measures under LOCC is exemplified. The results reveal that for the general pure bipartite states these measures have an upper bound which is an entanglement monotone in its own right. This enables us to introduce a new measure of entanglement, for a general bipartite state, by convex roof construction. Some examples and comparison with other quantum correlation measures are also provided.
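For reference, the Schatten p-norm underlying these measures (the standard definition, stated here only to fix notation) is
\[
\|X\|_p = \big(\operatorname{Tr}\,|X|^p\big)^{1/p}, \qquad |X| = \sqrt{X^{\dagger}X},
\]
so that \(p=2\) gives the Hilbert–Schmidt norm \(\|X\|_2 = \sqrt{\operatorname{Tr}(X^{\dagger}X)}\), whose quadratic form is what makes a closed formula tractable for an arbitrary bipartite state.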
86.
High-efficiency video coding (HEVC) is the latest standardization effort of the International Organization for Standardization and the International Telecommunication Union. The new standard adopts an exhaustive decision algorithm based on a recursive quad-tree structure of coding units, prediction units, and transform units. As a result, it achieves substantial coding efficiency, but at the cost of significant computational complexity. To speed up the encoding process, this paper adopts efficient algorithms based on fast mode decision and optimized motion estimation. The aim is to reduce the complexity of the motion estimation algorithm by modifying its search pattern, which is then combined with a new fast mode decision algorithm to further improve coding efficiency. Experimental results show a significant speedup in encoding time and a bit-rate saving with tolerable quality degradation: the proposed algorithm reduces encoding time by up to 75 %, accompanied by an average PSNR loss of 0.12 dB and a 0.5 % decrease in bit-rate.
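To illustrate how a motion-estimation search pattern trades search points for accuracy, here is a minimal, generic small-diamond-search sketch over SAD costs; it is illustrative only and is not the specific search pattern proposed in the paper.

```python
import numpy as np

def sad(block, ref, x, y):
    """Sum of absolute differences between a block and the reference at (x, y)."""
    h, w = block.shape
    return np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum()

def small_diamond_search(block, ref, x0, y0, max_iters=16):
    """Generic small-diamond search: refine the motion vector until the center wins."""
    h, w = block.shape
    best = (x0, y0, sad(block, ref, x0, y0))
    for _ in range(max_iters):
        x, y, cost = best
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # small diamond pattern
            nx, ny = x + dx, y + dy
            if 0 <= nx <= ref.shape[1] - w and 0 <= ny <= ref.shape[0] - h:
                c = sad(block, ref, nx, ny)
                if c < cost:
                    best, improved = (nx, ny, c), True
                    cost = c
        if not improved:  # the center is already the local minimum: stop early
            break
    return best  # (x, y, cost) of the best match found

# Example: match a 16x16 block against a reference frame, starting near a predicted MV.
rng = np.random.default_rng(0)
ref_frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur_block = ref_frame[20:36, 22:38].copy()   # true displacement is (x=22, y=20)
print(small_diamond_search(cur_block, ref_frame, 21, 20))
```

Full search evaluates every candidate in a window, while a pattern like this touches only a handful of points per step, which is the essential source of the encoding-time reduction that fast motion-estimation schemes exploit.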
87.
88.
89.
We propose novel techniques to find the optimal location, size, and power factor of distributed generation (DG) that achieve the maximum loss reduction in distribution networks. The optimal DG location and size are determined simultaneously using the energy loss curves technique for a pre-selected power factor that gives the best DG operation. Based on the network's total load demand, four DG sizes are selected; they are used to form energy loss curves for each bus and then to determine the optimal DG options. The study shows that, when energy loss minimization is defined as the objective function, the time-varying load demand significantly affects the sizing of DG resources in distribution networks, whereas taking power loss as the objective function leads to inconsistent interpretation of loss reduction and other calculations. The devised technique was tested on two test distribution systems of varying size and complexity and validated by comparison with the exhaustive iterative method (EIM) and recently published results. The results show that the proposed technique can provide an optimal solution with less computation.
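For clarity, the energy-loss objective referred to here is the network loss accumulated over the time-varying load profile rather than the loss at a single load snapshot; in generic notation (the symbols below are chosen for illustration and are not taken from the paper):
\[
E_{\text{loss}} = \sum_{t=1}^{T} \sum_{b \in \mathcal{B}} R_b \, \lvert I_b(t) \rvert^2 \, \Delta t,
\]
where \(\mathcal{B}\) is the set of branches, \(R_b\) the branch resistance, \(I_b(t)\) the branch current under the load demand and DG injection in period \(t\), and \(\Delta t\) the period length. Minimizing \(E_{\text{loss}}\) over candidate DG buses, sizes, and power factors is what the energy-loss curves summarize, and it is why the result can differ from minimizing the instantaneous power loss.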
90.
Bug fixing accounts for a large amount of the software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects, namely Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs, and we perform top node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors related to whether or not a bug will be re-opened. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision of 52.1–78.6 % and a recall of 70.5–94.1 % when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce maintenance costs due to re-opened bugs.
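As a rough illustration of this kind of prediction setup (not the authors' exact pipeline; the file name and feature columns below are hypothetical stand-ins for the four dimensions), a decision tree can be trained on per-bug factors and its top splits inspected, which mirrors the spirit of top node analysis.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import precision_score, recall_score

# Hypothetical table of closed bug reports; one row per bug, label = was it re-opened?
bugs = pd.read_csv("bug_reports.csv")
features = ["close_weekday", "component_id", "time_to_fix_days",
            "fixer_experience", "comment_text_score", "last_status_code"]
X, y = bugs[features], bugs["reopened"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

tree = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=42)
tree.fit(X_train, y_train)

pred = tree.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))

# Rough analogue of top node analysis: which factors the tree splits on first.
print(export_text(tree, feature_names=features, max_depth=2))
```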