101.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no studies have attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects: Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) more than 50% of duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them, with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of a duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since the identification of a considerable share of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on assisting developers with effort-consuming duplicate issues.
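To make the effort-classification idea concrete, here is a minimal sketch of such a model. The factor names (peer_awareness, text_similarity, n_comments) and the toy data are illustrative stand-ins, not the paper's actual feature set; any standard classifier could play this role.

```python
# Sketch: classify duplicate reports by the effort needed to identify them.
# Features and labels below are hypothetical illustrations only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, roc_auc_score

reports = pd.DataFrame({
    "peer_awareness":   [1, 0, 1, 0, 1, 0, 1, 1],          # reporter aware of peer duplicates
    "text_similarity":  [0.9, 0.2, 0.8, 0.3, 0.7, 0.1, 0.85, 0.6],
    "n_comments":       [0, 5, 1, 7, 0, 9, 2, 1],
    "slow_to_identify": [0, 1, 0, 1, 0, 1, 0, 0],          # label: took more than half a day
})

X = reports.drop(columns="slow_to_identify")
y = reports["slow_to_identify"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("precision:", precision_score(y_te, pred, zero_division=0))
print("recall:   ", recall_score(y_te, pred, zero_division=0))
print("ROC AUC:  ", roc_auc_score(y_te, proba))
```

In practice the same evaluation (precision, recall, ROC area) would be computed per project on real report data rather than on a toy frame like this.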
102.
103.
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Since this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are carried out, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
104.
Use Case modeling is a popular technique for documenting the functional requirements of software systems. Refactoring is the process of enhancing the structure of a software artifact without changing its intended behavior. Refactoring, which was first introduced for source code, has been extended to use case models. Antipatterns are low-quality solutions to commonly occurring design problems, and their presence in a use case model is likely to propagate defects to other software artifacts. Therefore, detection and refactoring of antipatterns in use case models is crucial for ensuring the overall quality of a software system. Model transformation can greatly ease several software development activities, including model refactoring. In this paper, a model transformation approach is proposed for improving the quality of use case models: transformations that detect antipattern instances in a given use case model and refactor them appropriately are defined and implemented. The practicability of the approach is demonstrated by applying it to a case study of a biodiversity database system. The results show that model transformations can efficiently improve the quality of use case models while saving time and effort.
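As a rough illustration of the detect-then-refactor transformation pipeline, here is a minimal sketch over a toy use case model. The "duplicated steps" antipattern and the extract-included-use-case refactoring used here are illustrative assumptions; the paper's own antipattern catalog and transformation rules may differ.

```python
# Sketch: detect a hypothetical "duplicated steps" antipattern in a use
# case model and refactor it by extracting a shared, included use case.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    steps: list
    includes: list = field(default_factory=list)

def detect_duplicated_steps(model, min_len=2):
    """Yield pairs of use cases that share at least min_len steps."""
    for i, a in enumerate(model):
        for b in model[i + 1:]:
            shared = [s for s in a.steps if s in b.steps]
            if len(shared) >= min_len:
                yield a, b, shared

def refactor_extract_include(model):
    """Transformation: pull shared steps out into an included use case."""
    for a, b, shared in list(detect_duplicated_steps(model)):
        common = UseCase(name=f"Shared_{a.name}_{b.name}", steps=shared)
        for uc in (a, b):
            uc.steps = [s for s in uc.steps if s not in shared]
            uc.includes.append(common.name)
        model.append(common)
    return model

# Toy model loosely inspired by the biodiversity case study.
model = [
    UseCase("RecordSpecimen", ["login", "validate taxon", "save record"]),
    UseCase("UpdateSpecimen", ["login", "validate taxon", "edit record"]),
]
for uc in refactor_extract_include(model):
    print(uc)
```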
105.
Applying model predictive control (MPC) to processes with complicated dynamics and/or rapid sampling can lead to poorly conditioned numerical solutions and a heavy computational load. Furthermore, there is always some mismatch between a model and the real process it describes. In this paper, to overcome these difficulties, we design a robust MPC based on the Laguerre orthonormal basis, which speeds up convergence while lowering the computation by introducing an extra parameter "a" into the MPC. In addition, a Kalman state estimator is included in the prediction model, so the MPC design depends on the Kalman estimator parameters as well as the estimation error, which helps the controller react faster to unmeasured disturbances. Tuning the parameters of the Kalman estimator together with those of the MPC is another contribution of this paper, and it guarantees robustness against model mismatch and measurement noise. The MPC parameters are tuned by minimizing the sensitivity function at low frequency, since the lower its low-frequency magnitude, the better the command tracking and disturbance rejection. The integral absolute error (IAE) and the peak of the sensitivity function are used as constraints in the optimization procedure to ensure the stability and robustness of the controlled process. The performance of the controller is examined by controlling the level of a tank and a paper machine process.
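The computational saving comes from parameterising the future control increments with a few Laguerre coefficients instead of a long move sequence. Below is a minimal sketch of the discrete Laguerre network, where Δu(k+i) = L(i)ᵀη; the pole value a=0.5, horizon, and coefficient vector η are illustrative (in a real design η would come from the QP solved at each step).

```python
# Sketch: discrete Laguerre basis for parameterising MPC control moves.
import numpy as np

def laguerre_network(a, N):
    """State-space generator of N discrete Laguerre functions with pole a.
    L(k+1) = A @ L(k), starting from L0."""
    beta = 1.0 - a**2
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = (-a) ** (i - j - 1) * beta
    L0 = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])
    return A, L0

A, L = laguerre_network(a=0.5, N=3)   # pole "a" and N are illustrative
horizon = 10
basis = np.zeros((horizon, 3))
for k in range(horizon):
    basis[k] = L
    L = A @ L                          # propagate L(k+1) = A L(k)

eta = np.array([0.8, -0.2, 0.05])      # coefficients the QP would return
delta_u = basis @ eta                  # reconstructed control increments
print(delta_u)
```

Only the three entries of η are decision variables here, regardless of the horizon length, which is the source of the reduced computational load the abstract describes.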
106.
Artificial neural network modeling has recently acquired enormous importance in the microwave community, especially for analyzing and synthesizing microstrip antennas (MSAs), owing to its generalization and adaptability. A trained neural model estimates a response very quickly, and nearly as accurately as its measured and/or simulated counterpart. It thus completely bypasses the repetitive use of conventional models, which need rediscretization for every minor change in the geometry, itself a time-consuming exercise. The purpose of this article is to review this emerging area comprehensively for both the analysis and the synthesis of MSAs. During the review, some untouched cases are also identified that need to be resolved for antenna designers, and unique, efficient neural-network-based solutions are suggested for them. The proposed neural approaches are validated by fabricating and characterizing prototypes. © 2015 Wiley Periodicals, Inc. Int J RF and Microwave CAE 25:747–757, 2015.
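As a small sketch of the analysis direction (geometry in, response out), the snippet below trains a toy network to map patch length and substrate permittivity to resonant frequency. The training data are generated from the simple cavity-model approximation f_r ≈ c / (2L√εr), a stand-in for the measured/simulated samples a real study would use; the network size and parameter ranges are assumptions.

```python
# Sketch: neural analysis model for a rectangular MSA, trained on
# synthetic data from a crude cavity-model formula (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
c = 3e8                                   # speed of light (m/s)
L = rng.uniform(10e-3, 40e-3, 500)        # patch length (m)
eps_r = rng.uniform(2.2, 10.2, 500)       # substrate permittivity
f_r = c / (2 * L * np.sqrt(eps_r))        # approximate resonant frequency (Hz)

X = np.column_stack([L * 1e3, eps_r])     # inputs: length in mm, eps_r
y = f_r / 1e9                             # target in GHz

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X[:400], y[:400])
print("held-out R^2:", net.score(X[400:], y[400:]))
```

The synthesis direction simply swaps inputs and outputs, training the network to return a geometry for a desired frequency.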
107.
108.
109.
This paper proposes a new feature extraction technique using wavelet-based sub-band parameters (WBSP) for the classification of unaspirated Hindi stop consonants. The extracted acoustic parameters show marked deviation from the values reported for English and other languages, as Hindi has distinguishing manner-based features. Since acoustic parameters are difficult to extract automatically for speech recognition, Mel Frequency Cepstral Coefficient (MFCC) based features are usually used. MFCCs are based on the short-time Fourier transform (STFT), which assumes the speech signal to be stationary over a short period; this assumption is specifically violated in the case of stop consonants. In WBSP, following the acoustic study, the features derived from CV syllables are given different weighting factors, with the middle segment receiving the maximum weight. The wavelet transform is applied to split the signal into 8 sub-bands of different bandwidths, and the variation of energy across the sub-bands is also taken into account. WBSP gives improved classification scores, and the number of filters it uses for feature extraction (8) is smaller than the number (24) used for MFCC. Its classification performance has been compared with four other techniques using a linear classifier. Further, principal components analysis (PCA) has also been applied to reduce dimensionality.
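A minimal sketch of the sub-band energy idea: a level-3 wavelet packet transform splits the signal into 8 sub-bands, and the per-band energies form the feature vector. The wavelet choice (db4), the synthetic test signal, and the plain normalisation are illustrative assumptions; the paper's segment-weighting scheme is not reproduced here.

```python
# Sketch: 8 sub-band energies via a level-3 wavelet packet decomposition.
import numpy as np
import pywt

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
# Toy stand-in for a CV syllable segment: two tones.
signal = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

wp = pywt.WaveletPacket(data=signal, wavelet="db4", maxlevel=3)
nodes = wp.get_level(3, order="freq")        # 8 sub-bands, low to high
energies = np.array([np.sum(node.data ** 2) for node in nodes])
features = energies / energies.sum()         # normalised sub-band energies
print(features)
```

An 8-dimensional vector like this, computed per weighted syllable segment, would then feed the linear classifier mentioned in the abstract.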
110.
Bug fixing accounts for a large share of software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects: Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs, and perform top-node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important such factors. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision between 52.1% and 78.6% and a recall in the range of 70.5% to 94.1% when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce maintenance costs due to re-opened bugs.
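Below is a minimal sketch of the decision-tree-plus-top-node-analysis setup, with one toy factor per dimension. The feature names and values are illustrative stand-ins for the paper's actual factors, and a real study would encode the comment text and bug status rather than numeric placeholders.

```python
# Sketch: predict re-opened bugs with a decision tree over factors from
# the four dimensions; the printed tree's root split is the "top node".
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

bugs = pd.DataFrame({
    "closed_weekday":   [4, 0, 5, 2, 4, 1, 3, 5],          # work habits
    "component_id":     [1, 2, 1, 3, 2, 1, 3, 2],          # bug report
    "fix_time_days":    [0.5, 10, 1, 30, 2, 0.2, 15, 7],   # bug fix
    "fixer_experience": [120, 3, 80, 1, 200, 5, 10, 40],   # team
    "reopened":         [0, 1, 0, 1, 0, 1, 1, 0],          # label
})

X = bugs.drop(columns="reopened")
y = bugs["reopened"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Top-node analysis: the factor chosen at the root is the strongest
# single indicator of re-opening in this (toy) dataset.
print(export_text(tree, feature_names=list(X.columns)))
```

The same analysis repeated per project is what lets the paper observe that the dominant factor differs between Eclipse/OpenOffice and Apache.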