991.
Scalability is a major and urgent problem in the field of evolvable hardware (EHW). For the design of large circuits, an EHW method with a decomposition strategy can successfully find a solution, but at the cost of high complexity and long evolution times. This study aims to optimize the decomposition of large-scale circuits, thereby offering the EHW method a route to scalability and improved efficiency. This paper proposes a projection-based decomposition (PD), combined with Cartesian genetic programming (CGP) into an EHW system named PD-CGP, to design relatively large circuits. PD gradually decomposes a Boolean function by adaptively projecting it onto properties of its variables, which minimizes the complexity and number of sub-logic blocks. CGP employs an evolutionary strategy to search for simple and compact solutions for these sub-blocks. Benchmark circuits from the MCNC library, \(n\)-parity circuits, and arithmetic circuits are used in the experiments to demonstrate the scalability and efficiency of PD-CGP. The results show that PD-CGP is superior to 3SD-ES in evolving large circuits in terms of complexity reduction, and that it outperforms GDD+GA in evolving relatively large arithmetic circuits. Additionally, PD-CGP successfully evolves larger \(n\)-even-parity and arithmetic circuits that have not been achieved by other approaches.
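A minimal sketch of the CGP half of PD-CGP, assuming a feed-forward netlist encoded as (gate, in1, in2) triples scored against a target truth table. The gate set, the single-output convention, and the selection loop below are illustrative assumptions, not the paper's exact configuration:

```python
import random

GATES = [lambda a, b: a & b,      # AND
         lambda a, b: a | b,      # OR
         lambda a, b: a ^ b,      # XOR
         lambda a, b: ~a & 1]     # NOT (first input only)

def random_genotype(n_inputs, n_nodes):
    geno = []
    for i in range(n_nodes):
        src = n_inputs + i        # a node may read any earlier signal
        geno.append((random.randrange(len(GATES)),
                     random.randrange(src), random.randrange(src)))
    return geno

def evaluate(geno, inputs):
    vals = list(inputs)
    for g, a, b in geno:
        vals.append(GATES[g](vals[a], vals[b]))
    return vals[-1]               # last node taken as the circuit output

def fitness(geno, truth_table):
    # count truth-table rows the candidate circuit reproduces
    return sum(evaluate(geno, row) == out for row, out in truth_table)

# (1+4)-style selection over a 2-input XOR sub-block (crude random
# resampling stands in for proper CGP point mutation):
table = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
parent = random_genotype(2, 8)
for _ in range(500):
    children = [random_genotype(2, 8) for _ in range(4)]
    parent = max(children + [parent], key=lambda g: fitness(g, table))
```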
992.
Minimal attribute reduction plays an important role in rough set theory. Heuristic algorithms have been proposed in the literature to find a minimal reduction, yet an unresolved issue is that the discernibility matrix contains many redundant non-empty elements, namely duplicates and supersets. To eliminate this redundancy and these pointless elements, this paper proposes a compactness discernibility information tree (CDI-tree). The CDI-tree maps non-empty elements onto paths and allows numerous non-empty elements to share a common prefix; it is thus a compact structure for storing the non-empty elements of a discernibility matrix. A complete algorithm for computing a Pawlak reduction based on the CDI-tree is presented. The experimental results reveal that the proposed algorithm is more efficient than the benchmark algorithms at finding a minimal attribute reduction.
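A minimal prefix-tree sketch of the CDI-tree idea: discernibility-matrix elements (attribute sets) are inserted as sorted paths so that duplicates collapse onto one path and common prefixes are shared. The naive subset test used here to discard supersets is an illustration, not the paper's pruning rule:

```python
class Node:
    def __init__(self):
        self.children = {}
        self.terminal = False      # an element ends at this node

class CDITree:
    def __init__(self):
        self.root = Node()
        self.elements = []         # kept only for the naive superset test

    def insert(self, attrs):
        attrs = tuple(sorted(attrs))
        # if a stored element is a subset, attrs is a redundant superset
        if any(set(e) <= set(attrs) for e in self.elements):
            return
        self.elements.append(attrs)
        node = self.root
        for a in attrs:            # shared prefixes reuse existing nodes
            node = node.children.setdefault(a, Node())
        node.terminal = True

tree = CDITree()
for element in [{'a', 'b'}, {'a', 'b'}, {'a', 'b', 'c'}, {'b', 'd'}]:
    tree.insert(element)           # the duplicate and the superset are dropped
```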
993.
In this study, a novel online support vector regressor (SVR) controller, based on a system model estimated by a separate online SVR, is proposed. The main idea is to obtain an SVR controller from the estimated model of the system by optimizing the margin between the reference input and the system output. For this purpose, a "closed-loop margin" that depends on the tracking error is defined; the parameters of the SVR controller are then optimized so as to optimize the closed-loop margin and minimize the tracking error. The closed-loop margin is constructed using the model of the system estimated by an online SVR, and the parameters of the SVR controller are adjusted via this SVR model. The stability of the closed-loop system is also analyzed. The performance of the proposed method is evaluated in simulations of a continuously stirred tank reactor (CSTR) and a bioreactor, and the results show that the SVR model and SVR controller attain good modeling and control performance.
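A toy sketch of the overall idea: one regressor learns a plant model from observed data, and the controller parameter is adjusted to shrink the tracking error predicted through that model. The hypothetical first-order plant, the batch refit, and the finite-difference update stand in for the paper's true online SVR and margin-based optimization:

```python
import numpy as np
from sklearn.svm import SVR

def plant(y, u):                      # hypothetical first-order plant
    return 0.8 * y + 0.2 * np.tanh(u)

ref, y, gain = 1.0, 0.0, 0.5
X, T = [], []
for k in range(200):
    u = gain * (ref - y)              # P-type controller on tracking error
    X.append([y, u]); T.append(plant(y, u))
    y = T[-1]
    if k >= 20:
        # refit the model on recent data (stand-in for online SVR updates)
        model = SVR(kernel='rbf').fit(X[-100:], T[-100:])
        # finite-difference sensitivity of predicted squared error w.r.t. gain
        e = ref - y
        y_pert = model.predict([[y, (gain + 1e-3) * e]])[0]
        y_now = model.predict([[y, gain * e]])[0]
        grad = ((ref - y_pert) ** 2 - (ref - y_now) ** 2) / 1e-3
        gain -= 0.05 * grad           # gradient step through the model
```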
994.
Because most runoff time series with a limited amount of data are inherently nonlinear and stochastic and tend to show chaotic behavior, strategies based on chaotic analysis are popular for analyzing them as real nonlinear dynamical systems. No single prediction method for yearly rainfall-runoff forecasting achieves satisfactory performance on its own. Thus, a mixture strategy denoted WT-PSR-GA-NN, composed of wavelet transform (WT), phase space reconstruction (PSR), a neural network (NN), and a genetic algorithm (GA), is presented in this paper. In the WT-PSR-GA-NN framework, the time series gathered from Liujiang River runoff data is processed as follows: (1) the runoff time series is first decomposed into low-frequency and high-frequency sub-series by the wavelet transform; (2) the two sub-series are separately and independently reconstructed into phase spaces; (3) the transformed time series in the reconstructed phase spaces are modeled by a neural network trained with a genetic algorithm to avoid trapping in local minima; (4) the predictions for the low-frequency parts are combined with those for the high-frequency parts and reconstructed by the inverse wavelet transform to form the forecast of future runoff. Experiments show that WT-PSR-GA-NN is effective and highly accurate not only on short-term yearly hydrological time series but also on long-term ones. The comparison results reveal that the overall forecasting performance of WT-PSR-GA-NN is superior to popular alternative methods on all test cases. We conclude that WT-PSR-GA-NN not only increases forecasting accuracy but is also competitive in efficiency, effectiveness, and robustness.
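A sketch of the first two stages of the pipeline, assuming PyWavelets is available. The wavelet family, decomposition level, embedding dimension, and delay are illustrative choices, and the synthetic series stands in for the Liujiang River data; the GA-trained NN stage is omitted:

```python
import numpy as np
import pywt

runoff = np.random.rand(512)          # stand-in for the runoff series

# (1) one-level DWT: approximation = low frequency, detail = high frequency
low, high = pywt.dwt(runoff, 'db4')

# (2) delay embedding: x_t -> (x_t, x_{t-tau}, ..., x_{t-(m-1)tau})
def embed(x, m=3, tau=2):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

X_low, X_high = embed(low), embed(high)
# (3)-(4): each embedded sub-series would be modeled by a GA-trained NN,
# and the two predictions recombined via the inverse transform (pywt.idwt).
```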
995.
Variations in the pose and illumination of face images increase data uncertainty in face recognition. Synthesized mirror samples can be regarded as representations of the left–right deflection of the pose or illumination of a face, and symmetrical face images generated from the original images provide additional observations of the face, which is useful for improving recognition accuracy. In this paper, to the best of our knowledge, the well-known minimum squared error classification (MSEC) algorithm is for the first time applied to face recognition on a face database extended with synthesized mirror training samples, a method termed extended minimum squared error classification (EMSEC). By modifying the MSE classification rule, we append the mirror samples to the training set to gain better classification performance. First, the original training samples and the mirror samples synthesized from them are merged per subject into a mixed training set. Second, EMSEC uses the mixed training samples to obtain the projection matrix that best transforms them into predefined class labels. Third, the projection matrix is applied simultaneously to the test sample and to its nearest neighbor in the mixed training set. Finally, the test sample is classified by combining the transform results of the test sample and its nearest neighbor. As an extension of MSEC, EMSEC reduces the uncertainty of the face observation through the auxiliary mirror samples, and therefore classifies more robustly than traditional MSEC. Experimental results on the ORL, GT, and FERET databases show that EMSEC has better generalization ability than traditional MSEC.
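A minimal sketch of the EMSEC steps: mirror each training face by a horizontal flip, merge the two sets, solve a regularized minimum squared error projection onto one-hot labels, and fuse the transforms of the test sample and its nearest neighbor. The ridge term and the additive fusion of the two score vectors are illustrative assumptions:

```python
import numpy as np

def emsec_train(faces, labels, n_classes, lam=1e-3):
    # faces: (n, h, w) array; mirroring is a horizontal flip
    mirrored = faces[:, :, ::-1]
    X = np.concatenate([faces, mirrored]).reshape(2 * len(faces), -1)
    Y = np.eye(n_classes)[np.concatenate([labels, labels])]
    # regularized least squares: W = (X^T X + lam*I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W, X

def emsec_predict(W, X_train, x):
    x = x.reshape(-1)
    t = x @ W                                  # transform of the test sample
    nn = X_train[np.argmin(((X_train - x) ** 2).sum(1))]
    t_nn = nn @ W                              # transform of its nearest neighbor
    return np.argmax(t + t_nn)                 # simple fusion of the two scores
```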
996.
Attribute proofs in anonymous credential systems are an effective way to balance security and privacy in user authentication; however, the linear complexity of attribute proofs keeps existing anonymous credential systems far from practical, especially on resource-limited smart devices. For efficiency, we present a novel pairing-based anonymous credential system that removes the linear complexity of attribute proofs by building on an aggregate signature scheme. We propose two extended signature schemes, BLS+ and BGLS+, as cryptographic building blocks for constructing anonymous credentials in the random oracle model. Identity-like information about the message holder is encoded in each signature so that the holder can prove possession of the input message along with the validity of the signature. We present an issuance protocol for anonymous credentials embedding weak attributes, i.e., attributes that cannot by themselves identify a user in a population. Users can prove any combination of attributes at once by aggregating the corresponding individual credentials into one. Attribute-proof protocols for AND and OR relations over multiple attributes are also given. The performance analysis shows that the aggregation-based anonymous credential system outperforms both the conventional Camenisch–Lysyanskaya pairing-based system and the accumulator-based system when proving AND and OR relations over multiple attributes, and that the credential and public parameters are shorter as well.
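A sketch of the aggregation idea, using the standard IETF BLS scheme shipped with py_ecc rather than the paper's BLS+/BGLS+ variants (which additionally bind holder identity into each signature). One signature per attribute credential is aggregated into a single constant-size value that proves an AND over several attributes at once:

```python
from py_ecc.bls import G2Basic as bls

sk = bls.KeyGen(b'0' * 32)              # issuer key (insecure demo seed)
pk = bls.SkToPk(sk)

attributes = [b'age>18', b'country=DE', b'member=gold']
credentials = [bls.Sign(sk, a) for a in attributes]   # one per attribute

proof = bls.Aggregate(credentials)      # prove all three attributes at once
assert bls.AggregateVerify([pk] * 3, attributes, proof)
```

The point of aggregation is that the proof stays one group element regardless of how many attributes are combined, which is what removes the linear cost.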
997.
File format vulnerabilities have drawn increasing attention in recent years, and the performance of fuzzing relies heavily on knowledge of the target formats. In this paper, we present systematic algorithms and methods for automatically reverse engineering input file formats. The methodology employs dynamic taint analysis to reveal implicit relational information between the input file and the binary's procedures, which is then used to measure correlations among data bytes, segment the format, and infer data types. We have implemented a prototype; in general tests on 10 well-published binary formats it achieved an average identification success rate of over 85%, while uncovering structural information more detailed than coarse-grained format analysis. In addition, a practical pseudo-fuzzing evaluation method is discussed in line with real-world demands of security analysis, and the evaluation results demonstrate the practical effectiveness of our system.
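A toy sketch of the segmentation step: given a hypothetical taint trace mapping each input-file offset to the set of procedures that consumed it, adjacent offsets touched by the same procedures are merged into one field. Real dynamic taint analysis is far more involved; this only illustrates the byte-correlation measure:

```python
trace = {                      # offset -> procedures observed reading it
    0: {'parse_magic'}, 1: {'parse_magic'},
    2: {'read_len'}, 3: {'read_len'}, 4: {'read_len'}, 5: {'read_len'},
    6: {'copy_payload'}, 7: {'copy_payload'},
}

def segment(trace):
    fields, start = [], 0
    offsets = sorted(trace)
    for i in range(1, len(offsets) + 1):
        # close a field when the consumer set changes (or input ends)
        if i == len(offsets) or trace[offsets[i]] != trace[offsets[start]]:
            fields.append((offsets[start], offsets[i - 1],
                           sorted(trace[offsets[start]])))
            start = i
    return fields

print(segment(trace))
# [(0, 1, ['parse_magic']), (2, 5, ['read_len']), (6, 7, ['copy_payload'])]
```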
998.
Evolutionary multi-objective optimization algorithms generally generate Pareto optimal solutions by exploring the search space. To enhance performance, exploration by global search can be complemented with exploitation by local search. In this paper, we address the issues in integrating local search with global search: how to select individuals for local search, how deep the local search should go, and how to combine multiple objectives into a single objective for local search. We introduce a Preferential Local Search mechanism to further fine-tune the globally optimal solutions, and an adaptive weight mechanism for combining the objectives. These ideas are integrated into NSGA-II to arrive at a new memetic algorithm for solving multi-objective optimization problems. The proposed algorithm is applied to a suite of constrained and unconstrained multi-objective benchmark tests, and its performance is analyzed with metrics such as Generational Distance, Spread, Max Spread, and Hypervolume Ratio. Statistical tests applied to the results suggest that the proposed algorithm outperforms state-of-the-art multi-objective algorithms such as NSGA-II and SPEA2. To study its performance on a real-world application, Economic Emission Load Dispatch was also taken up for validation, evaluated with the Hypervolume and Set Coverage metrics. The experimental results substantiate that our algorithm can solve real-world problems like Economic Emission Load Dispatch and produces better solutions than NSGA-II, SPEA2, and traditional memetic algorithms with fixed local search steps.
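A sketch of the local search step: a preferred non-dominated individual is refined by hill-climbing on a weighted sum of the objectives. The selection of which individuals to refine, the step size, and the bounded depth are illustrative assumptions; the paper adapts the weights per individual rather than fixing them:

```python
import numpy as np

def local_search(x, objectives, weights, steps=10, sigma=0.05, rng=None):
    rng = rng or np.random.default_rng()
    def scalar(z):                     # weighted aggregation of objectives
        return sum(w * f(z) for w, f in zip(weights, objectives))
    best, best_val = x, scalar(x)
    for _ in range(steps):             # search depth is bounded
        cand = best + rng.normal(0, sigma, size=len(best))
        if (v := scalar(cand)) < best_val:
            best, best_val = cand, v
    return best

# After an NSGA-II generation, refine e.g. extreme and knee solutions:
f1 = lambda z: (z ** 2).sum()
f2 = lambda z: ((z - 1) ** 2).sum()
x_refined = local_search(np.array([0.3, 0.7]), [f1, f2], [0.5, 0.5])
```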
999.
Support vector machines are popular machine learning techniques, and their variant least squares support vector regression (LS-SVR) is effective for image denoising. However, because the training phase fits samples contaminated by noise, LS-SVR does not work well when the noise level differs substantially from that seen in training or when the noise density is high. Type-2 fuzzy sets and systems have been shown to be a promising way to represent such uncertainties, and the various noises in scene images can be treated as uncertainties. By integrating the design of learning weights with type-2 fuzzy sets, a systematic design methodology for an interval type-2 fuzzy density weighted support vector regression (IT2FDW-SVR) model for scene denoising is presented to address sample uncertainty in scene images. A novel strategy, analogous to the use of human experience, is used to design the learning weights. To handle the uncertainty of sample density, an interval type-2 fuzzy logic system (IT2FLS) is employed to deduce the fuzzy learning weights (IT2FDW) in IT2FDW-SVR, which extends previously proposed weighted SVR. Extensive experimental results demonstrate that the proposed method achieves better performance, in both objective and subjective evaluations, than state-of-the-art denoising techniques.
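A sketch of density-weighted LS-SVR: each training sample gets a weight v_i that scales its regularization, and the dual linear system is solved in closed form. Here a simple kernel-density heuristic stands in for the paper's interval type-2 fuzzy logic system:

```python
import numpy as np

def rbf(X, Z, s=1.0):
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def wlssvr_fit(X, y, v, gamma=10.0):
    n = len(y)
    K = rbf(X, X)
    # LS-SVM dual system: [[0, 1^T], [1, K + diag(1/(gamma*v))]] [b; a] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * v))   # per-sample weighting
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                        # bias b, coefficients alpha

X = np.random.rand(100, 2)
y = np.sin(X[:, 0] * 6) + 0.1 * np.random.randn(100)
dens = rbf(X, X).sum(1)                           # heuristic local density
v = dens / dens.max()                             # dense samples weigh more
b, alpha = wlssvr_fit(X, y, v)
y_hat = rbf(X, X) @ alpha + b                     # in-sample prediction
```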
1000.
Knowledge representation using interval-valued fuzzy formal concept lattice (cited 1 time: 0 self-citations, 1 citation by others)
Formal concept analysis (FCA) is a mathematical framework for data analysis and processing tasks. Based on lattice and order theory, FCA derives conceptual hierarchies from relational information systems. FCA has been extended from the crisp setting to the fuzzy environment; this extension handles the uncertain and vague information represented in a formal context whose entries are degrees from the scale [0, 1]. The present study analyzes the fuzziness in a given many-valued context, which is transformed into a fuzzy formal context, to provide insight into generating fuzzy formal concepts from it. Furthermore, since a major problem in FCA in the fuzzy setting is reducing the number of fuzzy formal concepts, and thereby simplifying the corresponding fuzzy concept lattice, this paper addresses the problem by linking an interval-valued fuzzy graph to the fuzzy concept lattice. For this purpose, we propose an algorithm for generating the interval-valued fuzzy formal concepts. To measure the weight of fuzzy formal concepts, an algorithm based on Shannon entropy is proposed. The knowledge represented by formal concepts using the interval-valued fuzzy graph is compared with the entropy-weighted fuzzy concepts at a chosen threshold.
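A minimal sketch of the entropy-based concept weighting: a fuzzy formal concept's extent is a set of membership degrees, and its weight is computed as the Shannon entropy of the normalized degrees. The normalization and the example concepts are illustrative assumptions:

```python
import math

def concept_weight(extent):
    # extent: membership degrees in [0, 1] of the objects the concept covers
    total = sum(extent)
    probs = [m / total for m in extent if m > 0]
    return -sum(p * math.log2(p) for p in probs)   # Shannon entropy

# two hypothetical concepts; flatter memberships carry higher entropy
print(concept_weight([0.9, 0.8, 0.85]))   # near-uniform extent
print(concept_weight([0.95, 0.1, 0.05]))  # dominated by one object
```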