11.
This paper presents fill algorithms for boundary-defined regions in raster graphics that require only constant-size working memory. The methods are based on so-called seed fill algorithms, which exploit the internal connectivity of the region with a given inner point. Basic methods, as well as additional heuristics for speeding up the algorithm, are described and verified. Empirical results are used to compare the time complexities of the algorithms for different classes of regions.
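For illustration, here is a minimal stack-based sketch of the classical seed fill idea for boundary-defined regions, i.e. the starting point the paper builds on; the function and parameter names are illustrative, and the constant-memory variants that are the paper's actual contribution are not reproduced.

```python
def boundary_fill(grid, seed, fill_value, boundary_value):
    """Iterative 4-connected boundary fill: colour every cell reachable
    from `seed` without crossing cells holding `boundary_value`.

    This stack-based variant needs working memory proportional to the
    region size; the constant-memory algorithms described in the paper
    are not reproduced here.
    """
    rows, cols = len(grid), len(grid[0])
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < rows and 0 <= c < cols):
            continue
        if grid[r][c] in (boundary_value, fill_value):
            continue
        grid[r][c] = fill_value
        # Push the four direct neighbours of the filled cell.
        stack.extend(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)))
    return grid
```

Either a stack or a queue can be used; in both cases the working memory grows with the region size, which is exactly the limitation the constant-memory algorithms in the paper remove.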
12.
Attribute selection with fuzzy decision reducts
Rough set theory provides a methodology for data analysis based on the approximation of concepts in information systems. It revolves around the notion of discernibility: the ability to distinguish between objects based on their attribute values. It makes it possible to infer data dependencies that are useful in the fields of feature selection and decision model construction. In many cases, however, it is more natural, and more effective, to consider a gradual notion of discernibility. Therefore, within the context of fuzzy rough set theory, we present a generalization of the classical rough set framework for data-based attribute selection and reduction using fuzzy tolerance relations. The paper unifies existing work in this direction and introduces the concept of fuzzy decision reducts, dependent on an increasing attribute subset measure. Experimental results demonstrate the potential of fuzzy decision reducts to discover shorter attribute subsets, leading to decision models with better coverage and comparable, or even higher, accuracy.
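As a point of reference, the sketch below computes the classical (crisp) rough-set dependency degree of a decision attribute on an attribute subset, i.e. the kind of measure that fuzzy decision reducts generalize; the data layout and names are illustrative and not taken from the paper.

```python
from collections import defaultdict

def dependency_degree(objects, attrs, decision):
    """Crisp rough-set dependency gamma_B(d): the fraction of objects
    whose indiscernibility class (w.r.t. the attribute subset `attrs`)
    is consistent in the decision attribute `decision`.

    `objects` is a list of dicts mapping attribute names to values.
    The fuzzy generalization via tolerance relations described in the
    paper is not reproduced here.
    """
    classes = defaultdict(list)
    for obj in objects:
        key = tuple(obj[a] for a in attrs)   # B-indiscernibility class
        classes[key].append(obj)
    positive = sum(len(cls) for cls in classes.values()
                   if len({o[decision] for o in cls}) == 1)
    return positive / len(objects)
```

An attribute subset with the same dependency degree as the full attribute set, and minimal with that property, is a (crisp) decision reduct; the paper replaces this all-or-nothing criterion with a gradual, fuzzy one.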
13.
Classifiers based on radial basis function neural networks have a number of useful properties that can be exploited in many practical applications. Using sample data, it is possible to adjust their parameters (weights), to optimize their structure, and to select appropriate input features (attributes). Moreover, interpretable rules can be extracted from a trained classifier, and input samples can be identified that cannot be classified with a sufficient degree of “certainty”. These properties support an analysis of radial basis function classifiers and allow for an adaptation to “novel” kinds of input samples in a real-world application. In this article, we outline these properties and show how they can be exploited in the field of intrusion detection (detection of network-based misuse). Intrusion detection plays an increasingly important role in securing computer networks. In this case study, we first compare the classification abilities of radial basis function classifiers, multilayer perceptrons, the neuro-fuzzy system NEFCLASS, decision trees, classifying fuzzy k-means, support vector machines, Bayesian networks, and nearest neighbor classifiers. Then, we investigate the interpretability and understandability of the best paradigms found in the previous step. We show how structure optimization and feature selection for radial basis function classifiers can be done by means of evolutionary algorithms and compare this approach to decision trees optimized using certain pruning techniques. Finally, we demonstrate that radial basis function classifiers are basically able to detect novel attack types. The many advantageous properties of radial basis function classifiers could certainly be exploited in other application fields in a similar way.
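As a rough illustration of the classifier family discussed here, the sketch below implements a bare-bones RBF network with fixed Gaussian centres and output weights fitted by least squares; the class name and training scheme are illustrative assumptions and do not correspond to the configuration used in the study.

```python
import numpy as np

class RBFClassifier:
    """Minimal RBF network: Gaussian hidden units at fixed centres,
    linear output weights fitted by least squares on one-hot targets."""

    def __init__(self, centres, width):
        self.centres = np.asarray(centres, dtype=float)  # (k, d) prototypes
        self.width = float(width)                        # shared kernel width
        self.weights = None

    def _phi(self, X):
        # Squared distances from every sample to every centre -> Gaussian activations.
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=int)
        Phi = self._phi(X)
        Y = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
        self.weights, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        return self

    def predict(self, X):
        return (self._phi(np.asarray(X, dtype=float)) @ self.weights).argmax(axis=1)
```

In practice the centres would be chosen by clustering or evolutionary structure optimization, as the abstract suggests; here they are simply passed in.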
14.
We present the language CRStL (Control Rule Strategy Language, pronounced “crystal”) for formulating mathematical reasoning techniques as proof strategies in the context of the proof assistant Ωmega. The language is arranged in two levels: a query language to access mathematical knowledge maintained in development graphs, and a strategy language to annotate the results of these queries with further control information. The two-level structure of the language allows proof techniques to be specified in a declarative way. We present the syntax and semantics of CRStL and illustrate its use by examples.
17.
Generating finite element discretizations with direct interface parameterizations constitutes a considerable computational expense in the case of complex interface geometries. The paper at hand introduces a B-spline finite element method that circumvents parameterization of interfaces and offers fast and easy meshing irrespective of the geometric complexity involved. Its core idea is the adaptive approximation of discontinuities by hierarchical grid refinement, which adds several levels of local basis functions in the close vicinity of interfaces, but unfitted to their exact location, so that a simple regular grid of knot span elements can be maintained. Numerical experiments show that an hp-refinement strategy, which simultaneously increases the polynomial degree of the B-spline basis and the levels of refinement around interfaces, achieves exponential rates of convergence despite the presence of discontinuities. It is also demonstrated that the hierarchical B-spline FEM can be used to transfer the recently introduced Finite Cell concept to geometrically nonlinear problems. Its computational performance, the imposition of unfitted boundary conditions, and fast hierarchical grid generation are illustrated for a set of benchmark problems in one, two, and three dimensions, and the advantages of the regular grid approach for complex geometries are demonstrated by the geometrically nonlinear simulation of a voxel-based foam composite.
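For readers unfamiliar with the underlying basis, the following sketch evaluates a one-dimensional B-spline basis function via the Cox-de Boor recursion; it shows only the basis evaluation, not the hierarchical refinement or finite element assembly described in the paper, and the function signature is an illustrative assumption.

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of degree p at parameter t, for a non-decreasing knot vector `knots`.

    Uses the half-open convention N_{i,0}(t) = 1 on [knots[i], knots[i+1]),
    so t equal to the final knot evaluates to 0.
    """
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left_den = knots[i + p] - knots[i]
    right_den = knots[i + p + 1] - knots[i + 1]
    # Convention 0/0 = 0 for repeated knots.
    left = 0.0 if left_den == 0 else \
        (t - knots[i]) / left_den * bspline_basis(i, p - 1, t, knots)
    right = 0.0 if right_den == 0 else \
        (knots[i + p + 1] - t) / right_den * bspline_basis(i + 1, p - 1, t, knots)
    return left + right
```

In the hierarchical setting described in the paper, several levels of such basis functions on successively finer regular knot grids are superposed near interfaces instead of fitting the mesh to the interface geometry.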
18.
In this article, a new model predictive control approach to nonlinear stochastic systems is presented. The approach is based on particle filters, which are usually used for estimating states or parameters. Here, two particle filters are combined: the first gives an estimate of the current state based on the current output of the system; the second gives an estimate of a control input for the system. This is done by adopting the basic model predictive control strategy for the second particle filter. Later in the paper, the new approach is applied to a CSTR (continuous stirred-tank reactor) example and to an inverted pendulum. These two examples show that our approach is also real-time capable.
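As background, here is a minimal sketch of a single bootstrap particle filter step (propagate, weight, resample); the process and likelihood models are placeholders, and the paper's specific coupling of two such filters for state estimation and control input selection is not reproduced.

```python
import numpy as np

def particle_filter_step(particles, observation, propagate, likelihood, rng):
    """One bootstrap particle filter step.

    particles  : (N, d) array of state samples
    propagate  : function x -> sampled next state (process model, placeholder)
    likelihood : function (y, x) -> p(y | x) up to a constant (placeholder)
    rng        : numpy Generator, e.g. np.random.default_rng()
    """
    # 1) Propagate every particle through the (stochastic) process model.
    predicted = np.array([propagate(x) for x in particles])
    # 2) Weight each particle by how well it explains the observation.
    weights = np.array([likelihood(observation, x) for x in predicted])
    weights /= weights.sum()
    # 3) Resample particles in proportion to their weights.
    idx = rng.choice(len(predicted), size=len(predicted), p=weights)
    return predicted[idx]
```

In the approach described here, one such filter tracks the state from the measured output, while a second filter, driven by the model predictive control objective, samples candidate control inputs in the same propagate-weight-resample fashion.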
19.
Automatic cluster detection in Kohonen's SOM.
Kohonen's self-organizing map (SOM) is a popular neural network architecture for solving problems in the fields of exploratory data analysis, clustering, and data visualization. One of the major drawbacks of the SOM algorithm is the difficulty nonexpert users have in interpreting the information contained in a trained SOM. In this paper, this problem is addressed by introducing an enhanced version of the Clusot algorithm. This algorithm consists of two main steps: 1) the computation of the Clusot surface, utilizing the information contained in a trained SOM, and 2) the automatic detection of clusters in this surface. In the Clusot surface, clusters present in the underlying SOM are indicated by the local maxima of the surface. For SOMs with a 2-D topology, the Clusot surface can therefore be considered a convenient visualization technique. The presented approach is, however, not restricted to a certain type of 2-D SOM topology; it is also applicable to SOMs with an n-dimensional grid topology.
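A simple sketch of the second step, detecting local maxima in a 2-D surface, is given below; how the Clusot surface itself is computed from a trained SOM is not reproduced, and the threshold parameter is an illustrative assumption.

```python
import numpy as np

def local_maxima(surface, threshold=0.0):
    """Return indices (i, j) of cells that are strict local maxima of a
    2-D surface and exceed `threshold`; in a Clusot-style surface such
    maxima indicate cluster centres."""
    s = np.asarray(surface, dtype=float)
    # Pad with -inf so border cells are compared only against real neighbours.
    padded = np.pad(s, 1, mode="constant", constant_values=-np.inf)
    maxima = []
    for i in range(s.shape[0]):
        for j in range(s.shape[1]):
            neigh = padded[i:i + 3, j:j + 3].copy()
            neigh[1, 1] = -np.inf            # exclude the cell itself
            if s[i, j] > threshold and s[i, j] > neigh.max():
                maxima.append((i, j))
    return maxima
```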
20.
The ‘will, skill, tool’ model is a well-established theoretical framework that elucidates the conditions under which teachers are most likely to employ information and communication technologies (ICT) in the classroom. Past studies have shown that these three factors explain a very high degree of variance in the frequency of classroom ICT use. The present study replicates past findings using a different set of measures and homes in on possible subfactors. Furthermore, the study examines teachers’ affinity for constructivist-style teaching, which is often considered to facilitate the pedagogical use of digital media. The study’s survey of 357 Swiss secondary school teachers reveals significant positive correlations between will, skill, and tool variables and the combined frequency and diversity of technology use in teaching. A multiple linear regression model was used to identify relevant subfactors. Five factors account for a total of 60% of the explained variance in the intensity of classroom ICT use. Computer and Internet applications are used more often by teachers in the classroom when: (1) teachers consider themselves to be more competent in using ICT for teaching; (2) more computers are readily available; (3) the teacher is a form teacher responsible for the class; (4) the teacher is more convinced that computers improve student learning; and (5) the teacher more often employs constructivist forms of teaching and learning. The impact of constructivist teaching was small, however.
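For illustration only, the sketch below fits an ordinary least squares model with an intercept and reports R², the kind of multiple linear regression reported in the study; the predictor columns and names (e.g. perceived ICT competence, computer availability, constructivist teaching score) are placeholders, not the study's data.

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares with an intercept.

    X : (n, p) matrix of predictor columns (placeholder subfactors)
    y : (n,) vector of the outcome (intensity of classroom ICT use)
    Returns the coefficient vector (intercept first) and R^2.
    """
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    ss_res = float(residuals @ residuals)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot
```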