51.
This study examines research published in the first 24 years of the Information Systems Journal's (ISJ) publication history, using a thematic space of all information systems (IS) research as the backdrop. To that end, abstracts from all contributing articles published in eight prominent IS journals in the period 1991–2014 were analysed to extract a latent semantic space of five broad research areas. A two-dimensional projection of the results was used to create a two-by-two map, where one dimension represents the European vs. North American style of IS research and the other represents a micro vs. macro level of IS research. The ISJ is positioned in the 'micro and European school' quadrant. Over the course of the journal's first 24 years, research in the ISJ started with a relative focus on the IT artefact and IS development and gradually moved towards a more balanced position that includes a considerable amount of research on IT for teamwork and collaboration, as well as on IT and individuals.
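A hypothetical sketch of the latent-semantic-space step the abstract describes: TF-IDF over abstracts followed by truncated SVD (classic LSA), projected to two dimensions for a quadrant map. The corpus below is a made-up placeholder for the eight-journal 1991–2014 collection, and the study extracted five broad areas before projecting; only the mechanics are illustrated.

```python
# Hypothetical sketch: TF-IDF + truncated SVD (LSA) over abstracts,
# projected to 2D for a quadrant map. Corpus contents are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

abstracts = [
    "technology acceptance by individual users in organisations",
    "information systems development methods and the IT artefact",
    "IT support for teamwork collaboration and virtual teams",
    "strategic alignment of IS at the organisational macro level",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
coords = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

for text, (x, y) in zip(abstracts, coords):
    print(f"({x:+.2f}, {y:+.2f})  {text[:45]}")
```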
52.
Research on elderly people's ICT acceptance and use often relies on the technology acceptance model (TAM) framework, but has been mostly limited to task-oriented uses. This article expands approaches in technology acceptance and use by developing a model to explain entertainment-related uses of new media technology by elderly people. On a theoretical level, we expand the TAM perspective by adding concepts that act as barriers and/or facilitators of technology acceptance, namely technophobia, self-efficacy, and previous experience and expertise with technology. We develop an expanded TAM by testing the role of these concepts in two studies on entertainment media technology. In Study 1, we investigate behavioural intention to use 3D cinema among N = 125 German elderly media users (age 50+). In Study 2, we focus on the actual use of a computer game simulation by N = 115 German and US elderly media users (age 50+). Findings in both studies point towards the central role of perceived usefulness, here modelled as enjoyment, as the reason for elderly people's use and acceptance of entertainment media technology. Perceived ease of use is seen as a precondition for enjoyment, particularly for interactive media.
53.
54.
This short note considers and resolves the apparent contradiction between known worst-case complexity results for first- and second-order methods for solving unconstrained smooth nonconvex optimization problems and a recent note by Jarre [On Nesterov's smooth Chebyshev–Rosenbrock function, Optim. Methods Softw. (2011)] implying a very large lower bound on the number of iterations required to reach the solution's neighbourhood for a specific problem with variable dimension.
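For orientation, the first-order worst-case bound at issue can be stated as follows; this is the textbook result for gradient descent on smooth nonconvex problems, not a formula taken from the note itself.

```latex
% For f with L-Lipschitz gradient and lower bound f_low, gradient
% descent with step 1/L satisfies:
\[
\min_{0 \le k < K} \|\nabla f(x_k)\| \le \varepsilon
\quad \text{once} \quad
K \ge \frac{2L\,\bigl(f(x_0) - f_{\mathrm{low}}\bigr)}{\varepsilon^{2}},
\]
% i.e. O(eps^{-2}) iterations to an eps-approximate first-order point.
```

Note that such bounds guarantee a small gradient norm, not proximity to a minimiser, which is one way an apparent contradiction with iteration counts "to reach the solution's neighbourhood" can dissolve.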
55.
A commonly used model for fault-tolerant computation is that of cellular automata. The essential difficulty of fault-tolerant computation is present in the special case of simply remembering a bit in the presence of faults, and that is the case we treat in this paper. We are concerned with the degree (the number of neighboring cells on which the state transition function depends) needed to achieve fault tolerance when the fault rate is high (nearly 1/2). We consider both the traditional transient fault model (where faults occur independently in time and space) and a recently introduced combined fault model which also includes manufacturing faults (which occur independently in space, but which affect cells for all time). We also consider both a purely probabilistic fault model (in which the states of cells are perturbed at exactly the fault rate) and an adversarial model (in which the occurrence of a fault gives control of the state to an omniscient adversary). We show that there are cellular automata that can tolerate a fault rate 1/2 − ξ (with ξ > 0) with degree O((1/ξ²) log(1/ξ)), even with adversarial combined faults. The simplest such automata are based on infinite regular trees, but our results also apply to other structures (such as hyperbolic tessellations) that contain infinite regular trees. We also obtain a lower bound of Ω(1/ξ²), even with only purely probabilistic transient faults.
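A toy simulation in the spirit of the tree-based construction, assuming it reduces to recursive majority voting over noisy copies of the bit; the degree, depth, and fault rate below are arbitrary illustrative choices, not the paper's parameters.

```python
# Toy sketch: recursive majority voting on a regular tree as a way to
# remember one bit under random transient faults. Parameters are
# illustrative only, not the paper's values.
import random

def noisy(bit: int, rate: float) -> int:
    """Flip the bit with probability `rate` (a transient fault)."""
    return bit ^ (random.random() < rate)

def root_state(bit: int, degree: int, depth: int, rate: float) -> int:
    """Majority over `degree` noisy recursive copies; odd degree avoids ties."""
    if depth == 0:
        return noisy(bit, rate)
    votes = sum(root_state(bit, degree, depth - 1, rate) for _ in range(degree))
    return noisy(int(2 * votes > degree), rate)

random.seed(0)
trials = 200
ok = sum(root_state(1, degree=5, depth=5, rate=0.15) == 1 for _ in range(trials))
print(f"bit preserved at the root in {ok}/{trials} trials")
```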
56.
Mutual Information (MI) is a popular similarity measure for image registration via function optimisation. This work proposes an inverse compositional formulation of MI for Levenberg-Marquardt optimisation. This yields a constant Hessian, which may be pre-computed. Speed improvements of 15% were obtained, with convergence accuracies similar to those of the standard formulation.
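A minimal sketch of the MI cost function that such registration methods optimise, using a simple joint-histogram estimator; this shows the similarity measure only, not the paper's inverse compositional Levenberg-Marquardt machinery.

```python
# Minimal sketch: mutual information between two images estimated from
# a joint intensity histogram. The optimiser itself is not shown.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                  # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of b
    nz = p_ab > 0                             # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
print(mutual_information(a, a))   # high: identical images
print(mutual_information(a, b))   # near zero: independent noise
```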
57.
We present here a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By “repairing the topology”, we mean a systematic way of modifying a given binary image in order to produce a similar binary image which is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only if, the square faces shared by background and foreground voxels form a 2D manifold. Well-composed images enjoy some special properties which can make such images very desirable in practical applications. For instance, well-known algorithms for extracting surfaces from and thinning binary images can be simplified and optimized for speed if the input image is assumed to be well-composed. Furthermore, some algorithms for computing surface curvature and extracting adaptive triangulated surfaces, directly from the binary data, can only be applied to well-composed images. Finally, we introduce an extension of the aforementioned algorithm to repairing 3D digital multivalued images. Such an algorithm finds application in repairing segmented images resulting from multi-object segmentations of other 3D digital multivalued images.
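To make "well-composed" concrete: in the standard characterisation (due to Latecki), a 3D binary image is well-composed if and only if it contains neither of two critical configurations, which the sketch below searches for. This is a checker only; the paper's randomized repair procedure is not reproduced.

```python
# Sketch: detect the critical configurations that make a 3D binary
# image fail to be well-composed:
#   C1 -- a 2x2 square, in any axis-aligned plane, whose foreground
#         is exactly one diagonal pair;
#   C2 -- a 2x2x2 cube whose foreground (or background) is exactly
#         one antipodal corner pair.
import numpy as np

def has_c1(img: np.ndarray) -> bool:
    for axis in range(3):                        # planes normal to each axis
        v = np.moveaxis(img, axis, 0)
        a, b = v[:, :-1, :-1], v[:, 1:, 1:]      # one diagonal of the square
        c, d = v[:, :-1, 1:], v[:, 1:, :-1]      # the other diagonal
        if np.any((a == b) & (c == d) & (a != c)):
            return True
    return False

def has_c2(img: np.ndarray) -> bool:
    X, Y, Z = img.shape
    cube = np.stack([img[dx:X - 1 + dx, dy:Y - 1 + dy, dz:Z - 1 + dz]
                     for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)])
    total = cube.sum(axis=0)
    for i, j in [(0, 7), (1, 6), (2, 5), (3, 4)]:    # antipodal corners
        fg = (total == 2) & (cube[i] == 1) & (cube[j] == 1)
        bg = (total == 6) & (cube[i] == 0) & (cube[j] == 0)
        if np.any(fg | bg):
            return True
    return False

img = np.zeros((4, 4, 4), dtype=int)
img[1, 1, 1] = img[2, 2, 1] = 1                  # edge-sharing diagonal pair
print(has_c1(img) or has_c2(img))                # True -> not well-composed
```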
58.
This paper describes the development and validation of the Australian Land Erodibility Model (AUSLEM), designed to predict land susceptibility to wind erosion in western Queensland, Australia. The model operates at a 5 × 5 km spatial resolution on a daily time-step, with inputs of grass and tree cover, soil moisture, soil texture and surficial stone cover. The system was implemented to predict land erodibility, i.e. susceptibility to wind erosion, for the period 1980–1990. Model performance was evaluated using cross-correlation analyses to compare trajectories of mean annual land erodibility at selected locations with trends in wind speed and with observational records of dust events and a Dust Storm Index (DSI). The validation was conducted at four spatial length scales from 25 to 150 km, using windows to represent potential dust source areas centered on and positioned around eight meteorological stations within the study area. The predicted land erodibility had strong correlations with dust-event frequencies at half of the stations. Poor correlations at the other stations were linked to the model's inability to account for temporal changes in soil erodibility, and to the comparison of regional erodibility trends against dust events whose source areas lie outside the regions of interest. The model's agreement with dust-event frequency trends varied across spatial scales and was highly dependent on land-type characteristics around the stations and on the types of dust events used for validation.
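A sketch of the kind of lagged cross-correlation check described, assuming annual series of predicted erodibility and observed dust-event counts; the data are synthetic stand-ins, and the `lagged_correlations` helper is a hypothetical simplification of the paper's analysis.

```python
# Sketch: lagged cross-correlation between a modelled annual land
# erodibility series and observed dust-event counts. Data are synthetic
# stand-ins for the 1980-1990 model output and station records.
import numpy as np

def lagged_correlations(model: np.ndarray, obs: np.ndarray, max_lag: int = 2):
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            m, o = model[:lag], obs[-lag:]
        elif lag > 0:
            m, o = model[lag:], obs[:-lag]
        else:
            m, o = model, obs
        out[lag] = float(np.corrcoef(m, o)[0, 1])
    return out

rng = np.random.default_rng(1)
erodibility = rng.random(11)                          # 1980-1990, one value/yr
dust_events = 3 * erodibility + rng.normal(0, 0.3, 11)
for lag, r in lagged_correlations(erodibility, dust_events).items():
    print(f"lag {lag:+d}: r = {r:+.2f}")
```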
59.
This paper focuses on hierarchical classification problems where the classes to be predicted are organized in the form of a tree. The standard top-down divide-and-conquer approach for hierarchical classification consists of building a hierarchy of classifiers, where a classifier is built for each internal (non-leaf) node in the class tree. Each classifier discriminates only between its child classes. After the tree of classifiers is built, the system uses them to classify test examples one class level at a time, so that when an example is assigned a class at a given level, only the child classes need to be considered at the next level. This approach has the drawback that, if a test example is misclassified at a certain class level, it will be misclassified at deeper levels too. In this paper we propose hierarchical classification methods to mitigate this drawback. More precisely, we propose a method called hierarchical ensemble of hierarchical rule sets (HEHRS), where different ensembles are built at different levels in the class tree and each ensemble consists of different rule sets built from training examples at different levels of the class tree. We also use a particle swarm optimisation (PSO) algorithm to optimise the rule weights used by HEHRS to combine the predictions of different rules into a class to be assigned to a given test example. In addition, we propose a variant of a method to mitigate the aforementioned drawback of top-down classification. These three types of methods are compared against the standard top-down hierarchical classification method on six challenging bioinformatics datasets involving the prediction of protein function. Overall, HEHRS with rule weights optimised by the PSO algorithm obtains the best predictive accuracy of the four types of hierarchical classification method.
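A minimal sketch of the standard top-down scheme the paper builds on — one classifier per internal node routing each example downwards, level by level. The tiny class tree, data, and per-node logistic models are hypothetical; HEHRS and the PSO weight optimisation are not reproduced here.

```python
# Sketch: standard top-down hierarchical classification. One classifier
# per internal node routes an example to a child class, level by level.
# Tree, data and per-node models are hypothetical stand-ins.
from sklearn.linear_model import LogisticRegression

# internal node -> child classes (leaves do not appear as keys)
tree = {"root": ["enzyme", "transporter"],
        "enzyme": ["hydrolase", "ligase"]}

def train_node(node, X, paths):
    """Fit a classifier over `node`'s children on examples passing through it."""
    rows = [i for i, p in enumerate(paths) if node in p]
    y = [next(c for c in paths[i] if c in tree[node]) for i in rows]
    return LogisticRegression().fit([X[i] for i in rows], y)

def predict(x, models):
    node = "root"
    while node in tree:                  # descend until a leaf class
        node = models[node].predict([x])[0]
    return node

X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3], [0.7, 0.2], [0.15, 0.85]]
paths = [["root", "enzyme", "hydrolase"], ["root", "enzyme", "ligase"],
         ["root", "transporter"], ["root", "transporter"],
         ["root", "transporter"], ["root", "enzyme", "ligase"]]

models = {node: train_node(node, X, paths) for node in tree}
print(predict([0.2, 0.9], models))       # routed root -> enzyme -> leaf
```

Note the drawback the paper targets is visible in `predict`: once the root-level model picks the wrong child, every deeper decision is confined to the wrong subtree.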
60.
Wireless sensor networks are increasingly seen as a solution to the problem of performing continuous wide-area monitoring in many environmental, security, and military scenarios. The distributed nature of such networks and the autonomous behavior expected of them present many novel challenges. In this article, the authors argue that a new synthesis of electronic engineering and agent technology is required to address these challenges, and they describe three examples where this synthesis has succeeded. In more detail, they describe how these novel approaches address the need for communication- and computationally efficient decentralized algorithms to coordinate the behavior of physically distributed sensors, how they enable the real-world deployment of sensor agent platforms in the field, and finally, how they facilitate the development of intelligent agents that can autonomously acquire data from these networks and perform information processing tasks such as fusion, inference, and prediction.