71.
Evolution-in-materio uses evolutionary algorithms to exploit the properties of materials to solve computational problems without requiring a detailed understanding of those properties. We show that, using a purpose-built hardware platform called Mecobo, it is possible to solve computational problems by evolving the voltages and signals applied to an electrode array covered with a carbon nanotube–polymer composite. We demonstrate for the first time that this methodology can be applied to function optimization and to the tone discriminator problem (TDP). For function optimization, we evaluate the approach on a suite of optimization benchmarks and obtain results that in some cases come very close to the global optimum or are comparable with those obtained using well-known software-based evolutionary approaches. We also obtain good results on the tone discriminator problem in comparison with prior work. For the TDP we additionally investigated the relative merits of different mixtures of materials and organizations of the electrode array.
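The evolutionary loop described above can be sketched in software. The snippet below is a minimal (1+1) evolution strategy over a vector of electrode voltages; the Mecobo hardware evaluation is replaced by a hypothetical software stand-in (the sphere benchmark), since the actual material response is the whole point of evolution-in-materio and cannot be reproduced here.

```python
import random

# Hypothetical stand-in for the Mecobo hardware: maps a configuration of
# electrode voltages to a fitness value. The sphere benchmark is used so the
# sketch runs without the physical material (lower is better).
def evaluate(voltages):
    return sum(v * v for v in voltages)

def evolve(n_electrodes=8, generations=200, sigma=0.3, seed=1):
    rng = random.Random(seed)
    parent = [rng.uniform(-5.0, 5.0) for _ in range(n_electrodes)]
    best = evaluate(parent)
    for _ in range(generations):
        # Mutate every voltage with Gaussian noise.
        child = [v + rng.gauss(0.0, sigma) for v in parent]
        f = evaluate(child)
        if f <= best:  # (1+1)-ES: keep the child if it is no worse
            parent, best = child, f
    return parent, best

config, fitness = evolve()
print(fitness)
```

On real hardware, `evaluate` would apply the candidate voltages to the electrode array and measure how well the material's response solves the target problem.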
72.
Sentiment analysis in text mining is a challenging task. Sentiment is subtly reflected by the tone and affective content of a writer’s words. Conventional text mining techniques, which are based on keyword frequencies, usually fall short of accurately detecting such subjective information implied in the text. In this paper, we evaluate several popular classification algorithms together with three filtering schemes. The filtering schemes progressively shrink the original dataset with respect to the contextual polarity and frequent terms of a document; we call this approach “hierarchical classification”. The effects of the approach under different combinations of classification algorithms and filtering schemes are discussed over three sets of controversial online news articles, with both binary and multi-class classification applied. We also test the hierarchical classification model with two methods and compare their results.
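The filter-then-classify idea can be illustrated with a toy pipeline: a filtering step first shrinks each document to the terms assumed to carry polarity, then a conventional classifier (here a minimal multinomial Naive Bayes) is trained on the filtered text. The lexicon, documents, and labels below are illustrative inventions, not the paper's data or its actual filtering schemes.

```python
from collections import Counter
import math

# Hypothetical polarity lexicon: the filtering step keeps only these terms.
POLAR_TERMS = {"good", "great", "excellent", "bad", "awful", "terrible"}

def filter_doc(text):
    return [w for w in text.lower().split() if w in POLAR_TERMS]

class NaiveBayes:
    def fit(self, docs, labels):
        self.counts = {}               # label -> term Counter
        self.priors = Counter(labels)
        for doc, y in zip(docs, labels):
            self.counts.setdefault(y, Counter()).update(doc)
        self.vocab = {t for c in self.counts.values() for t in c}
        return self

    def predict(self, doc):
        def score(y):
            c, n = self.counts[y], sum(self.counts[y].values())
            s = math.log(self.priors[y])
            for t in doc:              # Laplace smoothing
                s += math.log((c[t] + 1) / (n + len(self.vocab)))
            return s
        return max(self.priors, key=score)

train = ["great excellent movie", "good great plot",
         "awful terrible acting", "bad awful story"]
labels = ["pos", "pos", "neg", "neg"]
nb = NaiveBayes().fit([filter_doc(d) for d in train], labels)
print(nb.predict(filter_doc("a truly excellent good film")))  # → pos
```

Stacking further filters (e.g. keeping only frequent terms) before classification gives the hierarchical structure the paper evaluates.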
73.
Severe slugging flow is a persistent challenge in oil and gas production, especially in today's offshore-based production. Slugging flow can cause many problems, including those relating to production safety, fatigue, and capacity. As a typical phenomenon in multi-phase flow dynamics, the slug can be avoided or eliminated by proper facility design or by control of the operating conditions. Using a testing facility that emulates a pipeline-riser or a gas-lifted production well in a scaled-down manner, this paper experimentally studies the correlations of key operational parameters with severe slugging flows. These correlations are captured by a stable surface obtained in the parameter space, which is a natural extension of the bifurcation plot. The maximal production opportunity without compromising stability is also studied. Earlier studies have shown that the capability, performance, and efficiency of anti-slug control can be dramatically improved if these stable surfaces are determined experimentally beforehand. The paper concludes that obtaining the stable surface on the newly developed map can significantly improve the production rate in a control scheme. Although the production rate can be further improved by moving the stable surface using advanced control strategies, constant inputs can in some cases be preferable because they are easier to implement.
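The notion of a stable surface in an operating-parameter space can be sketched with a toy model. Below, a linear oscillator with parameter-dependent damping stands in for the multi-phase flow dynamics (the real study scans operational parameters such as choke opening and gas injection rate on a rig); classifying each grid point as stable or unstable traces the toy analogue of the stable surface.

```python
# Toy stability scan: x'' + c x' + 4 x = 0, where the damping c depends on
# two operating parameters. c > 0 decays (stable), c < 0 grows (unstable).
def is_stable(choke, gas_rate, steps=2000, dt=0.01):
    x, v = 1.0, 0.0
    damping = choke - gas_rate          # toy: damping sign sets stability
    e0 = v * v + 4.0 * x * x            # initial "energy"
    for _ in range(steps):
        a = -4.0 * x - damping * v
        x, v = x + dt * v, v + dt * a   # explicit Euler integration
    return v * v + 4.0 * x * x < e0     # stable if energy has decayed

# Scan a coarse grid; the True/False boundary is the toy "stable surface".
grid = [[is_stable(c / 10, g / 10) for g in range(10)] for c in range(10)]
```

In the experimental setting, each grid point would be an operating condition tested on the rig, and the boundary of the stable region is what anti-slug control seeks to push toward higher production rates.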
74.
75.
In photorealistic image synthesis the radiative transfer equation is often not solved by simulating every wavelength of light, but instead by computing tristimulus transport, for instance using sRGB primaries as a basis. This choice is convenient because input texture data is usually stored in RGB colour spaces. However, there are problems with this approach which are often overlooked or ignored. By comparing to spectral reference renderings, we show how rendering in tristimulus colour spaces introduces colour shifts in indirect light, violations of energy conservation, and unexpected behaviour in participating media. Furthermore, we introduce a fast method to compute spectra from almost any given XYZ input colour. It creates spectra that match the input colour precisely. Additionally, as in natural reflectance spectra, their energy is smoothly distributed over wide wavelength bands. This method is useful both for upsampling RGB input data when spectral transport is used and as an intermediate step for corrected tristimulus-based transport. Finally, we show how energy conservation can be enforced in RGB by mapping colours to valid reflectances.
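The final point, enforcing energy conservation by mapping RGB colours to valid reflectances, can be sketched simply: a reflectance must not exceed 1 in any channel, and uniformly scaling the triple preserves hue while guaranteeing no channel gains energy. This is the simplest conservative mapping, not the paper's actual method.

```python
# Map an RGB albedo to a physically valid reflectance (all channels <= 1).
# Scaling the whole triple preserves the ratio between channels (hue).
def to_valid_reflectance(rgb):
    m = max(rgb)
    if m <= 1.0:
        return tuple(rgb)               # already a valid reflectance
    return tuple(c / m for c in rgb)    # uniform scale: max channel -> 1

print(to_valid_reflectance((1.2, 0.6, 0.3)))  # → (1.0, 0.5, 0.25)
```

A renderer applying this clamp to all albedo textures guarantees that each bounce reflects no more energy than it receives, at the cost of desaturating over-bright inputs.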
76.
Informed by research over the past two years and by the work of colleagues at peer institutions, the IFI Irish Film Archive developed a six-year Digital Preservation and Access Strategy, launched in 2014. Fundamental to this strategy were the design and installation of digital archive tools for long-term preservation and the redesign of workflow practices to facilitate the accession and management of high-resolution digital film and broadcast assets and their associated metadata. This article outlines the steps taken and the standards applied in developing a future-proof digital audiovisual archive.
77.
This article reviews the extensive literature emerging from studies concerned with skill acquisition and the development of knowledge representation in programming. In particular, it focuses upon theories of program comprehension that suggest programming knowledge can be described in terms of stereotypical knowledge structures that can in some way capture programming expertise independently of the programming language used and in isolation from a programmer's specific training experience. An attempt is made to demonstrate why existing views are inappropriate. On the one hand, programs are represented in terms of a variety of formal notations ranging from the quasi-mathematical to the near textual. It is argued that different languages may lead to different forms of knowledge representation, perhaps emphasizing certain structures at the expense of others or facilitating particular strategies. On the other hand, programmers are typically taught problem-solving techniques that suggest a strict approach to problem decomposition. Hence, it seems likely that another factor that may mediate the development of knowledge representation, and that has not received significant attention elsewhere, is related to the training experience that programmers typically encounter. In this article, recent empirical studies that have addressed these issues are reviewed, and the implications of these studies for theories of skill acquisition and for knowledge representation are discussed. In conclusion, a more extensive account of knowledge representation in programming is presented that emphasizes training effects and the role played by specific language features in the development of knowledge representation within the programming domain.
78.
Distributed video coding (DVC) constitutes an original coding framework designed to meet the stringent requirements imposed by uplink-oriented and low-power mobile video applications. The quality of the side information available to the decoder and the efficiency of the employed channel codes are the primary factors determining the success of a DVC system. This contribution introduces two novel techniques for probabilistic motion compensation that generate side information at the Wyner-Ziv decoder. The employed DVC scheme uses a base layer serving as a hash to facilitate overlapped block motion estimation at the decoder side. On top of the base layer, a supplementary Wyner-Ziv layer is coded in the DCT domain. Both proposed probabilistic motion compensation techniques are driven by the actual correlation channel statistics and reuse information contained in the hash. Experimental results show significant rate savings from the novel side-information generation methods compared with previous techniques. Moreover, the presented DVC architecture, featuring the proposed side-information generation techniques, delivers state-of-the-art compression performance.
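The decoder-side motion estimation underlying side-information generation can be illustrated with a bare-bones full-search SAD block matcher, a simplification of the paper's hash-driven overlapped block motion estimation: for a block of the hash/base layer, it finds the best-matching block in a reference frame. The frames here are toy integer arrays, not real video.

```python
# Sum of absolute differences between two flattened blocks.
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Extract a size x size block from a 2-D frame at row r, column c.
def block(frame, r, c, size):
    return [frame[r + i][c + j] for i in range(size) for j in range(size)]

# Exhaustive search over all candidate positions in the reference frame.
def best_match(ref, target_block, size):
    h, w = len(ref), len(ref[0])
    best = None
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            d = sad(block(ref, r, c, size), target_block)
            if best is None or d < best[0]:
                best = (d, r, c)
    return best[1], best[2]

ref = [[r * 8 + c for c in range(8)] for r in range(8)]
tgt = block(ref, 2, 3, 4)           # a 4x4 block taken from position (2, 3)
print(best_match(ref, tgt, 4))      # → (2, 3)
```

A real Wyner-Ziv decoder would weight such matches probabilistically using the correlation channel statistics, rather than committing to the single best motion vector.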
79.
80.
This essay begins with a discussion of four relatively recent works which are representative of major themes and preoccupations in Artificial Life Art: ‘Propagaciones’ by Leo Nuñez; ‘Sniff’ by Karolina Sobecka and Jim George; ‘Universal Whistling Machine’ by Marc Boehlen; and ‘Performative Ecologies’ by Ruari Glynn. The essay attempts to contextualise these works by providing an overview of the history and forms of Artificial Life Art as it has developed over two decades, along with some background on the ideas of the Artificial Life movement of the late 1980s and 1990s. A more extensive study of the theoretical history of Artificial Life can be found in my paper ‘Artificial Life Art—A Primer’, in the Proceedings of DAC09 and also at http://www.ace.uci.edu/Penny; excerpts from that essay are included here.