Full text (subscription): 241 articles
Free full text: 7 articles
Subject: Industrial technology, 248 articles
By year: 2023 (2), 2021 (2), 2019 (4), 2018 (9), 2017 (7), 2015 (3), 2014 (6), 2013 (16), 2012 (9), 2011 (20), 2010 (5), 2009 (8), 2008 (18), 2007 (15), 2006 (3), 2005 (6), 2004 (4), 2003 (4), 2002 (6), 2001 (6), 2000 (5), 1999 (6), 1998 (6), 1997 (7), 1996 (10), 1995 (4), 1994 (6), 1993 (3), 1992 (4), 1990 (2), 1989 (3), 1988 (3), 1987 (1), 1986 (1), 1984 (3), 1983 (2), 1982 (2), 1981 (1), 1978 (1), 1976 (7), 1975 (4), 1974 (2), 1972 (1), 1970 (1), 1968 (1), 1967 (2), 1966 (1), 1964 (1), 1963 (1), 1957 (1)
248 query results in total; search took 286 ms.
1.
To satisfy public demands for environmental values, forest companies face the prospect of a reduced wood supply and increased costs. Some Canadian provincial governments have proposed intensifying silviculture in special zones dedicated to timber production as a means of pushing out the forest possibility frontiers. In this paper, we compare the traditional two-zone land-allocation framework, which includes ecological reserves and integrated forest-management zones, with the triad, a three-zone scheme that adds a zone dedicated to intensive timber production. We compare the solutions of the mixed-integer linear programs formulated under both land-allocation frameworks. Through sensitivity analysis, we explore the conditions under which the triad regime can offset the impact of increased environmental demands on timber production. We show that under realistic conditions characteristic of coastal British Columbia, higher environmental demands may be satisfied under the triad regime without increasing the financial burden on the industry or reducing its wood supply. This occurs, however, only if regulatory constraints in the timber-production zone are flexible.
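To make the two-zone versus triad comparison concrete, here is a deliberately simplified toy allocation model. The zone areas a_R, a_I, a_T, the net returns (y_z − c_z), and the fractions α, β are illustrative assumptions introduced for this sketch only; the paper's actual models are mixed-integer and far more detailed.

```latex
% Toy "triad" allocation on a land base of area A: a_R (reserve),
% a_I (integrated management), a_T (intensive timber production).
\begin{align*}
  \max_{a_R,\, a_I,\, a_T \,\ge\, 0}\quad & (y_I - c_I)\, a_I + (y_T - c_T)\, a_T \\
  \text{subject to}\quad & a_R + a_I + a_T = A  && \text{(land base)} \\
                         & a_R \ge \alpha A     && \text{(environmental demand)} \\
                         & a_T \le \beta A      && \text{(regulatory cap on the intensive zone)}
\end{align*}
% Dropping a_T (and its constraint) recovers the corresponding two-zone model.
```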
2.
The vicissitudes of the Israeli-Palestinian peace process since 1967 are analyzed using attitudes and related concepts where relevant. The 1967 war returned the two peoples' zero-sum conflict around national identity to its origin as a conflict within the land both peoples claim. Gradually, new attitudes evolved regarding the necessity and possibility of negotiations toward a two-state solution based on mutual recognition, which became the building blocks of the 1993 Oslo agreement. Lacking a commitment to a final outcome, the Oslo-based peace process was hampered by reserve options, which increased avoidance at the expense of approach tendencies as the parties moved toward a final agreement. The resulting breakdown of the process in 2000 produced clashing narratives, reflecting different anchors for judgment and classical mirror images. Public support for violence increased, even as public opinion continued to favor a negotiated two-state solution. Reviving the peace process requires mutual reassurance about the availability of a partner for negotiating a principled peace based on a historic compromise that meets the basic needs and validates the identities of both peoples.
3.
The application of transcranial Doppler (TCD) ultrasonography to asymptomatic prosthetic heart valve patients can result in the detection of localized bursts of high-intensity signals, similar to those caused by the passage of emboli. The origin of these signals is not known. To investigate this phenomenon in a simplified, more controllable environment, a TCD machine was used to record flow downstream from mechanical prosthetic heart valves in a mock circulatory loop. The model, which uses a saline solution seeded with silk particles (< 15 micrometers) as the circulatory fluid, recreates the principal hydrodynamic characteristics of the left heart and systemic circulation. Reproducibility of the system was established through repeated testing of a Monostrut valve. Three different mechanical valve types (Monostrut, Medtronic Hall, St. Jude Medical) were tested over a range of simulated cardiac outputs, and the effect of valve size was investigated with four Omniscience tilting-disc valves (21, 23, 25 and 29 mm). The average energy of the reflected Doppler signal was used to quantify the amount of high-intensity Doppler signal, QTCD. TCD signals recorded in vitro were visually and aurally similar to those found in prosthetic heart valve patients. All valve types generated exponentially more QTCD with increasing simulated cardiac output. Differences amongst valve types were significant only at higher flow outputs, with the Monostrut valve producing the greatest QTCD. Larger valves consistently generated greater QTCD than smaller valves. In conclusion, TCD signals found in prosthetic heart valve patients can be reproduced, at least qualitatively, using a mock circulatory loop that does not incorporate the formed elements of blood. (ABSTRACT TRUNCATED AT 250 WORDS)
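The QTCD measure described above is an average reflected-signal energy. As a generic illustration only, a framed energy average over a sampled Doppler trace might be computed as below; the study's actual windowing, filtering, and calibration are not specified here, and the function and parameter names are invented for this sketch.

```python
import numpy as np

def average_signal_energy(signal, frame_len=256):
    """Mean per-frame energy of a sampled Doppler trace (illustrative only)."""
    signal = np.asarray(signal, dtype=float)
    n_frames = len(signal) // frame_len
    # split the trace into non-overlapping frames and sum squared samples
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    frame_energy = np.sum(frames ** 2, axis=1)
    return frame_energy.mean()
```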
4.
Optimal implementations of UPGMA and other common clustering algorithms   (Total citations: 2; self-citations: 0; citations by others: 2)
In this work we consider hierarchical clustering algorithms, such as UPGMA, which follow the closest-pair joining scheme. We survey optimal O(n²)-time implementations of such algorithms which use a 'locally closest' joining scheme, and specify conditions under which this relaxed joining scheme is equivalent to the original one (i.e. 'globally closest').
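For reference, the textbook globally-closest-pair UPGMA scheme that such optimal implementations improve upon can be sketched as follows. This is the plain O(n³) version with average-linkage updates, not the O(n²) locally-closest implementations the paper surveys; the function name and toy matrix are illustrative.

```python
import numpy as np

def upgma(dist):
    """Naive UPGMA: repeatedly merge the globally closest pair of clusters.

    dist : (n, n) symmetric matrix of pairwise distances.
    Returns a list of (members_a, members_b, height) merge records.
    """
    n = dist.shape[0]
    d = dist.astype(float).copy()
    members = {i: [i] for i in range(n)}      # cluster id -> original leaves
    active = list(range(n))
    merges = []
    while len(active) > 1:
        # find the globally closest pair among the active clusters
        i, j = min(((a, b) for a in active for b in active if a < b),
                   key=lambda p: d[p[0], p[1]])
        merges.append((members[i], members[j], d[i, j] / 2.0))
        # average-linkage update: distance from the merged cluster to k is
        # the size-weighted mean of the two old distances
        si, sj = len(members[i]), len(members[j])
        for k in active:
            if k not in (i, j):
                d[i, k] = d[k, i] = (si * d[i, k] + sj * d[j, k]) / (si + sj)
        members[i] = members[i] + members[j]
        active.remove(j)
    return merges

# toy usage on four leaves
if __name__ == "__main__":
    D = np.array([[0., 2., 6., 10.],
                  [2., 0., 6., 10.],
                  [6., 6., 0., 10.],
                  [10., 10., 10., 0.]])
    print(upgma(D))
```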
5.
Three experiments were conducted to investigate whether response processes can start before memory scanning has finished when both are required in the same task. In Experiment 1 the color of a stimulus letter determined which hand might respond, and the letter's memory set membership determined whether that response should be made or withheld. Electrophysiological data suggested that lateralized response preparation was not initiated until memory scanning finished. Experiment 2 replicated these results with a consistent stimulus-response mapping to make the scanning process easier. Experiment 3 tested for earlier response priming with a probe reaction time paradigm, and the results suggested that color information can be used to activate a response before memory scanning is finished. The results of Experiments 1-3 suggest that interference between memory scanning and response preparation precludes the concurrent operation of these processes.
6.
Escherichia coli DNA polymerase III holoenzyme contains 10 different subunits which assort into three functional components: a core catalytic unit containing DNA polymerase activity, the beta sliding clamp that encircles DNA for processive replication, and a multisubunit clamp loader apparatus called gamma complex that uses ATP to assemble the beta clamp onto DNA. We examine here the function of the psi subunit of the gamma complex clamp loader. Omission of psi from the holoenzyme prevents contact with single-stranded DNA-binding protein (SSB) and lowers the efficiency of clamp loading and chain elongation under conditions of elevated salt. We also show that the product of a classic point mutant of SSB, SSB-113, lacks strong affinity for psi and is defective in promoting clamp loading and processive replication at elevated ionic strength. SSB-113 carries a single amino acid replacement at the penultimate residue of the C-terminus, indicating the C-terminus as a site of interaction with psi. Indeed, a peptide of the 15 C-terminal residues of SSB is sufficient to bind to psi. These results establish a role for the psi subunit in contacting SSB, thus enhancing the clamp loading and processivity of synthesis of the holoenzyme, presumably by helping to localize the holoenzyme to sites of SSB-coated ssDNA.
7.
The scanning electrochemical microscope (SECM) is one of the scanning probe techniques developed following the introduction of the scanning tunneling microscope. The approaches that have been used to modify surfaces with lateral resolution using the SECM are presented and discussed. These approaches have made it possible to drive a variety of microelectrochemical reactions on surfaces, as well as to study the mechanisms of these processes, owing to the unique advantages that the SECM offers.
8.
We present a novel algorithm for the detection of certain types of unusual events. The algorithm is based on multiple local monitors which collect low-level statistics. Each local monitor produces an alert if its current measurement is unusual, and these alerts are integrated into a final decision regarding the existence of an unusual event. Our algorithm satisfies a set of requirements that are critical for the successful deployment of any large-scale surveillance system. In particular, it requires minimal setup (taking only a few minutes) and is fully automatic afterwards. Since it is not based on objects' tracks, it is robust and works well in crowded scenes where tracking-based algorithms are likely to fail. The algorithm is effective as soon as sufficient low-level observations representing the routine activity have been collected, which usually happens after a few minutes. Our algorithm runs in real time. It was tested on a variety of real-life crowded scenes. Ground truth was extracted for these scenes, with respect to which detection and false-alarm rates are reported.
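As a rough sketch of the alert-integration idea described above (not the authors' actual statistics or decision rule; the class names, the quantile threshold, and the vote count are assumptions of this sketch), each local monitor can flag values that fall in the tail of what it has seen so far, and a frame is declared unusual when enough monitors fire at once.

```python
import numpy as np

class LocalMonitor:
    """Tracks one low-level statistic (e.g. motion energy in one image block)
    and flags the current value if it is unusually high for this monitor."""

    def __init__(self, warmup=300, quantile=0.99):
        self.history = []
        self.warmup = warmup        # observations of routine activity needed
        self.quantile = quantile    # tail threshold for "unusual"

    def update(self, value):
        alert = False
        if len(self.history) >= self.warmup:
            threshold = np.quantile(self.history, self.quantile)
            alert = value > threshold
        self.history.append(value)
        return alert

def frame_is_unusual(monitors, measurements, min_alerts=5):
    """Integrate per-monitor alerts: declare an unusual event when enough
    monitors fire simultaneously."""
    alerts = sum(m.update(v) for m, v in zip(monitors, measurements))
    return alerts >= min_alerts
```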
9.
The capacity defines the ultimate fidelity limits of information transmission by any system. We derive the capacity of parallel Poisson process channels to judge the relative effectiveness of neural population structures. Because the Poisson process is equivalent to a Bernoulli process having small event probabilities, we infer the capacity of multi-channel Poisson models from their Bernoulli surrogates. For neural populations wherein each neuron has individual innervation, inter-neuron dependencies increase capacity, the opposite behavior of populations that share a single input. We use Shannon's rate-distortion theory to show that for Gaussian stimuli, the mean-squared error of the decoded stimulus decreases exponentially in both the population size and the maximal discharge rate. Detailed analysis shows that population coding is essential for accurate stimulus reconstruction. By modeling multi-neuron recordings as the summed response of a neural population, we show that the resulting capacity is much less than the population's, reducing it to a level that can be less than that provided by two separate neural responses. This result suggests that attempting neural control without spike sorting greatly reduces the achievable fidelity. In contrast, single-electrode neural stimulation does not incur any capacity deficit in comparison to stimulating individual neurons.
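Two standard textbook relations give context for the Bernoulli surrogate and the exponential error decay mentioned above; they are background only, not the paper's derivation, and the symbols λ, Δ, σ², R are introduced solely for this sketch.

```latex
% Bernoulli surrogate: a Poisson process of rate \lambda, observed in small
% time bins of width \Delta, is approximately a Bernoulli process with
\[
  p \approx \lambda \Delta , \qquad \lambda \Delta \ll 1 .
\]
% Gaussian rate--distortion: for a memoryless Gaussian source of variance
% \sigma^2 under squared-error distortion,
\[
  D(R) = \sigma^2 \, 2^{-2R} ,
\]
% so the mean-squared reconstruction error decays exponentially in the
% available rate R.
```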
10.
We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of which are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and under unconstrained illumination directions. The correspondence between such images is hard to compute, and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of a fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, with the introduction of the multiview setup, self-occlusions and regions close to the occluding boundaries are handled better, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.
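For comparison with the fixed-viewpoint special case mentioned above, the classical Lambertian photometric-stereo baseline can be written in a few lines. This is the textbook method, not the paper's multiview algorithm; the array shapes and function name are assumptions of this sketch.

```python
import numpy as np

def photometric_stereo(I, L):
    """Classical fixed-viewpoint Lambertian photometric stereo.

    I : (k, h, w) stack of k >= 3 grayscale images of a static scene.
    L : (k, 3) matrix of known, linearly independent light directions.
    Returns per-pixel albedo (h, w) and unit surface normals (h, w, 3).
    """
    k, h, w = I.shape
    b = I.reshape(k, -1)                        # one column per pixel
    g, *_ = np.linalg.lstsq(L, b, rcond=None)   # solve L g = I per pixel
    albedo = np.linalg.norm(g, axis=0)          # |g| is the albedo
    normals = np.where(albedo > 1e-8, g / np.maximum(albedo, 1e-8), 0.0)
    return albedo.reshape(h, w), normals.T.reshape(h, w, 3)
```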