Paid full text: 663 articles; free: 42; free (domestic): 5. Subject category: Industrial Technology, 710 articles.
Articles by publication year: 2024: 1; 2023: 21; 2022: 34; 2021: 53; 2020: 28; 2019: 45; 2018: 45; 2017: 36; 2016: 35; 2015: 24; 2014: 30; 2013: 37; 2012: 26; 2011: 43; 2010: 31; 2009: 28; 2008: 16; 2007: 24; 2006: 16; 2005: 7; 2004: 9; 2003: 5; 2002: 9; 2001: 8; 2000: 7; 1999: 4; 1998: 6; 1997: 11; 1996: 9; 1995: 5; 1994: 11; 1993: 8; 1992: 3; 1991: 1; 1990: 2; 1987: 2; 1986: 4; 1985: 3; 1984: 3; 1983: 4; 1982: 4; 1981: 3; 1980: 1; 1977: 2; 1976: 2; 1975: 2; 1974: 1; 1972: 1.
710 results found in total; search took 15 ms.
1.
Cost-effectiveness ratios usually appear as point estimates without confidence intervals, since the numerator and denominator are both stochastic and one cannot estimate the variance of the estimator exactly. The recent literature, however, stresses the importance of presenting confidence intervals for cost-effectiveness ratios in the analysis of health care programmes. This paper compares the use of several methods to obtain confidence intervals for the cost-effectiveness of a randomized intervention to increase the use of Medicaid's Early and Periodic Screening, Diagnosis and Treatment (EPSDT) programme. Comparisons of the intervals show that methods that account for skewness in the distribution of the ratio estimator may be substantially preferable in practice to methods that assume the cost-effectiveness ratio estimator is normally distributed. We show that non-parametric bootstrap methods that are mathematically less complex but computationally more rigorous result in confidence intervals that are similar to the intervals from a parametric method that adjusts for skewness in the distribution of the ratio. The analyses also show that the modest sample sizes needed to detect statistically significant effects in a randomized trial may result in confidence intervals for estimates of cost-effectiveness that are much wider than the boundaries obtained from deterministic sensitivity analyses.
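As a rough illustration (not taken from the paper) of the non-parametric bootstrap approach the abstract compares: the sketch below resamples per-patient costs and effects within each trial arm and takes percentile limits of the resampled incremental cost-effectiveness ratios, so the interval inherits any skewness in the ratio's distribution. The data layout and function name are hypothetical.

```python
import numpy as np

def bootstrap_icer_ci(cost_t, eff_t, cost_c, eff_c, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for an incremental cost-effectiveness ratio.

    cost_t / eff_t: per-patient costs and effects in the intervention arm;
    cost_c / eff_c: the same for the control arm (all 1-D arrays).
    """
    rng = np.random.default_rng(seed)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        # Resample each arm with replacement, preserving the arm sizes.
        ti = rng.integers(0, len(cost_t), len(cost_t))
        ci = rng.integers(0, len(cost_c), len(cost_c))
        d_cost = cost_t[ti].mean() - cost_c[ci].mean()
        d_eff = eff_t[ti].mean() - eff_c[ci].mean()
        ratios[b] = d_cost / d_eff   # unstable if d_eff is near zero
    lo, hi = np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Unlike an interval built from a normality assumption, a percentile interval of this kind can be asymmetric around the point estimate, which is the property the abstract highlights.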
2.
A generalized mapping strategy that uses a combination of graph theory, mathematical programming, and heuristics is proposed. The authors use knowledge of the given algorithm and the architecture to guide the mapping. The approach begins with a graphical representation of the parallel algorithm (problem graph) and the parallel computer (host graph). Using these representations, the authors generate a new graphical representation (extended host graph) onto which the problem graph is mapped. An accurate characterization of the communication overhead is used in the objective functions to evaluate the optimality of the mapping. An efficient mapping scheme is developed that uses two levels of optimization procedures. The objective functions include minimizing the communication overhead and minimizing the total execution time, which includes both computation and communication times. The mapping scheme is tested by simulation and further confirmed by mapping a real-world application onto actual distributed environments.
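As a hedged sketch of the kind of objective function described here (not the authors' formulation): communication overhead can be charged per problem-graph edge as the communicated volume times the host-graph distance between the processors to which the two tasks are assigned, and added to the computation time to estimate total execution time. The data structures and names below are illustrative only.

```python
def mapping_cost(problem_edges, comp_time, assign, host_dist):
    """Toy objective for a candidate mapping of tasks onto processors.

    problem_edges: {(task_u, task_v): communication volume}
    comp_time:     {task: computation time on its assigned processor}
    assign:        {task: processor}
    host_dist:     {(proc_a, proc_b): hop distance in the host graph}
    Returns (communication overhead, crude total execution time estimate).
    """
    comm = 0.0
    for (u, v), volume in problem_edges.items():
        pu, pv = assign[u], assign[v]
        if pu != pv:  # co-located tasks communicate for free
            d = host_dist[(pu, pv)] if (pu, pv) in host_dist else host_dist[(pv, pu)]
            comm += volume * d
    # Crude serial estimate: all computation plus all communication.
    total = sum(comp_time.values()) + comm
    return comm, total
```

A search procedure (mathematical programming or heuristic) would then vary `assign` to minimize one of these objectives.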
3.
Twenty-six clinician trainees' recollections of their experiences in a diagnostic preschool program were analyzed in terms of the strengths and weaknesses of the program.
4.
Multimedia Tools and Applications - Automated bank cheque verification using image processing is an attempt to complement the present cheque truncation system, as well as to provide an alternate...
5.
Dimensional scaling approaches are widely used to develop multi-body human models in injury biomechanics research. Given the limited experimental data for any particular anthropometry, a validated model can be scaled to different sizes to reflect the biological variance of the population and used to characterize the human response. This paper compares two scaling approaches at the whole-body level: one is the conventional mass-based scaling approach, which assumes geometric similarity; the other is the structure-based approach, which assumes additional structural similarity by using idealized mechanical models to account for the specific anatomy and expected loading conditions. Given the use of exterior body dimensions and a uniform Young's modulus, the two approaches showed close values of the scaling factors for most body regions, with average differences of 1.5 % in the force scaling factors and 13.5 % in the moment scaling factors. One exception was the thoracic model, with a 19.3 % difference in the deflection scaling factor. Two 6-year-old child models were generated from a baseline adult model as an application example and were evaluated using recent biomechanical data from cadaveric pediatric experiments. The scaled models predicted similar impact responses of the thorax and lower extremity, which fell within the experimental corridors, and suggested further consideration of age-specific structural changes of the pelvis. Towards improved scaling methods to develop biofidelic human models, this comparative analysis suggests further investigation of interior anatomical geometry and detailed biological material properties associated with the demographic range of the population.
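For context (not taken from the paper), the conventional mass-based approach rests on geometric similarity: assuming a uniform density and a uniform Young's modulus, all length ratios equal a single factor derived from the mass ratio, and the response scale factors follow by dimensional analysis, roughly as below.

```latex
% Geometric-similarity (mass-based) scaling, assuming equal density and modulus.
% \lambda_L is the characteristic length ratio (target subject / baseline model).
\lambda_L = \left(\frac{m_{\mathrm{subject}}}{m_{\mathrm{baseline}}}\right)^{1/3},
\qquad
\lambda_m = \lambda_L^{3},\quad
\lambda_F = \lambda_L^{2},\quad
\lambda_M = \lambda_L^{3},\quad
\lambda_\delta = \lambda_L,\quad
\lambda_t = \lambda_L
```

Here λ_m, λ_F, λ_M, λ_δ, and λ_t denote the mass, force, moment, deflection, and time scale factors. The structure-based approach replaces some of these relations with factors derived from idealized mechanical models of a body region (for example, beam- or shell-like idealizations), which is why the two approaches can diverge for regions such as the thorax.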
6.
The continuous improvement strategies necessary in today's climate of random change mandate that IT organizations transform the way they relate to other business units, deploy new technology, and organize and develop their people. Crafting an individualized transformation program that balances efforts in each of these areas will help IT managers add value to the bottom line while achieving real gains in customer service and productivity.  相似文献   
7.
As computational clusters increase in size, their mean time to failure decreases drastically. Typically, checkpointing is used to minimize the loss of computation. Most checkpointing techniques, however, require central storage for storing checkpoints. This results in a bottleneck and severely limits the scalability of checkpointing, while also proving to be too expensive for dedicated checkpointing networks and storage systems. We propose a scalable replication-based MPI checkpointing facility. Our reference implementation is based on LAM/MPI; however, it is directly applicable to any MPI implementation. We extend the existing state of fault-tolerant MPI with asynchronous replication, eliminating the need for central or network storage. We evaluate centralized storage, a Sun-X4500-based solution, an EMC storage area network (SAN), and the Ibrix commercial parallel file system and show that they are not scalable, particularly beyond 64 CPUs. We demonstrate the low overhead of our checkpointing and replication scheme with the NAS Parallel Benchmarks and the High-Performance LINPACK benchmark, with tests on up to 256 nodes, showing that checkpointing and replication can be achieved with a much lower overhead than that of current techniques. Finally, we show that the monetary cost of our solution is as low as 25 percent of that of a typical SAN/parallel-file-system-equipped storage system.
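The following is a toy sketch, not the paper's LAM/MPI facility, of how replication-based checkpointing can avoid central storage: each rank writes its checkpoint to node-local disk and asynchronously forwards a copy to its ring neighbour, which stores the replica. It assumes the mpi4py bindings and a picklable application state; the path and function name are hypothetical.

```python
from mpi4py import MPI
import pickle

def checkpoint_with_replication(state, step, path="/local/ckpt"):
    """Write a local checkpoint, then replicate it to the next rank in a ring.

    Each rank keeps its own checkpoint on node-local storage plus a replica of
    its left neighbour's checkpoint, so no central checkpoint server is needed.
    """
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    blob = pickle.dumps(state)

    # 1. Local checkpoint on node-local disk.
    with open(f"{path}/rank{rank}_step{step}.ckpt", "wb") as f:
        f.write(blob)

    # 2. Asynchronous replication to the ring neighbour (rank + 1).
    right, left = (rank + 1) % size, (rank - 1) % size
    send_req = comm.isend(blob, dest=right, tag=step)
    replica = comm.recv(source=left, tag=step)
    with open(f"{path}/replica_of_rank{left}_step{step}.ckpt", "wb") as f:
        f.write(replica)
    send_req.wait()  # ensure our copy has been handed off before returning
```

After a node failure, the lost rank's state can be restored from the replica held by its right-hand neighbour rather than from a central server.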
8.
9.
In a rapidly changing IT environment, IT professionals need to keep abreast of technological knowledge. We examined how well this is achieved by developing a motivational model of "technological knowledge renewal effectiveness." We hypothesized that (1) renewal effectiveness was influenced by the IT professional's career orientation, perceived IT dynamism, tolerance of ambiguity, and delegation; and (2) that it positively affected both intrinsic and extrinsic job satisfaction. Survey data from 126 IT professionals were used to test the hypotheses. The results generally supported the research model. We discuss the implications of these results for both research and practice.
10.
Epilepsy is one of the most common neurological disorders, characterized by transient and unexpected electrical disturbance of the brain. The electroencephalogram (EEG) is an invaluable measurement for assessing brain activity, containing information relating to the different physiological states of the brain, and it is a very effective tool for understanding the complex dynamical behavior of the brain. This paper presents the application of empirical mode decomposition (EMD) for the analysis of EEG signals. The EMD decomposes an EEG signal into a finite set of bandlimited signals termed intrinsic mode functions (IMFs). The Hilbert transform of the IMFs provides an analytic-signal representation of each IMF. The area measured from the trace of the analytic IMFs, which have a circular form in the complex plane, has been used as a feature to discriminate normal EEG signals from epileptic seizure EEG signals. It is shown that the area measure of the IMFs gives good discrimination performance. Simulation results illustrate the effectiveness of the proposed method.
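A minimal sketch of the feature extraction described above, assuming the PyEMD package (PyPI name EMD-signal) and SciPy; the enclosed area of each analytic IMF's complex-plane trajectory is approximated here with the shoelace formula, which may differ from the authors' exact area measure.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # pip install EMD-signal

def imf_trace_areas(eeg, max_imfs=5):
    """Area enclosed by each IMF's analytic-signal trajectory in the complex plane.

    The EEG segment is decomposed into IMFs with EMD; the Hilbert transform of
    each IMF yields an analytic signal whose complex-plane trace is roughly
    circular, and the shoelace formula approximates the enclosed area, which
    serves as a per-IMF feature for seizure vs. normal classification.
    """
    imfs = EMD().emd(np.asarray(eeg, dtype=float))
    areas = []
    for imf in imfs[:max_imfs]:
        z = hilbert(imf)          # analytic signal: imf + j * H{imf}
        x, y = z.real, z.imag
        # Shoelace formula over the (approximately closed) trajectory.
        area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        areas.append(area)
    return np.array(areas)
```

The resulting per-IMF areas could then be fed to any standard classifier to separate seizure segments from normal ones.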