Full-text access type
Paid full text | 1557 articles |
Free | 34 articles |
Subject classification
Industrial technology | 1591 articles |
Publication year
2022 | 8 articles |
2021 | 9 articles |
2020 | 9 articles |
2019 | 11 articles |
2018 | 17 articles |
2017 | 12 articles |
2016 | 19 articles |
2015 | 26 articles |
2014 | 29 articles |
2013 | 91 articles |
2012 | 46 articles |
2011 | 71 articles |
2010 | 50 articles |
2009 | 75 articles |
2008 | 57 articles |
2007 | 64 articles |
2006 | 72 articles |
2005 | 67 articles |
2004 | 65 articles |
2003 | 48 articles |
2002 | 54 articles |
2001 | 46 articles |
2000 | 34 articles |
1999 | 33 articles |
1998 | 45 articles |
1997 | 43 articles |
1996 | 42 articles |
1995 | 28 articles |
1994 | 30 articles |
1993 | 29 articles |
1992 | 31 articles |
1991 | 12 articles |
1990 | 28 articles |
1989 | 12 articles |
1988 | 10 articles |
1987 | 15 articles |
1986 | 18 articles |
1985 | 24 articles |
1984 | 25 articles |
1983 | 22 articles |
1982 | 16 articles |
1981 | 13 articles |
1980 | 12 articles |
1979 | 15 articles |
1978 | 11 articles |
1977 | 13 articles |
1976 | 23 articles |
1975 | 10 articles |
1974 | 10 articles |
1973 | 11 articles |
1591 results found (search time: 15 ms)
31.
Testing Forecast Accuracy of Foreign Exchange Rates: Predictions from Feed Forward and Various Recurrent Neural Network Architectures   Total citations: 3 (self-citations: 1, citations by others: 2)
In this research, we work with data on futures contracts on foreign exchange rates for the British pound (BP), Canadian dollar (CD), and Japanese yen (JY), traded against the US dollar at the Chicago Mercantile Exchange (CME). We model relationships between exchange rates in these currencies using linear models, feed-forward artificial neural networks (ANN), and three versions of recurrent neural networks (RNN1, RNN2, and RNN3) for predicting exchange rates against the US dollar. Forecast evaluations based on the AGS test, a test of forecast equivalence between any two competing models among all models employed for each series, show that the ANN and the three versions of the RNN models offer superior forecasts for predicting BP, CD, and JY exchange rates, although forecast evaluations based on the MGN test stand in sharp contrast. On the other hand, forecast evaluation based on the SIGN test shows that the ANN and all versions of the RNN models offer superior forecasts for BP and CD, but not JY, exchange rates. Forecast evaluation of all models for each series based on summary measures shows that the RNN3 model appears to offer the most accurate predictions of BP, and RNN1 of JY, exchange rates. However, none of the RNN models appears to be statistically superior to the benchmark (i.e., the linear model) for predicting CD exchange rates.
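Of the three forecast-comparison tests named above, the SIGN test is the simplest to illustrate: under the null of equal forecast accuracy, the number of periods in which one model's absolute error is smaller follows a binomial distribution with p = 0.5. The sketch below is an illustrative implementation of that idea, not the authors' code (the AGS and MGN statistics are different tests):

```python
import numpy as np
from math import comb

def sign_test(errors_a, errors_b):
    """Two-sided sign test comparing two forecast-error series.

    Counts the periods where model A's absolute error is smaller than
    model B's and returns (wins_for_A, exact two-sided p-value) under
    the null hypothesis of equal forecast accuracy.
    """
    d = np.abs(errors_a) - np.abs(errors_b)
    d = d[d != 0]                       # ties carry no sign information
    n = len(d)
    wins = int(np.sum(d < 0))           # periods where A beats B
    # exact binomial tail with p = 0.5, doubled for a two-sided test
    tail = sum(comb(n, k) for k in range(min(wins, n - wins) + 1)) / 2 ** n
    return wins, min(1.0, 2 * tail)
```

For example, if one model's absolute errors are smaller in nearly every period, the test rejects equal accuracy; with roughly balanced wins it does not.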
32.
33.
34.
Cigarette smoking and the colorectal adenoma-carcinoma sequence: a hypothesis to explain the paradox
As recognized precursor lesions to colorectal cancer, colorectal adenomatous polyps have been studied to enhance knowledge of colorectal cancer etiology. Although most of the known risk factors for colorectal cancer are also associated with the occurrence of colorectal adenomas, cigarette smoking has had a strong, consistent relationship with colorectal adenomas but is generally not associated with colorectal cancer. The explanation for this paradox is unknown. With data collected in 1986-1988 during a large case-control study based on colonoscopy results in New York City, New York, the authors investigated the possibility that the paradox may arise because subjects with colorectal adenomas were included in the control group of cancer case-control studies. The authors found a statistically significant association between heavy cigarette smoking (> or = 40 pack-years of smoking) and risk of adenoma (odds ratio (OR) = 1.61, 95% confidence interval (CI) 1.06-2.44). They saw no increased colorectal cancer risk from heavy cigarette smoking (OR = 1.02, 95% CI 0.52-1.99) using a "manufactured" control group to simulate a typical unscreened, population-based control group. When the authors compared these colorectal cancer cases with an adenoma-free control group examined by colonoscopy in a polytomous model with several case groups (newly diagnosed adenomas, carcinoma in situ, intramucosal carcinoma, and colorectal cancer), they found that the risk for 20-39 pack-years of smoking was elevated, although not statistically significant, and was similar for all four case groups.
The risk for the highest smoking category (> or = 40 pack-years) was more strongly elevated in all four case groups, although it was statistically significant for only the newly diagnosed adenoma and the carcinoma in situ cases (adenomas, OR = 1.59, 95% CI 1.05-2.42; carcinoma in situ, OR = 2.05, 95% CI 1.01-4.15; intramucosal carcinoma, OR = 1.30, 95% CI 0.61-2.77; and colorectal cancer, OR = 1.30, 95% CI 0.64-2.65). While the authors' study is weakened by the lack of statistical significance concerning risk for colorectal cancer, these data offer some support for the hypothesis that the association between cigarette smoking and risk of colorectal cancer may have been masked by inclusion in the control group of subjects with adenomas. They also suggest that the major effect of smoking on the colorectal adenoma-carcinoma sequence occurs in the earlier stages of the formation of adenoma and the development of carcinoma in situ.
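The odds ratios and 95% confidence intervals quoted above are standard two-by-two-table estimates; a minimal sketch using the Woolf (log) method, with invented cell counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and confidence interval from a 2x2 table:
        a = exposed cases,    b = unexposed cases,
        c = exposed controls, d = unexposed controls.
    The CI uses the Woolf (log) method: exp(ln OR +/- z * SE),
    where SE = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval that excludes 1.0 corresponds to a statistically significant association at the 5% level, which is the criterion the abstract applies to each case group.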
35.
Robert Withnall Jack Silver Paul G. Harris Terry G. Ireland Paul J. Marsh 《Journal of the Society for Information Display》2011,19(11):798-810
Abstract— The current status of AC powder electroluminescent (ACPEL) displays is reviewed with particular emphasis given to color and lifetime. The printing of the displays in forward and reverse architectures is also discussed, in addition to the fabrication of ACPEL displays with interdigitated electrodes, and different types of ACPEL phosphors and materials for back electrodes, transparent conducting electrodes, binders, and dielectrics are considered. Furthermore, shape conformable and highly flexible ACPEL displays are surveyed.
36.
This paper presents a parameter sensitivity study of the Nelder-Mead Simplex Method for unconstrained optimization. The Nelder-Mead Simplex Method is easy to implement in practice because it does not require gradient computation; however, it is very sensitive to the choice of initial points. Fan and Zahara conducted a sensitivity study using a select set of test cases and suggested the best values for the parameters based on the highest percentage rate of successful minimization. Begambre and Laier used a strategy based on the Nelder-Mead Simplex Method to control the Particle Swarm Optimization parameters in identifying structural damage. The main purpose of this paper is to extend their parameter sensitivity study to better understand the parameters' behavior. The comprehensive parameter sensitivity study was conducted on seven test functions (the B2, Beale, Booth, Wood, Rastrigin, Rosenbrock, and Sphere functions) to search for common patterns and relationships each parameter has in producing the optimum solution. The results show important relationships among the Nelder-Mead Simplex parameters (reflection, expansion, contraction, and simplex size) and how they impact the optimum solutions. This study matters because a better understanding of the parameters' behavior can inform current and future research using the Nelder-Mead Simplex to create algorithms that are more effective and efficient and that save computational time.
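The coefficients whose sensitivity is studied above can be exposed directly in a compact Nelder-Mead implementation. The sketch below is illustrative, using the conventional default values (alpha = reflection, gamma = expansion, rho = contraction, sigma = shrink); it is not the code used in the paper:

```python
import numpy as np

def nelder_mead(f, x0, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5,
                step=0.5, max_iter=5000, tol=1e-10):
    """Basic Nelder-Mead simplex minimizer with the four coefficients
    exposed as parameters, so a sensitivity sweep can vary them."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    # initial simplex: x0 plus one point perturbed along each axis
    simplex = [x0.copy()]
    for i in range(n):
        p = x0.copy()
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        if abs(f(simplex[-1]) - f(simplex[0])) < tol:
            break                        # simplex has collapsed
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + alpha * (centroid - worst)          # reflection
        if f(best) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(best):                               # expansion
            xe = centroid + gamma * (xr - centroid)
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                                               # contraction
            xc = centroid + rho * (worst - centroid)
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                           # shrink toward best
                simplex = [best] + [best + sigma * (p - best)
                                    for p in simplex[1:]]
    return min(simplex, key=f)

rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
```

A sensitivity sweep of the kind described in the abstract amounts to calling `nelder_mead` over a grid of (alpha, gamma, rho, sigma) values on each test function and recording the rate of successful minimization.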
37.
This paper presents results on a new hybrid optimization method which combines the best features of four traditional optimization methods together with an intelligent adjustment algorithm to speed convergence on unconstrained and constrained optimization problems. It is believed that this is the first time that such a broad array of methods has been employed to facilitate synergistic enhancement of convergence. Particle swarm optimization is based on swarm intelligence inspired by the social behavior and movement dynamics of bird flocking, fish schooling, and swarming theory. This method has been applied for structural damage identification, neural network training, and reactive power optimization. It is also believed that this is the first time an intelligent parameter adjustment algorithm has been applied to maximize the effectiveness of individual component algorithms within the hybrid method. A comprehensive sensitivity analysis of the traditional optimization methods within the hybrid group is used to demonstrate how the relationship among the design variables in a given problem can be used to adjust algorithm parameters. The new method is benchmarked using 11 classical test functions and the results show that the new method outperforms eight of the most recently published search methodologies.
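Particle swarm optimization, one component of the hybrid method described above, can be sketched in a few lines. The inertia and attraction coefficients below (w, c1, c2) are conventional defaults, not the paper's tuned values:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer. Each velocity update blends
    inertia (w), attraction to the particle's own best position (c1),
    and attraction to the swarm's global best (c2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, f(gbest)

sphere = lambda x: float(np.sum(x**2))
```

An "intelligent adjustment" layer of the kind the abstract describes would adapt w, c1, and c2 during the run; here they are held fixed for clarity.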
38.
Terry Anderson John Loftus Narad Rampersad Nicolae Santean Jeffrey Shallit 《Information and Computation》2009,207(11):1096-1118
Given a language L and a non-deterministic finite automaton M, we consider whether we can determine efficiently (in the size of M) if M accepts at least one word in L, or infinitely many words. Given that M accepts at least one word in L, we consider how long a shortest word can be. The languages L that we examine include the palindromes, the non-palindromes, the k-powers, the non-k-powers, the powers, the non-powers (also called primitive words), the words matching a general pattern, the bordered words, and the unbordered words.
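The emptiness and shortest-word questions discussed above can be answered by breadth-first search over the subset construction of the NFA; a minimal sketch (testing membership in a constrained language L, such as the palindromes, would additionally require a product construction not shown here):

```python
from collections import deque

def shortest_accepted(start, accepting, delta, alphabet):
    """BFS over subsets of NFA states to find a shortest accepted word.

    delta maps (state, symbol) -> set of successor states.
    Returns a shortest word the NFA accepts, or None if its language
    is empty. BFS by word length guarantees minimality.
    """
    init = frozenset(start)
    if init & accepting:
        return ""
    seen = {init}
    queue = deque([(init, "")])
    while queue:
        states, word = queue.popleft()
        for a in alphabet:
            nxt = frozenset(t for s in states
                              for t in delta.get((s, a), ()))
            if nxt & accepting:
                return word + a
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + a))
    return None
```

Because at most 2^n distinct state subsets exist for an n-state NFA, this search also bounds the length of a shortest accepted word by 2^n - 1.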
39.
Loren Paul Rees Jason K. Deane Terry R. Rakes Wade H. Baker 《Decision Support Systems》2011,51(3):493-505
Security countermeasures help ensure the confidentiality, availability, and integrity of information systems by preventing or mitigating asset losses from cybersecurity attacks. Due to uncertainty, the financial impact of threats attacking assets is often difficult to measure quantitatively, and thus it is difficult to prescribe which countermeasures to employ. In this research, we describe a decision support system for calculating the uncertain risk faced by an organization under cyber attack as a function of uncertain threat rates, countermeasure costs, and impacts on its assets. The system uses a genetic algorithm to search for the best combination of countermeasures, allowing the user to determine the preferred tradeoff between the cost of the portfolio and the resulting risk. Data collected from manufacturing firms provide an example of results under realistic input conditions.
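A toy version of the genetic-algorithm search over countermeasure portfolios might look like the following; the costs, risk reductions, budget penalty, and operator settings are invented for illustration and do not reflect the paper's model:

```python
import random

def ga_portfolio(costs, risk_reduction, base_risk, budget,
                 pop=40, gens=100, mut=0.05, seed=1):
    """Toy GA: each genome is a bit string selecting countermeasures.
    Fitness (lower is better) = residual risk plus a penalty for
    exceeding the budget; hypothetical objective, for illustration."""
    random.seed(seed)
    n = len(costs)

    def fitness(bits):
        cost = sum(c for c, b in zip(costs, bits) if b)
        risk = base_risk - sum(r for r, b in zip(risk_reduction, bits) if b)
        return max(risk, 0) + max(0, cost - budget) * 10

    population = [[random.randint(0, 1) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut) for bit in child]
            children.append(child)
        population = parents + children
    best = min(population, key=fitness)
    return best, fitness(best)
```

Sweeping the budget parameter and recording the resulting (cost, risk) pairs traces out the cost-versus-risk tradeoff curve the abstract describes.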
40.
Rapid scalar value classification and volume clipping for interactive 3D medical image visualization
In many clinical scenarios, medical data visualization and interaction are important to physicians for exploring inner anatomical structures and extracting meaningful diagnostic information. Real-time high-quality volume rendering, artifact-free clipping, and rapid scalar value classification are important techniques employed in this process. Unfortunately, in practice, it is still difficult to achieve an optimal balance. In this paper, we present strategies to address this issue, based on the calculation of segment-based post color attenuation and dynamic ray-plane intersection (RPI), respectively. When implemented within our visualization system, the new classification algorithm delivers real-time performance while avoiding the "color over-accumulation" artifacts suffered by the commonly used acceleration algorithms that employ pre-integrated classification. Our new strategy achieves an optimized balance between image quality and classification speed. Next, the RPI algorithm is used with an opacity adjustment technique to effectively remove the "striping" artifacts on the clipping plane caused by the nonuniform integration length. Furthermore, we present techniques for multiple transfer function (TF) based anatomical feature enhancement and a "keyhole" based endoscopic view of inner structures. Finally, the algorithms are evaluated subjectively by radiologists and compared quantitatively using image power spectrum analysis.
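The ray-plane intersection at the core of the RPI clipping step reduces to a one-line parametric solve per ray; a minimal sketch (the dynamic, artifact-suppressing version described above adds opacity adjustment on top of this):

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal,
                           eps=1e-9):
    """Intersect the ray origin + t*direction (t >= 0) with the plane
    through plane_point having normal plane_normal.

    Solving (origin + t*direction - plane_point) . normal = 0 gives
    t = ((plane_point - origin) . normal) / (direction . normal).
    Returns (t, hit_point), or None when the ray is parallel to the
    plane or the intersection lies behind the ray origin.
    """
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    p = np.asarray(plane_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    denom = d @ n
    if abs(denom) < eps:
        return None                # ray parallel to the clipping plane
    t = ((p - o) @ n) / denom
    if t < 0:
        return None                # plane behind the ray origin
    return t, o + t * d
```

In a ray caster, the returned t clamps each ray's integration interval at the clipping plane, which is what lets the renderer restart accumulation exactly at the plane instead of at a voxel boundary.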