101.
The self-developed subchannel code SUBSC, aimed at large-scale thermal-hydraulic calculations, was developed using object-oriented, modular programming techniques. A typical 1/4 PWR assembly was calculated with both SUBSC and COBRA, and the results of the two codes agree very well. To further verify SUBSC, the PSBT steady-state 5×5 rod-bundle benchmark was calculated; under all test conditions the channel-averaged void fractions computed by SUBSC agree well with the measured values, with a maximum relative deviation of only 0.7%, demonstrating the code's high accuracy. To improve computational efficiency, a restarted GMRES algorithm with incomplete-LU preconditioning was introduced to solve the mass-conservation equations; results for multi-assembly cases show that SUBSC is capable of large-scale thermal-hydraulic calculations.
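The solver strategy named in this abstract, restarted GMRES with incomplete-LU preconditioning, can be sketched in a few lines of SciPy. The tridiagonal matrix below is a generic stand-in test system, not SUBSC's actual mass-conservation matrix:

```python
# Hypothetical sketch: ILU-preconditioned restarted GMRES on a sparse test system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Diagonally dominant tridiagonal matrix standing in for the mass-conservation system
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)   # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # wrap it as a preconditioner
x, info = spla.gmres(A, b, M=M, restart=20)          # restarted GMRES, restart length 20

print(info, np.linalg.norm(A @ x - b))               # info == 0 means converged
```

With a good ILU preconditioner the Krylov iteration count stays low even as the system grows, which is the point of the combination for large multi-assembly problems.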
102.
The SARAX-FXS code, based on deterministic methods, computes assembly spectra and homogenized parameters for fast-spectrum cores. Because the heterogeneous effect of spatial self-shielding within fast-reactor assemblies cannot be neglected, a collision-probability method for one-dimensional cylindrical and slab geometries was added to the SARAX-FXS module, and homogenized assembly parameters are computed on an equivalent one-dimensional model. To preserve nuclear reaction rates through energy-group collapsing, superhomogenization (SPH) factors are introduced in the assembly calculation to correct the cross sections. The code was verified against the fast-reactor benchmark MET-1000; compared with the reference solution, the one-dimensional module of SARAX-FXS shows good accuracy, with eigenvalue deviations of 100–200 pcm. Core calculations show that introducing SPH factors improves the eigenvalue accuracy by about 300 pcm and reduces the root-mean-square error of the power distribution from about 3% to about 1%.
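The SPH correction preserves reaction rates by rescaling each region's cross section with the ratio of the heterogeneous (reference) flux to the homogenized-model flux. A toy illustration with made-up numbers follows; in a real code the homogeneous flux depends on the corrected cross sections, so the factors are found iteratively rather than in one shot:

```python
# Toy SPH illustration with hypothetical values (not MET-1000 data).
import numpy as np

sigma   = np.array([0.30, 0.45, 0.28])   # collapsed cross sections per region (1/cm)
phi_het = np.array([1.00, 0.62, 0.85])   # reference heterogeneous fluxes
phi_hom = np.array([0.95, 0.70, 0.80])   # fluxes from the homogenized calculation

mu = phi_het / phi_hom                   # SPH factors, one per region
sigma_sph = mu * sigma                   # corrected cross sections

# The corrected homogeneous reaction rates reproduce the heterogeneous reference:
print(np.allclose(sigma_sph * phi_hom, sigma * phi_het))
```

The identity `mu * sigma * phi_hom == sigma * phi_het` is exactly the reaction-rate conservation condition the abstract describes.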
103.
《Journal of Nuclear Science and Technology》2013,50(11):1125-1136
Evaluation for JENDL-3.3 has been performed by considering the accumulated feedback information and various benchmark tests of the previous library JENDL-3.2. The major problems of the JENDL-3.2 data were solved by the new library: overestimation of criticality values for thermal fission reactors was improved by the modifications of fission cross sections and fission neutron spectra for 235U; incorrect energy distributions of secondary neutrons from important heavy nuclides were replaced with statistical model calculations; the inconsistency between elemental and isotopic evaluations was removed for medium-heavy nuclides. Moreover, covariance data were provided for 20 nuclides. The reliability of JENDL-3.3 was investigated by the benchmark analyses on reactor and shielding performances. The results of the analyses indicate that JENDL-3.3 predicts various reactor and shielding characteristics better than JENDL-3.2.
104.
《Journal of Nuclear Science and Technology》2013,50(10):1237-1244
Benchmark calculations for several HTTR core states were performed with four cross-section sets which were generated from JENDL-3.3, JENDL-3.2, ENDF/B-VI.8 and JEFF-3.0 using the continuous-energy Monte Carlo code MVP. The core states were a critical approach in which an annular core was formed at room temperature, and solid cores at room temperature and at full-power operation. Study of keff discrepancies caused by differences between the nuclear data libraries, and identification of the nuclides that contribute most to those discrepancies, were carried out. The respective calculated keff values were also compared with experiments. As a result, for each of the HTTR core states, JENDL-3.3 yields a keff agreeing with the experiments within 1.5%Δk, JENDL-3.2 within 1.7%Δk, and ENDF/B-VI.8 and JEFF-3.0 within 1.8%Δk. There is little keff discrepancy between ENDF/B-VI.8 and JEFF-3.0. The keff discrepancy between JENDL-3.3 and JENDL-3.2 is caused by differences in the 235U data and shows temperature dependence. The keff discrepancy between JENDL-3.3 and ENDF/B-VI.8 or JEFF-3.0 is mainly caused by differences in the graphite data.
105.
Performance assessment and optimisation for MPC have attracted much research interest. As a typical performance assessment benchmark, the LQG benchmark is regressed from asymmetrical points, leading to unnecessary computation and unsatisfactory regression results. To tackle this problem, an equigrid LQG benchmark was proposed for two-layer MPC assessment and optimisation, and the optimal setpoint for MPC was calculated to replace the experiential one. Then the LQG benchmark for sensitivity analysis was introduced. Economic performance assessment of the control system in a delayed coking furnace shows the effectiveness of the proposed approach. © 2012 Canadian Society for Chemical Engineering
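The LQG benchmark referred to above is a trade-off curve of achievable (input variance, output variance) pairs, swept out by varying the control weight. A minimal sketch of computing one point of such a curve is shown below for a scalar plant with assumed parameters (not the paper's model): for each weight, solve the control and filter Riccati equations by fixed-point iteration, close the loop, and estimate the variances by simulation.

```python
# Hypothetical LQG benchmark sketch: scalar plant x+ = a x + b u + w, y = c x + v.
import numpy as np

def scalar_dare(a, b, q, r, iters=500):
    """Fixed-point iteration of the scalar discrete algebraic Riccati equation."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p

def lqg_point(lam, a=0.9, b=1.0, c=1.0, qw=0.1, rv=0.1, steps=50_000, seed=0):
    """Return (var(u), var(y)) for the LQG controller with control weight lam."""
    rng = np.random.default_rng(seed)
    p = scalar_dare(a, b, c * c, lam)      # control Riccati (output weight c^2)
    k = a * b * p / (lam + b * b * p)      # LQR gain
    s = scalar_dare(a, c, qw, rv)          # filter Riccati (dual problem)
    l = a * c * s / (rv + c * c * s)       # Kalman (predictor) gain
    x = xhat = 0.0
    ys, us = [], []
    for _ in range(steps):
        u = -k * xhat
        y = c * x + rng.normal(0.0, np.sqrt(rv))
        xhat = a * xhat + b * u + l * (y - c * xhat)
        x = a * x + b * u + rng.normal(0.0, np.sqrt(qw))
        ys.append(y)
        us.append(u)
    return np.var(us), np.var(ys)

# Sweeping the weight traces the benchmark curve: small lam means aggressive
# control (high input variance, low output variance), large lam the opposite.
for lam in (0.1, 1.0, 10.0):
    vu, vy = lqg_point(lam)
    print(f"lam={lam:5.1f}  var(u)={vu:.4f}  var(y)={vy:.4f}")
```

An actual controller's measured variances can then be compared against this curve to judge how far it sits from LQG-optimal performance.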
106.
《Fusion Engineering and Design》2014,89(9-10):2174-2178
3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4®, on the ITER A-lite model, was carried out; the neutron flux, nuclear heating in the blankets, and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. Good agreement between MCNP and TRIPOLI-4® is shown; the discrepancies lie mainly within the statistical error.
107.
《International Journal of Smart and Nano Materials》2013,4(3):224-235
An experimental benchmark is proposed for piezoelectric, direct-torsion actuation using mono-morph piezoceramic d15 shear patches. This is achieved by designing and assembling an adaptive plate in which two identical composite faces sandwich a core made of six connected, oppositely polarized (OP) piezoceramic d15 shear patches along the length. An electronic speckle pattern interferometry system was used to measure the static tip deflection of the adaptive sandwich composite plate, which was mounted in a cantilever configuration and actuated in torsion by voltages applied progressively to the electroded major surfaces of the piezoceramic shear core. The effective rate of twist was then post-processed and proposed as an evaluation criterion for smart composites under piezoelectric torsion actuation. For verification of the experimental results, the proposed experimental benchmark was simulated using three-dimensional piezoelectric finite elements (FE) within the ABAQUS® commercial software. The comparison of the experimental and simulation results showed reasonable agreement, but the slightly nonlinear experimental response was not captured by the linear FE analysis. The experimentally demonstrated torsion actuation mechanism, produced by OP piezoceramic d15 shear patches, can be applied actively to prevent torsion in many applications, such as wind turbines, helicopter blades, robot arms, and flexible space structures.
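The "effective rate of twist" post-processing can be sketched as follows. All geometry and deflection values below are illustrative placeholders, not the measured data: the tip twist angle is recovered from the antisymmetric out-of-plane deflections of the two free-edge corners, then divided by the cantilever length.

```python
# Hypothetical rate-of-twist post-processing sketch (illustrative numbers only).
import numpy as np

length = 0.200   # cantilever free length, m (assumed)
width = 0.040    # plate width, m (assumed)

voltages = np.array([50.0, 100.0, 150.0, 200.0])       # applied voltage, V
w_left = np.array([1.1, 2.3, 3.6, 5.0]) * 1e-6         # corner tip deflection, m
w_right = -w_left                                      # antisymmetric (pure torsion)

theta = np.arctan((w_left - w_right) / width)          # tip twist angle, rad
rate_of_twist = theta / length                         # effective rate of twist, rad/m

# Actuation efficiency: rate of twist per unit applied voltage
per_volt = rate_of_twist / voltages
print(rate_of_twist, per_volt)
```

Plotting `rate_of_twist` against voltage makes any nonlinearity in the actuation response, like that noted in the abstract, directly visible.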
108.
109.
110.
Verification and validation benchmarks
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. 
It is argued that the understanding of predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest.
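The manufactured-solutions approach to code verification recommended above can be illustrated on a one-dimensional Poisson solver (a generic example, not taken from the paper): choose an exact solution, derive its forcing term analytically, solve numerically on two grids, and confirm that the observed convergence order matches the scheme's formal order.

```python
# Minimal Method of Manufactured Solutions sketch for -u'' = f on (0, 1),
# u(0) = u(1) = 0, with manufactured solution u(x) = sin(pi x).
import numpy as np

def solve_poisson(n):
    """Second-order finite differences; returns max error vs. the exact solution."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi ** 2 * np.sin(np.pi * x)     # forcing derived from u = sin(pi x)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# Halve the grid spacing (h = 1/40 -> 1/80) and measure the observed order.
e1, e2 = solve_poisson(39), solve_poisson(79)
order = np.log2(e1 / e2)
print(f"errors: {e1:.2e} {e2:.2e}, observed order ~ {order:.2f}")
```

An observed order near 2 verifies the coding of this second-order scheme; a lower order would flag a discretization or implementation error, which is exactly the role of a code-verification benchmark.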