131 query results in total (search time: 609 ms)
91.
Assurance is commonly defined as "something said or done to inspire confidence" (Webster's dictionary). However, the level of confidence a statement or action inspires depends on the quality of its source. Similarly, the assurance that deployed security mechanisms exhibit an appropriate posture depends on the quality of the verification process adopted. This paper presents a novel taxonomy of quality metrics pertinent to gaining assurance in a security verification process. Inspired by the Systems Security Engineering Capability Maturity Model and the Common Criteria, we introduce five ordinal quality levels for a verification process aimed at probing the correctness of runtime security mechanisms. In addition, we analyse the mapping between the quality levels and the capability levels of the following verification metric families: coverage, rigour, depth and independence of verification. The quality taxonomy is part of a framework for the security assurance of operational systems. These metrics can also be used for gaining assurance in other areas, such as legal and safety compliance. Furthermore, by identifying appropriate quality security requirements, the resulting metrics taxonomy could assist manufacturers of information technology (IT) in developing their products or systems. It could also help consumers of IT security products select those that effectively and efficiently match their organisational needs, while IT security evaluators can use it as a reference point when forming judgements about the quality of a security product. We demonstrate the applicability of the proposed taxonomy through access control examples.
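The ordinal-level idea can be illustrated as a lookup: a verification process attains a quality level only if it meets that level's minimum capability in every metric family. This is a hypothetical sketch; the level names and threshold numbers below are illustrative assumptions, not the paper's actual mapping.

```python
# Illustrative (assumed) mapping: each of the five ordinal quality levels
# requires minimum capability levels in the four verification metric families
# named in the abstract. The threshold values are invented for illustration.
QUALITY_LEVELS = {
    1: {"coverage": 1, "rigour": 1, "depth": 1, "independence": 1},
    2: {"coverage": 2, "rigour": 1, "depth": 1, "independence": 1},
    3: {"coverage": 2, "rigour": 2, "depth": 2, "independence": 1},
    4: {"coverage": 3, "rigour": 2, "depth": 2, "independence": 2},
    5: {"coverage": 3, "rigour": 3, "depth": 3, "independence": 3},
}

def quality_level(process):
    """Return the highest quality level whose requirements the process meets."""
    achieved = 0
    for level in sorted(QUALITY_LEVELS):
        req = QUALITY_LEVELS[level]
        if all(process.get(fam, 0) >= cap for fam, cap in req.items()):
            achieved = level
    return achieved

# Example: a process with strong coverage but no independent verification
# stays at a lower assurance level despite its other strengths.
proc = {"coverage": 3, "rigour": 2, "depth": 2, "independence": 1}
print(quality_level(proc))  # 3
```

The ordinal structure makes the levels cumulative: improving one family alone never raises the overall level past the weakest family's ceiling.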
92.

With the popularity of mobile devices, the next generation of mobile networks faces several challenges. Different applications have emerged, each with different requirements, and offering an infrastructure that meets these application-specific requirements is one such challenge. In addition, because of user mobility, the traffic generated by mobile devices at a given location is not constant, making optimal resource allocation difficult. In this context, network function virtualization (NFV) can be used to deploy telecommunication stacks as virtual functions running on commodity hardware, meeting user requirements such as performance and availability. However, deploying virtual functions can be complex: selecting the placement strategy that minimizes resource usage while preserving the performance and availability of network functions has already been proven to be an NP-hard problem. In this paper, we therefore formulate NFV placement as a multi-objective problem that takes into account both the risk associated with the placement and the energy consumption. We propose two optimization algorithms, NSGA-II and GDE3, to solve this problem; both were chosen because they handle multi-objective problems and perform well. As a use case to evaluate and compare the algorithms, we consider a triathlon circuit scenario based on real data from the Ironman route. The results show that GDE3 is able to balance both objectives (minimizing failure and minimizing energy consumption), whereas NSGA-II prioritizes energy consumption.
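Both NSGA-II and GDE3 rank candidate solutions by Pareto dominance. A minimal sketch of that core test (not the authors' implementation; the objective values are hypothetical) for the two objectives named in the abstract:

```python
# Pareto-dominance test underlying NSGA-II and GDE3: a placement survives only
# if no other placement is at least as good on both objectives and strictly
# better on one. Both objectives (failure risk, energy) are minimized.
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Hypothetical (failure_probability, energy_kWh) pairs for candidate placements.
candidates = [(0.10, 50.0), (0.05, 80.0), (0.10, 60.0), (0.20, 40.0)]
print(pareto_front(candidates))  # (0.10, 60.0) is dominated by (0.10, 50.0)
```

The surviving front contains the trade-off placements among which a final choice (e.g. GDE3's balanced pick versus NSGA-II's energy-leaning pick) is made.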

93.
Consider the problem of finding the highly correlated pairs of time series over a time window, and then sliding that window to find the highly correlated pairs over successive co-temporal windows, where each successive window starts only a short time after the previous one. Doing this efficiently and in parallel could help in applications such as sensor fusion, financial trading, and communications network monitoring, to name a few. We have developed a parallel, incremental random-vector (sketching) approach to this problem and compared it with the state-of-the-art nearest-neighbour method iSAX. Whereas iSAX achieves 100% recall and precision for Euclidean distance, the sketching approach is, empirically, at least 10 times faster and achieves 95% recall and 100% precision on real and simulated data. For many applications this speedup is worth the minor reduction in recall. Our method scales to 100 million time series and scales linearly in its expensive steps (and quadratically in the less expensive ones).
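The random-vector idea can be sketched as follows (an assumed illustration, not the paper's exact algorithm): after z-normalization, the Pearson correlation of two windows equals their dot product, which can be approximated from short random projections instead of the full-length series.

```python
# Sketching for correlation: project each unit-length, z-normalized window
# onto k random Gaussian vectors; the mean product of the two sketches is an
# unbiased estimate of the windows' correlation.
import math
import random

def znorm(x):
    """Z-normalize and scale so the vector has unit Euclidean length."""
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x))
    return [(v - mu) / sd for v in x]

def sketch(x, rand_vectors):
    """Project a unit-length window onto each random vector."""
    return [sum(v * r for v, r in zip(x, rv)) for rv in rand_vectors]

def est_corr(sa, sb):
    """Estimate the correlation from two sketches."""
    return sum(a * b for a, b in zip(sa, sb)) / len(sa)

random.seed(0)
n, k = 64, 200                      # window length, sketch size (k << n in practice)
rand_vectors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

x = [math.sin(0.3 * i) for i in range(n)]
y = [2.0 * x[i] + 0.1 for i in range(n)]   # perfectly correlated with x
sx, sy = sketch(znorm(x), rand_vectors), sketch(znorm(y), rand_vectors)
print(est_corr(sx, sy))             # close to 1.0
```

Because sketches are short and composable, they can be maintained incrementally as the window slides and compared in parallel across many series, which is where the reported speedup over exact methods comes from.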
94.
In this study, the main objective is the elimination of the Basic Red 46 dye by coupling two processes: adsorption on activated clay, followed by photocatalysis over ZnO. The adsorption was investigated under different conditions of pH, adsorbent dose, dye concentration, and temperature. The best adsorption yield occurs at near-neutral pH (~7) within 60 min, with an uptake of 97% for a concentration of 25 mg/L and a dose of 0.5 g/L. The equilibrium results were successfully described by the Langmuir model, with an adsorption capacity of 175 mg/g. To investigate the adsorption mechanism, the kinetic constants were determined using the pseudo-first-order, pseudo-second-order, and intraparticle diffusion models; the adsorption of Basic Red 46 is well described by pseudo-second-order kinetics. The second part of this work was dedicated to the photodegradation over ZnO, under solar irradiation, of the residual BR 46 remaining after adsorption. For these remaining concentrations, the removal yield reaches 100%.
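The pseudo-second-order model the abstract identifies as the best fit has the standard integrated form q_t = k2·qe²·t / (1 + k2·qe·t). A small sketch, where qe follows from the reported conditions (25 mg/L × 97% / 0.5 g/L ≈ 48.5 mg/g) but the rate constant k2 is an illustrative assumption:

```python
# Pseudo-second-order (PSO) kinetic model for adsorption uptake over time.
def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q_t (mg/g) at time t (min); qe in mg/g, k2 in g/(mg*min)."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

qe = 48.5      # equilibrium uptake implied by the reported 97% removal conditions
k2 = 0.004     # hypothetical rate constant, for illustration only
for t in (10, 30, 60):
    print(t, round(pseudo_second_order(t, qe, k2), 1))  # rises toward qe
```

The curve rises steeply at first and approaches qe asymptotically, the shape that distinguishes PSO kinetics from the pseudo-first-order exponential approach.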
95.
Data mining and knowledge discovery aim at producing useful and reliable models from data. Unfortunately, some databases contain noisy data that hinder the generalization of these models. An important source of noise is mislabelled training instances. We offer a new approach that improves classification accuracy by means of a preliminary filtering procedure. An example is suspect when, in its neighbourhood as defined by a geometrical graph, the proportion of examples of the same class is not significantly greater than in the database as a whole. Such suspect examples in the training data can be removed or relabelled, and the filtered training set is then provided as input to the learning algorithms. Our experiments on ten benchmarks from the UCI Machine Learning Repository, using 1-NN as the final classifier, show that removal gives better results than relabelling. Removal maintains the generalization error rate when between 0 and 20% class noise is introduced, especially when the classes are well separable. Finally, the proposed filtering method is compared with the relaxation relabelling scheme.
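A simplified sketch of the filtering rule (using a plain k-NN neighbourhood as a stand-in for the paper's geometrical graph): an example is flagged when the proportion of same-class examples among its neighbours does not exceed that class's proportion in the whole dataset.

```python
# Neighbourhood-based mislabel filter (simplified; k-NN instead of a
# geometrical graph, and a plain threshold instead of a significance test).
import math

def neighbours(data, i, k):
    """Indices of the k nearest points to data[i] (Euclidean distance)."""
    xi, _ = data[i]
    dist = sorted((math.dist(xi, xj), j) for j, (xj, _) in enumerate(data) if j != i)
    return [j for _, j in dist[:k]]

def suspects(data, k=3):
    """Indices of examples whose local class agreement does not beat the prior."""
    n = len(data)
    out = []
    for i, (_, yi) in enumerate(data):
        prior = sum(1 for _, y in data if y == yi) / n   # class proportion overall
        nb = neighbours(data, i, k)
        local = sum(1 for j in nb if data[j][1] == yi) / k
        if local <= prior:
            out.append(i)
    return out

# Two well-separated clusters with one mislabelled point inside cluster "b".
data = [([0.0, 0.0], "a"), ([0.1, 0.0], "a"), ([0.0, 0.1], "a"),
        ([5.0, 5.0], "b"), ([5.1, 5.0], "b"), ([5.0, 5.1], "b"),
        ([5.05, 5.05], "a")]   # mislabelled: sits in the "b" cluster
print(suspects(data))  # [6]
```

Per the paper's finding, the flagged indices would simply be removed (rather than relabelled) before training the final 1-NN classifier.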
96.
The architecture of the Ouargla Ksar has long been recognized as an example of adaptation to the harsh climate of the deep Algerian desert. Over the last few decades, its original urban structure has undergone changes due to accelerated and uncontrolled modern urbanization, which has had negative implications and degraded its thermal characteristics. This article seeks to understand the bioclimatic adaptation concept of this Ksar and to assess to what extent its morphological transformation has affected its microclimatic conditions. The assessment is made through a comparative study of two different areas, one untransformed and the other transformed. A reference weather station situated in the suburbs was also used to consolidate the comparison. The investigation was undertaken using on-site measurements and observations.
97.
Single crystals of (1 − x)BaTiO3 + xNaNbO3 (BTNN) with x = 0.84 were obtained by high-temperature solution growth using Na2B4O7 as the solvent. The room-temperature crystal structure of the BTNN 16/84 phase was determined from single-crystal X-ray diffraction data in the tetragonal system with space group P4bm. The refinement from 246 independent reflections led to the following parameters: a = b = 5.5845(3) Å, c = 3.9453(2) Å, V = 123.041(11) Å3, Z = 2, with final cRwp = 0.150 and RB = 0.041. The structure of the BTNN 16/84 phase can be described as a three-dimensional framework built up from (Nb,Ti)O6 octahedra, with Na and Ba in the dodecahedral sites of a perovskite-like type. Some mm3-sized crystals were selected, and various dielectric (ferroelectric, pyroelectric, and piezoelectric) measurements were performed. The transition from the paraelectric to the ferroelectric state observed at around 460 K is in good agreement with ceramics of similar composition. Dielectric, piezoelectric and pyroelectric measurements on the crystals confirm the ferroelectric behaviour of BTNN 16/84.
98.
Multimedia Tools and Applications - The introduction of technological innovations is essential for accident mitigation in work environments. In a human-robot collaboration scenario, the current...
99.
Next-generation cloud data centers are based on software-defined data center infrastructures that promote flexibility, automation, optimization, and scalability. The Redfish standard and the Intel Rack Scale Design technology enable software-defined infrastructure: they disaggregate bare-metal compute, storage, and networking resources into virtual pools from which resources can be composed dynamically to create virtual performance-optimized data centers (vPODs) tailored to workload-specific demands. This article proposes four chassis design configurations, based on the Distributed Management Task Force's Redfish industry standard, for composing vPOD systems: a fully shared design, a partially shared homogeneous design, a partially shared heterogeneous design, and a not-shared design; they differ mainly in the level of hardware disaggregation used. Furthermore, we propose models that combine reliability block diagrams and stochastic Petri nets to represent the complex relationship between the pool of disaggregated hardware resources and their power and cooling sources in a vPOD. The four design configurations were analyzed and compared in terms of availability and component sensitivity indices by scaling their configurations over different data center infrastructures. From the results we can state that, in general, increasing hardware disaggregation improves availability. Beyond a certain point, however, the availability of the fully shared, partially shared homogeneous, and partially shared heterogeneous configurations remains almost equal, while the not-shared configuration is still able to improve its availability.
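The reliability-block-diagram side of such models reduces to two compositions: components a vPOD needs simultaneously combine in series (availabilities multiply), while redundant components combine in parallel (all replicas must be down together). A minimal sketch with illustrative numbers, not the paper's parameters:

```python
# Steady-state availability composition for a reliability block diagram (RBD).
def series(*avails):
    """All components required: availabilities multiply."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel(*avails):
    """Redundant components: unavailable only if every replica is down."""
    u = 1.0
    for x in avails:
        u *= (1.0 - x)
    return 1.0 - u

# Hypothetical vPOD: compute, storage, and networking pools in series, fed by
# a power source with two redundant supplies (the shared-resource pattern).
power = parallel(0.999, 0.999)
vpod = series(power, 0.9995, 0.9990, 0.9992)
print(vpod)  # redundancy makes power a negligible contributor to downtime
```

This also illustrates the paper's headline effect: sharing a redundant pool raises the effective availability of the shared element, so the series product is dominated by the non-redundant components.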
100.
This study identifies coil parameters using numerical methods. Eddy current testing (ECT) is used to evaluate a crack with the aid of numerical simulations that exploit the identified parameters. We compare the performance of the GA and SPSA algorithms in identifying the parameter values of the coil sensors. The optimized probe geometry is then introduced into the simulation: three-dimensional finite element simulations (with the FLUX finite element code) were conducted to obtain the eddy current signals resulting from a crack in an aluminium plate. The simulation results are compared with experimental measurements for the defect present in the plate.
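SPSA is attractive here because each iteration needs only two loss evaluations, regardless of how many parameters are identified, which suits an expensive loss such as a finite-element simulation. A minimal sketch (illustrative, not the authors' setup; the "loss" below is a stand-in quadratic misfit against hypothetical true coil parameters):

```python
# Simultaneous Perturbation Stochastic Approximation (SPSA): estimate the full
# gradient from just two loss evaluations per iteration by perturbing every
# parameter at once along a random +/-1 direction.
import random

def spsa(loss, theta, iters=2000, a=0.1, c=0.1, seed=1):
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        lp = loss([t + ck * d for t, d in zip(theta, delta)])
        lm = loss([t - ck * d for t, d in zip(theta, delta)])
        g = (lp - lm) / (2.0 * ck)
        # 1/delta_i == delta_i for +/-1 perturbations
        theta = [t - ak * g * d for t, d in zip(theta, delta)]
    return theta

# Hypothetical "true" coil parameters (e.g. radius, lift-off, scaled turns).
true = [3.5, 0.8, 1.2]
loss = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, true))
est = spsa(loss, [1.0, 1.0, 1.0])
print([round(v, 2) for v in est])   # converges to the true parameters
```

In the actual identification problem, `loss` would be the mismatch between the FLUX-simulated eddy current signal for candidate coil parameters and the measured signal, making the two-evaluations-per-step economy of SPSA (versus a population-based GA) the key trade-off under study.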