108 query results found; search took 15 ms.
1.
Data distribution management (DDM) plays a key role in traffic control for large-scale distributed simulations. In recent years, several solutions have been devised to make DDM more efficient and adaptive to different traffic conditions. Examples include the region-based, fixed grid-based, and dynamic grid-based (DGB) schemes, as well as the grid-filtered region-based and agent-based DDM schemes. However, less effort has been directed toward improving the processing performance of DDM techniques. This paper presents a novel DDM scheme, the adaptive dynamic grid-based (ADGB) scheme, which optimizes DDM time through the analysis of matching performance. ADGB uses an advertising scheme in which information about the target cell involved in matching subscribers to publishers is known in advance. An important concept, the distribution rate (DR), is devised: the DR represents the relative processing load and communication load generated at each federate. The DR and the matching performance are used within ADGB to select, throughout the simulation, the advertisement scheme that achieves the maximum gain with acceptable network traffic overhead. Assuming identical worst-case propagation delays and a high matching probability, performance estimates show that ADGB can achieve a maximum efficiency gain of 66% over the DGB scheme. The novelty of the ADGB scheme is its focus on improving performance, an important (and often forgotten) goal of DDM strategies.
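To make the selection idea concrete, here is a minimal sketch of how a federate might weigh matching cost against advertisement overhead. The class, the function names, and the exact DR formula are assumptions for illustration, not the paper's definitions:

```python
# Hypothetical sketch of the distribution-rate (DR) idea: each federate weighs
# its relative matching (processing) load against the extra advertisement
# traffic (communication load) each scheme would generate, then picks the
# scheme with the best gain whose overhead stays acceptable.
from dataclasses import dataclass

@dataclass
class SchemeEstimate:
    name: str
    matching_cost: float    # expected subscriber/publisher matching work
    advert_overhead: float  # extra messages to advertise target cells

def distribution_rate(proc_load: float, comm_load: float,
                      total_proc: float, total_comm: float) -> float:
    """Relative share of load this federate generates (assumed definition)."""
    return 0.5 * (proc_load / total_proc + comm_load / total_comm)

def pick_scheme(estimates, max_overhead: float):
    """Choose the scheme with the lowest matching cost among those whose
    advertisement overhead is acceptable; None if none qualifies."""
    feasible = [e for e in estimates if e.advert_overhead <= max_overhead]
    return min(feasible, key=lambda e: e.matching_cost) if feasible else None

schemes = [SchemeEstimate("no-advert", 100.0, 0.0),
           SchemeEstimate("advert-target-cell", 34.0, 8.0)]  # ~66% gain
print(pick_scheme(schemes, max_overhead=10.0).name)  # advert-target-cell
```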
2.
In recent years, we have witnessed growing interest in the class of applications based on synchronous collaboration. Several techniques for collaborative virtual environments (CVE) and collaborative haptic, audio and visual environments (C-HAVE) have been designed. However, several challenging issues remain to be resolved before CVE and C-HAVE become commonplace. In this paper, we focus on applications based on closely coupled and highly synchronized haptic tasks that require a high level of coordination among the participants. Four main protocols have been designed to resolve the synchronization issues in such environments: the synchronous collaboration transport protocol, the selective reliable transmission protocol, the reliable multicast transport protocol, and the scalable reliable multicast. While these four protocols have shown good performance for the CVE and C-HAVE class of applications, none of them meets all of the basic CVE requirements, i.e., scalability, reliability, synchronization, and minimum delay. In this paper, we present a hybrid protocol that satisfies all of the CVE and C-HAVE requirements and discuss its implementation and results in two tele-surgery applications. This work is partially supported by grants from the Canada Research Chair Program, NSERC, the OIT/Ontario Distinguished Researcher Award, an Early Research Award, and an ORNEC research grant.
4.
We consider sensor networks where the sensor nodes are attached to entities that move in a highly dynamic, heterogeneous manner. To capture this mobility diversity we introduce a new network parameter, the direction-aware mobility level, which measures how fast and how close each mobile node is expected to get to the data destination (the sink). We then provide local, distributed data dissemination protocols that adaptively exploit node mobility to improve performance. In particular, "high" mobility is used as a low-cost replacement for data dissemination (the node ferries the data itself), while in the case of "low" mobility either (a) data propagation redundancy is increased (when highly mobile neighbors exist) or (b) long-distance data transmissions are used (when the entire neighborhood is of low mobility) to accelerate data dissemination toward the sink; the sketch below illustrates this three-way rule. An extensive performance comparison with relevant methods from the state of the art demonstrates significant improvements: latency is reduced by up to a factor of four while energy dissipation and delivery success are kept at very satisfactory levels.
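A compact sketch of the three-way forwarding decision driven by the mobility level; the thresholds and action names are hypothetical, not the paper's protocol constants:

```python
# Illustrative decision rule: a node's direction-aware mobility level (how
# fast/close it is expected to get to the sink) selects one of three actions.
HIGH, LOW = 0.7, 0.3  # assumed mobility-level thresholds

def forwarding_action(my_mobility: float, neighbor_mobility: list[float]) -> str:
    if my_mobility >= HIGH:
        # "High" mobility: carry (ferry) the data toward the sink at low cost.
        return "carry"
    if any(m >= HIGH for m in neighbor_mobility):
        # "Low" own mobility but highly mobile neighbors exist:
        # replicate the data to them (increased propagation redundancy).
        return "replicate-to-mobile-neighbors"
    # Entire neighborhood is slow: spend energy on a long-distance
    # transmission to accelerate progress toward the sink.
    return "long-range-transmit"

print(forwarding_action(0.1, [0.2, 0.9]))  # replicate-to-mobile-neighbors
```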
5.
One of the most challenging issues facing vehicular networks lies in the design of an efficient MAC protocol, owing to the mobile nature of the nodes and the interference associated with the dynamic environment. Moreover, the delay constraints of safety applications add complexity and latency requirements to the design. Existing MAC protocols overcome some of these challenges but do not provide an integrated solution. Hence, the merit of this work lies in designing an efficient MAC protocol that addresses the various VANET challenges in a complete end-to-end solution. In this work, we propose an efficient multichannel QoS cognitive MAC protocol (MQOG). MQOG assesses channel quality prior to transmission, employing dynamic channel allocation and negotiation algorithms to achieve significant increases in channel reliability and throughput while meeting delay constraints and simultaneously addressing quality of service; a sketch of this assess-then-allocate step follows. The uniqueness of MQOG lies in its use of the free unlicensed bands. The proposed protocols were implemented in OMNeT++ 4.1, and extensive experiments demonstrated faster and more efficient reception of safety messages compared with existing VANET MAC protocols. Finally, improvements in delay, packet delivery ratio, and throughput were observed.
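A hedged sketch of the assess-then-allocate step: sense each candidate channel (including the free unlicensed ones) and pick the least-occupied channel that meets the QoS bound of the traffic class. The channel set, occupancy model, and thresholds are illustrative assumptions, not MQOG's actual algorithm:

```python
import random

# Hypothetical channel set with assumed long-run occupancy (busy probability);
# the "unlicensed" entries stand in for MQOG's use of free unlicensed bands.
CHANNELS = {"CCH": 0.8, "SCH1": 0.5, "unlicensed-1": 0.3, "unlicensed-2": 0.2}

def sense(channel: str) -> float:
    """Pretend spectrum sensing: observed occupancy in [0, 1]."""
    return min(1.0, max(0.0, CHANNELS[channel] + random.uniform(-0.1, 0.1)))

def allocate(qos_class: str):
    """Sense every channel once, then pick the least-occupied one that meets
    the QoS bound; return None (defer or negotiate) if none qualifies."""
    limit = 0.4 if qos_class == "safety" else 0.7  # assumed bounds
    readings = {ch: sense(ch) for ch in CHANNELS}
    best = min(readings, key=readings.get)
    return best if readings[best] <= limit else None

print(allocate("safety"))  # typically one of the unlicensed channels
```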
6.
Silicene, a new 2D material, has attracted intense research interest because of the ubiquitous use of silicon in modern technology. However, producing free-standing silicene has proved to be a huge challenge. Until now, silicene could be synthesized only on metal surfaces, where it naturally forms strong interactions with the metal substrate that modify its electronic properties. Here, the authors report the first experimental evidence of silicene nanoribbons on an insulating NaCl thin film. This work represents a major breakthrough for the study of the intrinsic properties of silicene and, by extension, of other 2D materials that have so far only been grown on metal surfaces.
7.
This paper focuses on estimating the effect of root-pass chemical composition on the microstructure and mechanical properties of multi-pass GTA duplex stainless steel welds. We used two different filler metals, the super duplex ER 2594 and the duplex ER 2209. The microstructures of the different passes of the welded joints are investigated using optical microscopy and scanning electron microscopy. The relationship between the mechanical properties, corrosion resistance, and microstructure of the welded joints is evaluated. It is found that the tensile and toughness properties of the first weldment, which combines ER 2594 in the root pass with ER 2209 in the remaining passes, are better than those of the second weldment, which employs ER 2209 in all passes, owing to grain refinement in the root pass and its alloying-element content, notably chromium (Cr) and nitrogen (N). The microstructure indicates the presence of austenite in different forms in the weld zone of ER 2209; the same holds for ER 2594, but with higher austenite content and finer grain size, in particular Widmanstätten austenite (WA). Potentiodynamic polarization tests in 3.5% NaCl solution at room temperature demonstrated that the corrosion resistance of the first weld metal is higher than that of the second. This work thus achieves improved corrosion resistance through an appropriate choice of filler metal, without introducing structural heterogeneity or detrimental changes in the mechanical properties.
8.
Aggregation/disaggregation is a method for implementing multi-resolution simulations within a High Level Architecture (HLA) federation. HLA is a standard developed by the U.S. Department of Defense (DoD) to facilitate linking different types of simulations, in various locations, into an interactive, full-scale simulation called a federation. Data Distribution Management (DDM) is a High Level Architecture/Run-time Infrastructure (HLA/RTI) service that manages the distribution of state updates and interaction information and controls the volume of data exchanged in large-scale distributed simulations. The purpose of HLA is to promote interoperability and reuse among heterogeneous simulations, including simulations that offer varied levels of resolution, to provide practical training to military personnel of different ranks. The purpose of aggregation/disaggregation is to ensure consistency in state updates between federates simulating objects at different levels of resolution. This paper focuses on the scalability of aggregation/disaggregation under different DDM implementations and examines the effects on the performance of large-scale simulations. We implement a federate-based aggregation/disaggregation scheme, originally introduced in [TAN01], with a tank dogfight scenario, aggregating five tanks into one tank battalion and disaggregating the battalion back into five individual entities (tanks); a toy sketch of this state handling follows. The DDM methods we analyze are the fixed grid-based method, the dynamic grid-based method, and the region-based method. In [TAN01], testing of this federate-based aggregation/disaggregation was limited to a dual federation and a single DDM scheme. To determine the scalability of aggregation/disaggregation under the three DDM methods, we measure the communication overhead and analyze performance during a federation execution. We present the results of extensive testing, varying the number of aggregation/disaggregation requests, the number of multi-resolution federates participating in the federation, the number of objects, and the number/size of the grids, and report on the performance evaluation of our protocols using an extensive set of simulation experiments. This work was partially supported by grants from NSERC, the Canada Research Chairs Program, the Canada Foundation for Innovation, and the OIT/Distinguished Researcher Award.
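As a toy illustration of the tank/battalion example above, the sketch below aggregates five entities into one low-resolution object and restores them on disaggregation. The classes and the centroid representation are invented for illustration and are not the scheme of [TAN01]:

```python
# Toy aggregation/disaggregation: one aggregate object stands in for five
# high-resolution entities, whose states are retained for later restoration.
from dataclasses import dataclass, field

@dataclass
class Tank:
    tank_id: int
    x: float
    y: float

@dataclass
class Battalion:
    tanks: list = field(default_factory=list)

    @property
    def position(self):
        # Represent the battalion at the tanks' centroid so only one
        # object's state updates flow through the DDM regions.
        n = len(self.tanks)
        return (sum(t.x for t in self.tanks) / n,
                sum(t.y for t in self.tanks) / n)

def aggregate(tanks):
    """Replace individual entities with one low-resolution object."""
    return Battalion(tanks=list(tanks))

def disaggregate(battalion):
    """Restore the retained high-resolution entities, keeping state consistent."""
    return battalion.tanks

platoon = [Tank(i, float(i), 0.0) for i in range(5)]
bn = aggregate(platoon)
print(bn.position)            # (2.0, 0.0)
print(len(disaggregate(bn)))  # 5
```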
10.
The contribution of non-linear orthogonal regression to the estimation of individual pharmacokinetic parameters when both drug concentrations and sampling times are subject to error was studied. The first objective was to introduce and compare four numerical approaches, involving different degrees of approximation, for parameter estimation by orthogonal regression. The second objective was to compare orthogonal with non-orthogonal regression. These evaluations were based on simulated data sets from 300 'subjects', enabling the precision and accuracy of the parameter estimates to be determined. The pharmacokinetic model was a one-compartment open model with first-order absorption and elimination rates. The inter-individual coefficients of variation (CV) of the pharmacokinetic parameters were in the range 33-100%. Eight measurement-error models for times and concentrations (homo- or heteroscedastic with constant CV) were considered. The accuracy of the four algorithms was very close in almost all instances (typical bias, 1-4%). Precision showed three expected trends: the root mean squared error (RMSE) increased when the residual error was larger or the number of observations smaller, and it was highest for the absorption rate constant and the common error variance. Overall, RMSE ranged from 5 to 40%. It was found that the simplest algorithm for orthogonal regression performed as well as the more complicated approaches. Errors in sampling time resulted in increased bias and imprecision in the individual parameter estimates (especially for k(a) in our example) and in the common error variance when the estimation method did not take these errors into account. In this situation, use of orthogonal regression resulted in smaller bias and better precision.
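For concreteness, here is a minimal errors-in-variables sketch of this setup: the one-compartment model C(t) = D·ka / (V·(ka − ke)) · (e^(−ke·t) − e^(−ka·t)) is fitted by treating the true sampling times as extra unknowns, the simplest of the orthogonal-regression formulations. Dose, parameter values, and noise levels are illustrative only, not the paper's simulation design:

```python
# Orthogonal (errors-in-variables) fit of a one-compartment model with
# first-order absorption (ka) and elimination (ke); assumes ka != ke.
import numpy as np
from scipy.optimize import least_squares

def conc(t, ka, ke, v, dose=100.0):
    """One-compartment open model with first-order absorption/elimination."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def orthogonal_fit(t_obs, c_obs, sigma_t, sigma_c, theta0):
    """Minimize weighted residuals in BOTH axes: the latent 'true' sampling
    times are optimized jointly with (ka, ke, V)."""
    def residuals(p):
        ka, ke, v = p[:3]
        tau = p[3:]  # latent true sampling times
        return np.concatenate([(t_obs - tau) / sigma_t,
                               (c_obs - conc(tau, ka, ke, v)) / sigma_c])

    p0 = np.concatenate([theta0, t_obs])  # start latent times at observed ones
    return least_squares(residuals, p0).x[:3]

rng = np.random.default_rng(0)
t_true = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)
c_true = conc(t_true, ka=1.2, ke=0.15, v=30.0)
t_obs = t_true + rng.normal(0, 0.1, t_true.size)        # error in times
c_obs = c_true * (1 + rng.normal(0, 0.1, t_true.size))  # ~10% CV in concs
print(orthogonal_fit(t_obs, c_obs, 0.1, 0.1 * c_obs, [1.0, 0.2, 25.0]))
```

Dropping the time residuals from `residuals` recovers ordinary (non-orthogonal) regression, which is the comparison the study makes.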