Similar Literature
20 similar documents found (search time: 93 ms)
1.
2.
The current computing facilities at TUNL have been in operation for 14 years. Design considerations for a new computer system, to be installed and operational by mid-1980, are discussed. The new large-memory 32-bit minicomputers appear to provide the most cost-effective way of handling the data acquisition and analysis needs of a medium-size nuclear physics laboratory.

3.
The concentration preprocessing and fan-out (CPPF) system is one of the electronic subsystems of the upgraded Compact Muon Solenoid (CMS) Level-1 trigger system. Its hardware comprises eight specially designed CPPF cards, one CMS card called AMC13, one commercial MicroTCA Carrier Hub (MCH) card, and a MicroTCA shelf. Powerful online software is needed for the system, providing reliable configuration and monitoring of the hardware and a graphical interface for executing all actions and publishing monitoring messages. Further, to control and monitor the large amount of homogeneous hardware, the SoftWare Automating conTrol of Common Hardware (SWATCH) concept was proposed and developed. SWATCH provides a generic structure that is flexible for customization. The structure includes a hardware access library based on the IPbus protocol, which assumes a virtual 32-bit address/32-bit data bus and builds a simple hardware access layer. It also provides a graphical user interface based on modern web technology, accessible through a web page. The CPPF controlling and monitoring online software was customized from a common SWATCH cell; it provides a finite state machine (FSM) for configuring the entire CPPF hardware and five monitoring objects that periodically collect monitoring data from the five main functional modules of the CPPF hardware. This paper describes the development of the CPPF SWATCH cell in detail.
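The core idea of the IPbus-based access layer, a virtual 32-bit address/32-bit data bus with monitoring objects polling register blocks, can be sketched as below. This is a minimal illustration, not the actual CPPF register map or the uHAL library API; all addresses and class names are hypothetical.

```python
# Sketch of a virtual 32-bit address / 32-bit data register bus, in the
# spirit of the IPbus abstraction described above. Addresses are hypothetical.
class VirtualBus:
    """Simulates a 32-bit address / 32-bit data register space."""
    MASK32 = 0xFFFFFFFF

    def __init__(self):
        self._regs = {}

    def write(self, addr, value):
        self._regs[addr & self.MASK32] = value & self.MASK32

    def read(self, addr):
        # Unwritten registers read back as zero in this toy model.
        return self._regs.get(addr & self.MASK32, 0)


class MonitorObject:
    """Hypothetical monitoring object that polls one functional module's
    register block, analogous to the five CPPF monitoring objects."""

    def __init__(self, bus, base_addr):
        self.bus = bus
        self.base = base_addr

    def collect(self, n_regs=4):
        return {self.base + i: self.bus.read(self.base + i)
                for i in range(n_regs)}


bus = VirtualBus()
bus.write(0x1000, 0xDEADBEEF)          # e.g. a status word from one module
snapshot = MonitorObject(bus, 0x1000).collect()
```

A real SWATCH cell would run such collectors periodically and publish the snapshots to the monitoring GUI.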

4.
The Trigger Supervisor is an online software system designed for the CMS experiment at CERN. Its purpose is to provide a framework to set up, test, operate and monitor the trigger components on one hand and to manage their interplay and the information exchange with the run control part of the data acquisition system on the other. The Trigger Supervisor is conceived to provide a simple and homogeneous client interface to the online software infrastructure of the trigger subsystems. The functional and nonfunctional requirements, the design, the operational details, and the components needed in order to facilitate a smooth integration of the trigger software in the context of CMS are described.

5.
The Experiment Data Acquisition and Analysis System (EDAS) of GSI, designed to support the data processing associated with nuclear physics experiments, provides three modes of operation: real-time, interactive replay and batch replay. The real-time mode is used for data acquisition and data analysis during an experiment performed at the heavy-ion accelerator at GSI. The computing resources provided by dedicated Experiment Computers are insufficient for the more complex experiments performed today. To meet demands for higher data rates, more computing power and more support for data analysis during an experiment, a distributed system was conceived. The GSI High Speed Data Acquisition Network provides the hardware support for the system; it consists of a number of dedicated Experiment Computers which directly control experiments, a multi-user mainframe which performs the data analysis and data storage for all experiments, and a concentrator to connect all Experiment Computers to the mainframe. Three software subsystems were designed, one for each component of the distributed system. An experiment may be performed either in Stand Alone Mode, using only the Experiment Computers, or in Extended Mode, using all available computing resources. The Extended Mode combines the real-time response of a dedicated minicomputer with the computing resources of a large computing environment. This paper first gives an overview of EDAS and presents the GSI High Speed Data Acquisition Network. The Data Acquisition Modes and the Extended Mode are then introduced.

6.
The trend in structural analysis programs is towards larger, more integrated, user-orientated systems. The main aim of organizing structural analysis programs into systems is to give users more versatile tools that reduce the required man-days and computing costs and increase the scope and potential of the programs. A large modular system is described, constituting the most widely used structural analysis packages developed by the CEGB. The principal motivation in developing the system was to rationalize and simplify the considerable diversity of applications and techniques currently available. The applications encompass elastic, thermal, dynamic, creep and plastic analyses of arbitrary two-dimensional and three-dimensional structures. The system uses a standardized form of input for all applications and provides user aids such as mesh generation, error facilities and output facilities (online and graphical) as integrated system modules. Calculational packages at present integrated into the system comprise FLHE-BERSAFE (thermal and elastic, two and three dimensions), TESS (creep and plasticity, two dimensions) and STAG21 (visco-elastic, two dimensions). Various element types and a comprehensive selection of solution techniques, based mainly on the finite element displacement model, can be used in the analyses. All data presented to the system are in the form of files (datalists) stored on 3330 magnetic disks. A comprehensive permanent disk library containing the thermal and mechanical properties of the most commonly used materials is available to all users. The system is designed to be ‘open-ended’ in the sense that new subsystems can be added easily. It is envisaged that these additions will include structural analysis applications strictly outside the area of finite element analysis, e.g. frameworks. The internal structure and mode of operation of the system are described in detail.
Several diverse examples of nuclear design problems that have been studied are given. An examination and comparison with other large structural analysis systems currently available is given, together with a discussion of possible future developments in this field.

7.
To accommodate the accumulation of BESIII experimental data and the sharp growth in resource demand, this paper investigates the use of cloud computing resources for BESIII data processing and analysis, focusing on five key issues: performance evaluation, software deployment, user access, resource scheduling, and data management. Through the BESIII distributed computing system, cloud computing resources were integrated, achieving elastic scheduling and successful execution of BESIII jobs on cloud resources. This lays a foundation for future large-scale applications.

8.
A graphical system was designed and implemented to meet the needs of large-scale batch job submission and management in a grid environment. It overcomes the difficulties of the command-line approach, enabling the submission, management and monitoring of large batches of jobs. The system works well on local clusters and is easily extended to other grid application domains.

9.
Modern scientific research integrates resources from multiple organizations worldwide: computing, storage and network resources, as well as human expertise and expensive intelligent instruments located in different regions. The grid is a computing technology designed to support these characteristics of scientific research, and it is therefore expected to become part of the infrastructure of future science. As a global-scale computing technology, it must address many key technical challenges. This paper introduces the grid architecture, the grid security infrastructure, and a grid information management and monitoring system we designed.

10.
Grid security is one of the core issues in grid computing, and authorization within grid security is a current research focus. Addressing authorization in grid environments, this paper discusses the shortcomings of the widely used GSI (Grid Security Infrastructure), analyzes several recent grid authorization mechanisms in terms of their architectures, policy descriptions, engine characteristics and related applications, and summarizes the features of each.

11.
The MDSplus data system has been in operation on several fusion machines since 1991 and is currently in use at over 30 sites spread over five continents. A consequence is the extensive feedback provided by the MDSplus user community for bug fixes and improvements; the evolution of MDSplus is therefore keeping pace with the evolution of data acquisition and management techniques. In particular, the recent evolution of MDSplus has been driven by the change in the paradigm for data acquisition in long-lasting plasma discharges, where a sustained data stream is transferred from the acquisition devices into the database. Several new features are currently available or are being implemented in MDSplus. The features already implemented include a comprehensive object-oriented interface to the system, Python support for data acquisition devices and full integration in EPICS. Work is in progress on the integration of multiple protocols and security systems in remote data access, a new high-level data view layer and a new version of the jScope tool for online visualization and the optimized visualization of very large signals.

12.
A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users, and allows access from any internet connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems.
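The "URL acts as the API" idea can be illustrated as follows: the resource path identifies the tree, shot and data node, and query parameters select processing filters. The path layout, host name and filter parameters below are invented for illustration; they are not the actual H1DS URL scheme.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Sketch of a RESTful, URL-as-API data request in the spirit of H1DS.
# The path scheme /data/<tree>/<shot>/<node>/ and the filter parameter
# names are hypothetical.
def build_data_url(host, tree, shot, node, filters=None):
    path = f"/data/{tree}/{shot}/{node}/"
    query = urlencode(filters or {})
    return f"http://{host}{path}" + (f"?{query}" if query else "")

url = build_data_url("h1ds.example.org", "h1data", 58123,
                     "mirnov/channel_00",
                     {"f1_name": "resample", "f1_arg0": "1000"})

# Any HTTP client (browser, curl, requests) can now fetch this URL;
# here we only parse it back to show the request is fully URL-encoded.
parsed = urlparse(url)
```

Because every request is an ordinary URL, the same interface works from any internet-connected device without a client library.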

13.
Code generation can greatly improve the quality and productivity of software development while reducing development risk. Most existing code generators are based on UML model-driven techniques and do not fit the development needs of numerical software for nuclear power applications well. Targeting the design characteristics of scientific computing programs, we developed FCG, a code generator written in C#. From input metadata, FCG automatically generates Fortran code for Module variable definitions, as well as memory-allocation and data-access interfaces for dynamic variables, which programs can call directly. FCG has been applied in the development of the COSINE integrated core design and system analysis platform. Practice shows that FCG greatly improves the development efficiency of nuclear power software while reducing the defect rate.
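The metadata-to-Fortran generation step can be sketched as below. This is a minimal illustration of the idea, written in Python rather than C#; the metadata schema, type names and generated routine names are invented and do not reflect FCG's actual formats.

```python
# Sketch of metadata-driven generation of a Fortran module: variable
# declarations plus an allocation routine for dynamic (allocatable) arrays.
# The metadata keys ("name", "type", "rank") are a hypothetical schema.
FTYPE = {"real8": "real(8)", "int4": "integer(4)"}

def generate_module(name, variables):
    lines = [f"module {name}", "  implicit none"]
    for v in variables:
        dims = ""
        if v["rank"]:
            colons = ",".join(":" for _ in range(v["rank"]))
            dims = f", dimension({colons}), allocatable"
        lines.append(f"  {FTYPE[v['type']]}{dims} :: {v['name']}")
    lines += ["contains",
              f"  subroutine alloc_{name}(n)",
              "    integer, intent(in) :: n"]
    for v in variables:
        if v["rank"]:
            shape = ",".join("n" for _ in range(v["rank"]))
            lines.append(f"    allocate({v['name']}({shape}))")
    lines += [f"  end subroutine alloc_{name}", f"end module {name}"]
    return "\n".join(lines)

meta = [{"name": "flux", "type": "real8", "rank": 2},
        {"name": "ngroup", "type": "int4", "rank": 0}]
src = generate_module("core_data", meta)
```

The generated `src` string is Fortran source text that a build system would write to a `.f90` file and compile with the rest of the program.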

14.
The 3081/E is a second-generation emulator of an IBM mainframe. One of its applications will be to form part of the data acquisition system of the upgraded Mark II detector for data taking at the SLAC linear collider. Since the processor has no direct connections to I/O devices, a FASTBUS interface will be provided to allow communication with both the SLAC Scanner Processors (which are responsible for the accumulation of data at the crate level) and the experiment's VAX 8600 mainframe. The 3081/Es will supply a significant amount of online computing power to the experiment (a single 3081/E is equivalent to 4-5 VAX 11/780s). A major advantage of the 3081/E is that program development can be done on an IBM mainframe (such as the one used for offline analysis), giving the programmer access to a full range of debugging tools. The processor's performance can be continually monitored by comparing its results with those obtained when the same program is run on an IBM computer.

15.
To meet the demands of high-volume data transfer in the electronics system of the Jiangmen Underground Neutrino Observatory (JUNO) experiment, the use of Huffman coding for electronics data processing was studied. By combining difference (delta) computation with Huffman coding, large volumes of data can be processed and compressed in real time. Simulations with large data sets show that the algorithm achieves a compression ratio of 25% while preserving the original information, and that the algorithm is stable, maintaining a fairly constant compression ratio across different experimental data. This provides a reference scheme for data compression in the JUNO experiment.
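The difference-plus-Huffman scheme can be sketched as follows: encode each sample as its delta from the previous sample, then Huffman-code the deltas, which cluster around small values for slowly varying waveforms. The waveform and the 12-bit raw sample width below are synthetic assumptions, not JUNO data or the paper's exact algorithm.

```python
import heapq
from collections import Counter

# Build a Huffman code (symbol -> bit string) for a symbol sequence.
def huffman_code(symbols):
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique tiebreaker so dicts are never compared.
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + p for s, p in c1.items()}
        merged.update({s: "1" + p for s, p in c2.items()})
        heapq.heappush(heap, (n1 + n2, count, merged))
        count += 1
    return heap[0][2]

# Delta-encode the samples, Huffman-code the deltas, and compare bit counts.
def compress_bits(samples, bits_per_sample=12):
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    code = huffman_code(deltas)
    coded = sum(len(code[d]) for d in deltas)
    return coded, len(samples) * bits_per_sample

# Synthetic slowly varying waveform: a few repeated deltas dominate,
# so the Huffman codes for them are short.
wave = [100 + 20 * (i % 5) for i in range(1000)]
coded_bits, raw_bits = compress_bits(wave)
ratio = coded_bits / raw_bits
```

On this synthetic waveform the delta distribution has only three distinct values, so the compressed size is a small fraction of the raw 12-bit encoding; real detector waveforms would show a less extreme but similar effect.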

16.
On the Compact Muon Solenoid (CMS) experiment for the Large Hadron Collider (LHC) at the CERN laboratory, the drift tube chambers are responsible for muon detection and precise momentum measurement. This paper describes the first level of the readout electronics for these drift tube chambers, which will be located inside the muon barrel detector in the so-called minicrates (MCs) attached to the chambers. The readout boards (ROBs) are the main component of this first-level data acquisition system. They are responsible for the time digitization, relative to the Level 1 Accept (L1A) trigger, of the incoming signals from the front-end electronics, followed by data merging towards the next stages of the data acquisition system. The ROBs' architecture and functionality have been exhaustively tested, as has their ability to operate beyond the expected environmental conditions inside the CMS detector. Given the satisfactory results obtained, final production of the ROBs and their assembly in the MCs has already started. A total of 250 MCs and approximately 1500 ROBs are being produced and tested thoroughly at CIEMAT (Spain). One set of tests, the burn-in tests, will guarantee ten years of limited-maintenance operation. An overview of the system and a summary of the results of the tests performed on the ROBs and MCs are presented, including acceptance tests for the production chain as well as several validation tests that ensure proper operation of the ROBs beyond the CMS detector conditions.

17.
High Energy Physics and Grid Computing
This paper briefly reviews recent developments in computing environments for high energy physics experiments, including the growth and change in computing demand driven by international high energy physics experiments in recent years; the computing environment plans drawn up to meet these demands, using the experiments at the LHC, then under construction, as an example; the history of grid computing and the status of international research on grid technology for high energy physics; and the development of the network infrastructure required by high energy physics grids. Finally, it introduces the nascent research on high energy physics grid technology in China.

18.
It is necessary to develop PSA methodology and integrated accident management technology for low-power/shutdown operations. To develop this technology, thermal-hydraulic analysis is required to assess the trend of plant process parameters and the operator's grace time after initiation of an accident. In this study, the thermal-hydraulic behavior during a loss of shutdown cooling system accident in low-power/shutdown operations at the Korean standard nuclear power plant was analyzed using the best-estimate thermal-hydraulic analysis code MARS2.1. The effects of operator action and of the initiation of accident mitigation systems, such as safety injection and gravity feed, on mitigation of the accident during shutdown operations were also analyzed. When steam generators are unavailable or vent paths with a large cross-sectional area are open, core damage occurs earlier than when steam generators are available or the vent paths have a small cross-sectional area. If an operator takes action, the accident can be mitigated considerably; high-pressure safety injection is more effective in POS4B, and gravity feed is more effective in POS5. The results of this study can contribute to plant safety improvement because, by quantifying the time to core damage, they provide the time available for an operator to act. The results can also inform the development of operating procedures and accident management technology.

19.
To secure the reliability of the seismic design of the reactor vessel internals (RVIs) through finite element analysis, it is important to develop an accurate analysis model that represents the geometric complexity of the RVIs. However, seismic analysis of such complex models is computationally very expensive, so the overall size of the analysis model must be reduced. Here, we apply a model reduction method based on the fixed-interface component mode synthesis (CMS) method to practical RVIs to solve complex numerical problems efficiently. To verify the model reduction method, several cases of RVIs with different conditions are analyzed for static and dynamic problems. Finally, the seismic analysis is performed with the suggested reduced model using the CMS method. A time-history analysis is performed to extract the important seismic responses at specified locations, and a stress analysis is performed to verify that the RVIs satisfy the seismic design requirements. In the last part of the paper, a design modification is suggested to reduce the stress intensity at the support locations.

20.
The BaBar experiment is characterized by extremely high luminosity and a very large volume of data produced and stored, with computing requirements that increase each year. To fulfill these requirements, a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full advantage of object-oriented (OO) design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed hierarchically: the top-level system is organized into farms, farms into services, and services into subservices or code modules. It provides a powerful finite state machine framework for describing custom processing models in a simple, regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
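The finite-state-machine approach to describing a processing model can be sketched as below. The states, events and actions are illustrative only; they are not BaBar's actual processing model or its FSM description language.

```python
# Minimal finite-state-machine framework in the spirit of the control
# system above: transitions are (state, event) -> (next_state, action).
# State and event names below are hypothetical.
class FSM:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}

    def add(self, state, event, next_state, action=None):
        self.transitions[(state, event)] = (next_state, action)

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {event!r} in {self.state!r}")
        next_state, action = self.transitions[key]
        if action:
            action()          # side effect attached to the transition
        self.state = next_state
        return self.state


log = []
farm = FSM("idle")
farm.add("idle", "configure", "configured", lambda: log.append("setup"))
farm.add("configured", "start", "running", lambda: log.append("processing"))
farm.add("running", "stop", "configured")

farm.fire("configure")
farm.fire("start")
```

In a hierarchical system such as the one described, each farm, service and subservice would run an instance of such a machine, with a parent driving its children's events.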
