Similar Documents
18 similar documents retrieved (search time: 15 ms)
1.
《Ergonomics》2012,55(7):747-759
Carpal tunnel syndrome (CTS) remains one of the most commonly reported and studied work-related musculoskeletal disorders. Categorical representations of exposures have been critical in identifying associations between risk factors and CTS; however, quantifying exposure-response relationships requires continuous exposure data. Also, few interactions between risk factors, especially between risk factor categories, have been investigated. The objectives of this study were to investigate the utility of using continuous exposure data and to identify interaction effects of risk factors, both within and between risk factor categories, for predicting CTS. A cross-sectional study was performed at a fish processing facility in which 53 participants were evaluated during normal task performance. Due to task asymmetry, each hand was considered separately, providing 106 hands for analysis. Direct measurement and a questionnaire were used to quantify exposures to common occupational and personal risk factors. Stepwise logistic regression analysis was performed to identify three models for predicting CTS and to assess their predictive ability: occupational risk factors only (three-way interactions considered), personal risk factors only (two-way interactions considered), and a mixed model considering two-way interactions across risk factor categories and previously identified significant interactions. Models including only occupational or personal risk factors were moderately accurate overall (73% and 77%, respectively) but were not sensitive in differentiating between CTS cases and non-cases (39% and 33%, respectively). The mixed model was both accurate (88%) and sensitive (78%), though only one interaction effect was included. These results illustrate the importance of using continuous exposure data when differentiating between high- and low-risk job tasks, especially where exposures to occupational risk factors are similar.
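As a loose illustration of the modeling approach described above (not the study's actual variables, data, or stepwise procedure), a logistic regression with a two-way interaction across risk-factor categories might be fit as follows; all names and values are hypothetical:

```python
# Minimal sketch: logistic regression with an interaction term across
# risk-factor categories, in the spirit of the paper's mixed model.
# Variable names and data are hypothetical, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 106  # hands analyzed in the study
df = pd.DataFrame({
    "cts": rng.integers(0, 2, n),              # case / non-case
    "repetition": rng.uniform(0, 1, n),        # occupational, continuous
    "wrist_ratio": rng.normal(0.40, 0.03, n),  # personal, continuous
})

# 'repetition:wrist_ratio' adds a two-way interaction across categories.
model = smf.logit("cts ~ repetition + wrist_ratio + repetition:wrist_ratio",
                  data=df).fit(disp=False)

# Sensitivity at a 0.5 cutoff: fraction of true cases correctly flagged.
pred = (model.predict(df) >= 0.5).astype(int)
sensitivity = ((pred == 1) & (df.cts == 1)).sum() / max((df.cts == 1).sum(), 1)
print(f"sensitivity: {sensitivity:.2f}")
```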

2.

This article introduces a formally specified design of a compositional generic agent model (GAM). The agent model abstracts from specific application domains and provides a unified formal definition of a model for weak agenthood. It can be (re)used as a template or pattern for a large variety of agent types and application-domain types. The model was designed on the basis of experiences in a number of application domains. The compositional development method DESIRE was used to design the agent model GAM at a conceptual and logical level. It serves as a unified, precisely defined conceptual structure that can be refined by specialization and instantiation to a large variety of other, more specific agents. To illustrate reuse of this agent model, specialization and instantiation to model cooperative information-gathering agents is described in depth. Moreover, it is shown how GAM can be used to describe, in a unified and hence more comparable manner, a large number of agent architectures from the literature.
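DESIRE itself is a dedicated compositional modelling framework and is not reproduced here; purely as a loose illustration of the template idea (a generic skeleton refined by specialization), a sketch with hypothetical names:

```python
# Loose illustration of the "template" idea behind a generic agent model:
# a generic skeleton that more specific agents refine by specialization.
# This is not DESIRE; all names here are hypothetical.
from abc import ABC, abstractmethod

class GenericAgent(ABC):
    """Generic agent template: perceive -> reason -> act."""
    def step(self, observation):
        belief = self.update_beliefs(observation)
        return self.decide(belief)

    @abstractmethod
    def update_beliefs(self, observation): ...
    @abstractmethod
    def decide(self, belief): ...

class InfoGatheringAgent(GenericAgent):
    """Specialization: a cooperative information-gathering agent."""
    def __init__(self):
        self.known = {}

    def update_beliefs(self, observation):
        self.known.update(observation)   # merge newly gathered information
        return self.known

    def decide(self, belief):
        # Ask a peer for whatever is still unknown (toy policy).
        return {"request": [k for k, v in belief.items() if v is None]}

agent = InfoGatheringAgent()
print(agent.step({"weather": None, "price": 42}))
```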

3.
《Graphical Models》2005,67(4):347-369
This paper presents DigitalSculpture, an interactive sculpting framework founded upon iso-surfaces extracted from recursively subdivided, 3D irregular grids. Our unique implicit surface model arises from an interpolatory, volumetric subdivision scheme that is C1 continuous across the domains defined by arbitrary 3D irregular grids. We assign scalar coefficients and color to each control vertex and allow these quantities to participate in the volumetric subdivision of irregular grids. In the subdivision limit, a virtual sculpture is obtained by extracting the zero level set from the volumetric scalar field defined over the irregular grid. This novel shape geometry extends concepts from solid modeling, recursive subdivision, and implicit surfaces; facilitates many techniques for interactive sculpting; permits rapid, local evaluation of iso-surfaces; and affords level-of-detail control of the sculpted surfaces.
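A minimal sketch of the final extraction step only: pulling the zero level set out of a volumetric scalar field. The paper's field comes from interpolatory volumetric subdivision over irregular grids; here a regular grid and an analytic sphere stand in for it:

```python
# Sketch of iso-surface extraction from a volumetric scalar field.
# The paper's field arises from volumetric subdivision of an irregular
# grid; a regular grid and an analytic sphere are used here instead.
import numpy as np
from skimage import measure

x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = x**2 + y**2 + z**2 - 0.5**2   # signed field; zero set is a sphere

verts, faces, normals, values = measure.marching_cubes(field, level=0.0)
print(f"{len(verts)} vertices, {len(faces)} triangles on the iso-surface")
```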

4.
This paper deals with the problem of spatially distributed causal model-based diagnosis on interacting behavioral Petri nets (BPNs). The system to be diagnosed comprises different interacting subsystems (each modeled as a BPN), and the diagnostic system is defined as a multi-agent system in which each agent diagnoses a particular subsystem on the basis of its local model, the locally received observations, and the information exchanged with neighboring agents. The interactions between subsystems are captured by tokens that may pass from one net model to another via bordered places. The diagnostic reasoning is accomplished locally within each agent by exploiting classical Petri net analysis techniques such as reachability graphs and invariant analysis. Once local diagnoses are obtained, the agents communicate to ensure that these diagnoses are consistent and to completely recover the results that would be obtained by a centralized agent with a global view of the whole system. The paper concludes with an empirical comparison, in terms of running time, of two implementations of the Petri net analysis techniques used as distributed diagnostic reasoning schemes.
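As a rough illustration of one local analysis step the agents rely on, a brute-force reachability-graph enumeration for a toy Petri net (not a behavioral Petri net from the paper) might look like this:

```python
# Toy reachability-graph construction for a small Petri net by
# brute-force enumeration; the net itself is illustrative only.
from collections import deque

# Transitions as (consume, produce) maps over place names.
transitions = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p1": 1, "p3": 1}),
}

def fire(marking, consume, produce):
    if any(marking.get(p, 0) < n for p, n in consume.items()):
        return None                      # transition not enabled
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

def reachability(initial, bound=3):
    seen, queue = set(), deque([initial])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        for consume, produce in transitions.values():
            nxt = fire(m, consume, produce)
            # Bound token counts so the toy graph stays finite.
            if nxt and all(v <= bound for v in nxt.values()):
                queue.append(nxt)
    return seen

print(len(reachability({"p1": 1})), "reachable markings")
```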

5.
Peculiarity-oriented mining is a data mining method consisting of peculiar data identification and peculiar data analysis. The peculiarity factor and the local peculiarity factor (LPF) are important concepts for describing the peculiarity of a data point in the identification step; both can be studied at the attribute and record levels. In this paper, a new record LPF called the distance-based record LPF (D-record LPF) is proposed, defined as the sum of distances between a point and its nearest neighbors. The authors prove that the D-record LPF accurately characterizes the probability density of a continuous m-dimensional distribution. This provides a theoretical basis for some existing distance-based anomaly detection techniques and, more importantly, an effective method for describing the class-conditional probabilities in a Bayesian classifier. The result enables the D-record LPF to be applied to classification problems. A novel algorithm called the LPF-Bayes classifier and its kernelized implementation are proposed, which have some connection to the Bayesian classifier. Experimental results on several benchmark datasets demonstrate that the proposed classifiers are competitive with some excellent classifiers such as AdaBoost, support vector machines, and kernel Fisher discriminant.
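A minimal sketch of the D-record LPF itself, scoring each record by the sum of distances to its k nearest neighbors (the data and the choice of k are illustrative):

```python
# Sketch of the D-record LPF idea: score each record by the sum of
# distances to its k nearest neighbors (larger sum = more peculiar /
# lower local density). Data and k are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)),     # dense cluster
               rng.normal(6, 1, (5, 2))])      # sparse outliers

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: self is a neighbor
dist, _ = nn.kneighbors(X)
d_record_lpf = dist[:, 1:].sum(axis=1)            # drop zero self-distance

# As the paper notes, this sum tracks the inverse of the local density,
# so it can stand in for class-conditional densities in a Bayes-style rule.
print("most peculiar records:", np.argsort(d_record_lpf)[-5:])
```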

6.
The multi-valued exponential correlation associative memory model (MMECAM) is an auto-associative memory neural network with high storage capacity. Building on a detailed analysis of its strengths and weaknesses, this paper first proposes a new Gaussian auto-associative memory model (GAM) by improving the update rule of MMECAM; a simple energy function is then defined to prove theoretically that GAM is stable under both synchronous and asynchronous update modes, which guarantees that the stored patterns eventually become stable points of GAM. Second, by introducing a general similarity measure, a generalized GAM framework (G-GAMs) is proposed, of which GAM is a special case. Finally, GAM is applied to single-sample image recognition, and computer simulations confirm the robustness of the model.
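The paper's exact update rule is not reproduced here; as a loose sketch of the flavor of Gaussian-similarity recall in an auto-associative memory:

```python
# Loose sketch of Gaussian-similarity recall in an auto-associative
# memory: each stored pattern is weighted by a Gaussian of its distance
# to the probe, and the probe is iterated toward the weighted mean.
# This illustrates the flavor of the GAM update, not its exact rule.
import numpy as np

patterns = np.array([[1., 1., -1., -1.],
                     [-1., 1., 1., -1.]])     # stored prototypes
sigma = 1.0

def recall(x, iters=10):
    for _ in range(iters):
        w = np.exp(-np.sum((patterns - x) ** 2, axis=1) / (2 * sigma**2))
        x = w @ patterns / w.sum()            # move toward similar patterns
    return x

noisy = np.array([0.8, 1.2, -0.9, -1.1])      # corrupted version of pattern 0
print(np.round(recall(noisy), 3))             # converges near a stored point
```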

7.
As the complexity of an unmanned vehicle's operational environment increases, so does the need to consider the obstacle space continually; this is aided by splitting the motion planning functionality into distinct global and local layers. This paper presents a new continuous local motion planning framework in which the output-space and control-space elements of the traditional receding horizon control problem are separated into distinct layers. This separation reduces the complexity of the local motion trajectory optimisation, enabling faster design and an increased horizon length. The focus of this paper is the output-space component of this framework. Bezier polynomial functions are used to describe local motion trajectories, which are constrained to vehicle performance limits and optimised to track a global trajectory. Development and testing are carried out in simulation, targeted at a nonlinear model of a quadrotor unmanned air vehicle. The framework is used to provide situation-aware tracking of a global trajectory in the presence of static and dynamic obstacles, as well as realistic turbulence and gusts. Also demonstrated is the immediate-term decentralised deconfliction of multiple unmanned vehicles and of multiple formations of unmanned vehicles.
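As a rough sketch of the output-space layer, assuming a simple sum-of-squares tracking cost and ignoring the vehicle model and performance constraints:

```python
# Sketch of the output-space idea: a Bezier segment whose control points
# are optimized so the local trajectory tracks a global reference path.
# The vehicle model, constraints, and cost here are placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb

def bezier(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl at times t."""
    n = len(ctrl) - 1
    basis = np.array([comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)])
    return basis.T @ ctrl

t = np.linspace(0, 1, 50)
global_ref = np.c_[t * 10.0, np.sin(t * 3.0)]    # global trajectory to track

def cost(flat):
    ctrl = flat.reshape(-1, 2)
    return np.sum((bezier(ctrl, t) - global_ref) ** 2)

ctrl0 = np.zeros((5, 2)).ravel()                  # quartic segment, 5 points
res = minimize(cost, ctrl0, method="L-BFGS-B")
print("tracking SSE:", round(res.fun, 4))
```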

8.
A non-invasive approach to capture and playback (C&P) can be a very useful tool for testing applications endowed with a graphical user interface in local and/or distributed environments and, more generally, for testing applications without modifying their run-time environment. In the software lifecycle, the C&P phases are performed after application design. Since these are close to the delivery deadline, the time needed for application testing is considered a high, and frequently unacceptable, cost. In this paper, a new approach to non-invasive C&P testing techniques is proposed. It is strongly based on the object-oriented paradigm at both the hardware and software levels. In particular, a new board for image grabbing and pattern matching and a new object-oriented language for specifying tests have been defined. The main goals of this new approach are (i) the reduction of testing time by supporting the reuse of tests (coded in a specific language) at each level of abstraction, and (ii) bringing the capture phase of testing forward to the system design stage.

9.
Formal specification and verification techniques have been used successfully to detect feature interactions. We investigate whether feature-based specifications can be used for this task. Feature-based specifications are a special class of specifications that aim at modularity in open-world, feature-oriented systems. The question we address is whether modularity of specifications impairs the ability to detect feature interactions, which cut across feature boundaries. In an exploratory study on 10 feature-oriented systems, we found that the majority of feature interactions could be detected based on feature-based specifications, but some specifications had not been modularized properly and required undesirable workarounds to modularization. Based on the study, we discuss the merits and limitations of feature-based specifications, as well as open issues and perspectives. A goal underlying our work is to raise awareness of the importance and challenges of feature-based specification.

10.
Object-oriented metrics aim to exhibit the quality of source code and provide quantitative insight into it. Each metric assesses the code from a different aspect, and there is a relationship between the quality level and the risk level of source code. The objective of this paper is to examine empirically whether effective threshold values exist for source code metrics, with the aim of deriving generalized thresholds that can be used across different software systems. The relationship between metric thresholds and fault-proneness was investigated empirically using ten open-source software systems. Three types of fault-proneness were defined for the software modules: non-fault-prone, more-than-one-fault-prone, and more-than-three-fault-prone. Two independent case studies were carried out to derive two different threshold values. A single set was created by merging the ten datasets and was used as training data by the model. The learner model was created using logistic regression and the Bender method. Results revealed that some metrics have threshold effects: seven metrics gave satisfactory results in the first case study and eleven in the second. This study contributes primarily to the work of software developers and testers. Developers can see which classes or modules require revising, which in turn increases the quality of these modules and decreases their risk level. Testers can identify modules that need more testing effort and can prioritize modules according to their risk levels.
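A minimal sketch of threshold derivation in the spirit of logistic regression plus the Bender method, inverting the fitted model at an assumed acceptable risk level p0 (the "value of an acceptable risk level", VARL); the data and p0 are illustrative:

```python
# Sketch of Bender-style threshold derivation: fit a univariate logistic
# model P(fault) = 1/(1+exp(-(b0 + b1*metric))) and invert it at a base
# probability p0. Data, metric name, and p0 are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
wmc = rng.gamma(3.0, 4.0, 500)                       # a complexity metric
fault = (rng.uniform(size=500)
         < 1 / (1 + np.exp(-(0.15 * wmc - 3)))).astype(int)

model = sm.Logit(fault, sm.add_constant(wmc)).fit(disp=False)
b0, b1 = model.params

p0 = 0.05                                            # acceptable risk level
varl = (np.log(p0 / (1 - p0)) - b0) / b1             # metric threshold
print(f"threshold (VARL) at p0={p0}: {varl:.1f}")
```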

11.
The performance improvements that can be achieved by classifier selection and by integrating terrain attributes into land cover classification are investigated in the context of rock glacier detection. While exposed glacier ice can easily be mapped from multispectral remote-sensing data, the detection of rock glaciers and debris-covered glaciers is a challenge for multispectral remote sensing. Motivated by the successful use of digital terrain analysis in rock glacier distribution models, the predictive performance of a combination of terrain attributes derived from SRTM (Shuttle Radar Topography Mission) digital elevation models and Landsat ETM+ data for detecting rock glaciers in the San Juan Mountains, Colorado, USA, is assessed. Eleven statistical and machine-learning techniques are compared in a benchmarking exercise, including logistic regression, generalized additive models (GAM), linear discriminant techniques, the support vector machine, and bootstrap-aggregated tree-based classifiers such as random forests. Penalized linear discriminant analysis (PLDA) yields mapping results that are significantly better than all other classifiers, achieving a median false-positive rate (mFPR, estimated by cross-validation) of 8.2% at a sensitivity of 70%, i.e. when 70% of all true rock glacier points are detected. The GAM and standard linear discriminant analysis were second best (mFPR: 8.8%), followed by polyclass. For comparison, the predictive performance of the best three techniques was also evaluated using (1) only terrain attributes as predictors (mFPR: 13.1-14.5% for the best three techniques) and (2) only Landsat ETM+ data (mFPR: 19.4-22.7%), yielding significantly higher mFPR estimates at 70% sensitivity. The mFPR of the worst three classifiers was about one-quarter higher than that of the best three, and the combination of terrain attributes and multispectral data reduced the mFPR by more than one-half compared to remote sensing alone. These results highlight the importance of combining remote-sensing and terrain data for mapping rock glaciers and other debris-covered ice, and of choosing the optimal classifier based on unbiased error estimators. The proposed benchmarking methodology is more generally suitable for comparing the utility of remote-sensing algorithms and sensors.
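A minimal sketch of the benchmark's evaluation criterion: the false-positive rate at a fixed 70% sensitivity, read off a cross-validated ROC curve (synthetic data and a generic classifier stand in for the study's):

```python
# Sketch of the evaluation criterion: the false-positive rate of a
# classifier at a fixed 70% sensitivity, from a cross-validated ROC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_curve

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=5, method="predict_proba")[:, 1]

fpr, tpr, _ = roc_curve(y, scores)
fpr_at_70 = np.interp(0.70, tpr, fpr)   # FPR where sensitivity reaches 70%
print(f"FPR at 70% sensitivity: {fpr_at_70:.3f}")
```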

12.
Damage diagnosis based on high-order local modes of structures
In structural vibration testing and damage diagnosis, low-order modal information is relatively easy to obtain, but it mainly reflects the global behavior of a structure and is insensitive to local damage. This paper studies the high-order modal characteristics of frame structures and uses high-order modes, which capture local structural features, to diagnose damage in frames. The study combines theoretical modal analysis with experimental modal analysis. Theoretical modal analysis shows that frame structures exhibit regions of densely spaced modes and that their high-order modes have local characteristics. A local excitation method was used to excite a reinforced concrete frame model, and high-order local modal information was obtained through experimental modal analysis. The results show that the maximum-energy high-order modes can identify stiffness changes in the frame columns.

13.
This paper deals with lateral-torsional buckling of beams that have already buckled locally before the occurrence of overall buckling. Owing to the weakening effects of local buckling, the stiffness of the beam is reduced; as a result, overall lateral buckling takes place at a lower load than the member would carry in the absence of local buckling. The effective width concept is used in this investigation to account for the post-buckling strength of the buckled compression plate elements of the beam section. A finite element formulation in conjunction with the effective width concept is presented. Because of the nonlinearity introduced by local buckling, an iterative procedure is necessary, and search techniques are used to find the load factor. Combined with an analysis of the nonlinear bending moment distribution, the method can be used to analyze the lateral stability of locally buckled continuous structures; in this case, both the elastic stiffness matrix and the geometric stiffness matrix must be revised at each load level. A computer program has been prepared for an IBM 370/165 computer.
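The paper's finite element machinery is not reproduced here; as a toy sketch of the iterative effective-width idea only, using Winter's classic formula as a stand-in for the paper's own expressions:

```python
# Toy sketch of the effective-width idea: past local buckling, only an
# effective portion of the compression plate is counted, so section
# properties drop and must be recomputed iteratively with the stress
# level. Winter's classic formula is a stand-in; the paper's own
# expressions and finite element formulation are not reproduced.
import math

def effective_width_ratio(slenderness):
    """Winter: be/b = (1 - 0.22/lam)/lam for lam > 0.673, else 1."""
    if slenderness <= 0.673:
        return 1.0
    return (1.0 - 0.22 / slenderness) / slenderness

# Iterate: stress level -> plate slenderness -> effective width -> stress.
b, t, fy = 200.0, 2.0, 350.0          # plate width, thickness (mm), MPa
stress = fy
for _ in range(20):
    lam = (b / t) / (28.4 * math.sqrt(235.0 / stress) * math.sqrt(4.0))
    be = effective_width_ratio(lam) * b
    new_stress = fy * be / b          # crude equilibrium update (toy)
    if abs(new_stress - stress) < 1e-6:
        break
    stress = new_stress
print(f"effective width: {be:.1f} mm of {b:.0f} mm")
```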

14.
In this paper, we describe how high-spatial-resolution (10 m) multisensor remote sensing techniques can be used to study the surface roughness characteristics of large-scale frontal boundaries (in this case associated with the Rhine Plume). The instrumentation employed in the research consisted of a Daedalus AADS 1268 Airborne Thematic Mapper (ATM) operated by the UK Natural Environment Research Council, the HELISCAT helicopter-borne multifrequency microwave scatterometer of the University of Hamburg, and research vessels (R.V.s) from the University of Wales and the Dutch Rijkswaterstaat. The data we present were gathered on 24 April 1991, when calm wind conditions developed within the test area. A sequence of thermal infrared images gathered by the ATM provides a record of the motion of a frontal boundary through the experimental region, which is then used to identify the frontal signature in the HELISCAT data. ATM sunglint images show that the front is characterized by a zone of reduced surface roughness, some 75 m in width, detected on the 'upstream' side of the front (as defined relative to the tidal flow direction), where surface current convergence can be expected. Radar backscatter levels at X and C bands are reduced by 10 dB in this region, but the signature weakens with increasing radar wavelength and is rarely detected at L band. On crossing the front in the downstream direction, radar backscatter levels are rapidly restored. The available evidence indicates that the reduced backscatter signature is caused by a surface slick formed at the frontal interface rather than by short gravity wave damping from shear in local surface currents.

15.
A C-V model with a shift factor
In variational level-set methods, one advantage of the C-V (Chan-Vese) model is that it can extract image boundaries that are not defined by gradients. When extracting such boundaries, however, the model considers only the mean intensity of each image region and ignores local image information; consequently, although the C-V model can segment images with gradual (ramp-like) boundaries, segmentation errors remain. To address this problem, a shift factor is introduced into the C-V model. The shift factor is defined as a function of the local convexity and concavity of the image; it adjusts the height of the model's zero level set so that the solution surface moves closer to, or coincides with, the surface on which the object lies, thereby eliminating the segmentation error. The model is given in partial differential form, and experiments verify its effectiveness.
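For reference, a minimal sketch of the plain C-V data-term update that the shift factor modifies; the shift-factor term from the paper and the curvature/regularization term are both omitted here:

```python
# Minimal sketch of the plain Chan-Vese data-term update: the level set
# phi evolves with the two region means c1, c2. The paper's shift-factor
# term and the curvature/regularization term are omitted for brevity.
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
img += rng.normal(0, 0.1, img.shape)               # noisy two-region image

x, y = np.mgrid[:64, :64]
phi = 20.0 - np.hypot(x - 32, y - 32)              # init: circular level set

dt, lam1, lam2 = 0.5, 1.0, 1.0
for _ in range(200):
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    # Gradient descent on the C-V fitting energy (data terms only).
    phi += dt * (-lam1 * (img - c1)**2 + lam2 * (img - c2)**2)

print("region means:", round(c1, 2), round(c2, 2))
```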

16.
An image encryption algorithm based on a cross-coupled "new insect population model" (a logistic-map variant) is proposed. The cross-coupled chaotic system generates two chaotic sequences simultaneously: one is used to scramble the pixel positions of the image, while the other alters the pixel gray values for encryption. This algorithm, which performs both scrambling and encryption with a single coupled chaotic system, differs from the current mainstream approach of scrambling with one chaotic system and encrypting with another, and thus offers an improved strategy for chaotic image encryption. Simulation results and security analysis show that the algorithm enlarges the key space and achieves high security with low time complexity.
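A loose structural sketch of the scheme, using a plain cross-coupled logistic pair in place of the paper's exact map: one sequence permutes pixel positions, the other is quantized to bytes and XORed with the gray values:

```python
# Structural sketch only: a cross-coupled logistic pair yields two
# sequences; sequence A permutes pixels, sequence B is quantized and
# XORed with gray values. Not the paper's exact map or parameters.
import numpy as np

def chaotic_pair(n, x=0.31, y=0.62, r=3.99):
    a, b = np.empty(n), np.empty(n)
    for i in range(n):
        # Cross coupling: each map is driven by the other's last state.
        x, y = r * y * (1 - y), r * x * (1 - x)
        a[i], b[i] = x, y
    return a, b

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 "image"
n = img.size
a, b = chaotic_pair(n)

perm = np.argsort(a)                                 # scrambling order
keystream = (b * 255).astype(np.uint8)               # diffusion bytes

cipher = (img.ravel()[perm] ^ keystream).reshape(img.shape)

# Decryption: undo the XOR, then invert the permutation.
inv = np.empty_like(perm)
inv[perm] = np.arange(n)
plain = (cipher.ravel() ^ keystream)[inv].reshape(img.shape)
assert np.array_equal(plain, img)
print(cipher)
```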

17.
侯东亮  李铁克 《计算机应用》2012,32(12):3553-3557
For the steelmaking-continuous casting rescheduling problem caused by converter tapping delays, a dynamic constraint satisfaction model is built that minimizes the deviations in start times, processing times, and machine assignments, as well as the deviation in waiting times between adjacent machines for the same heat, and a rescheduling algorithm based on constraint satisfaction and cast-break repair is proposed. The algorithm assigns values to variables in turn through variable- and value-selection rules, and uses conflict identification and resolution rules to detect and resolve conflicts arising during assignment; in the resulting quasi-feasible schedule, a cast-break repair heuristic repairs interruptions of continuous casting on the casters. Simulation experiments with three groups of uniformly distributed random delay times yielded objective values of 0.15, 0.28, and 0.51, respectively. The results show that the magnitude of the delay affects the objective value and that the proposed algorithm satisfies the real-time and stability requirements of production to the greatest possible extent.

18.
Research has failed to establish a conclusive link between levels of user involvement and information system project success. Communication and control theories indicate that the quality of interactions between users and information systems (IS) personnel may improve coordination in a project and lead to greater success. A model is developed that directly relates management control to interaction quality and project success, with interaction quality as a potential intermediary. These variables bear a more distinct relationship to success when interactions are more structurally defined and controlled. A survey of 196 IS professionals provides evidence that management control techniques improve the quality of user-IS personnel interactions and eventual project success. These formal structures provide guidelines for managers in controlling the critical relations between users and IS personnel.
