Similar Literature
20 similar documents found (search time: 93 ms)
1.
We have developed a method of collecting and arranging clinical data that makes the medical record more useful in patient care and research. The design is based on two principles: that detailed clinical findings should be recorded independently of any diagnostic interpretation, and that time should be integrated as a dimension of the medical record. Analysis of the principal components of the medical record as we have organized it allows identification of clinical entities on the basis of synchronous or sequential features and facilitates precise tracking of symptoms, evaluation of therapeutic effects, comparison of treatments, identification of patients at risk of recurrence, transmission of observations from physician to physician, and analysis and reinterpretation of the observations recorded.
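The two design principles (diagnosis-independent findings, and time as an explicit dimension of the record) can be sketched as a simple data structure; all class and field names below are illustrative, not from the paper:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    when: date          # time as an explicit dimension of the record
    attribute: str      # e.g. "systolic_bp", "joint_swelling"
    value: object       # the raw clinical finding, uninterpreted

@dataclass
class PatientRecord:
    patient_id: str
    observations: list = field(default_factory=list)

    def add(self, when, attribute, value):
        # findings are stored without any diagnostic interpretation
        self.observations.append(Observation(when, attribute, value))

    def track(self, attribute):
        """Chronological course of one finding, e.g. to evaluate therapy."""
        return sorted(
            ((o.when, o.value) for o in self.observations
             if o.attribute == attribute),
            key=lambda pair: pair[0],
        )

record = PatientRecord("p001")
record.add(date(2023, 3, 5), "systolic_bp", 140)
record.add(date(2023, 1, 5), "systolic_bp", 160)
print(record.track("systolic_bp"))
```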

2.
Design and Implementation of a Dental Electronic Medical Record System   Total citations: 6 (self: 1, other: 6)
黄穗 (Huang Sui), 刘剑 (Liu Jian). 《计算机工程》(Computer Engineering), 2004, 30(4): 167-169
Through the design and development of a practical dental electronic medical record system, this paper analyses the requirements that dental clinical care, research, and administration place on electronic medical records, and establishes a set of practical, standardised record templates. The system stores the record templates and related patient data uniformly in an MS SQL Server database, and uses Delphi programming to implement a pipelined workflow, offering a quick and concise user interface, rich graphical and textual data display, and detailed illustrated report printing.

3.
We explore in this paper a novel sampling algorithm, referred to as algorithm PAS (standing for proportion approximation sampling), to generate a high-quality online sample with the desired sample rate. The sampling quality refers to the consistency between the population proportion and the sample proportion of each categorical value in the database. Note that the state-of-the-art sampling algorithm to preserve the sampling quality has to examine the population proportion of each categorical value in a pilot sample a priori and is thus not applicable to incremental mining applications. To remedy this, algorithm PAS adaptively determines the inclusion probability of each incoming tuple in such a way that the sampling quality is sequentially preserved while also keeping the sample rate close to the user-specified one. Importantly, PAS not only guarantees the proportion consistency of each categorical value but also excellently preserves the proportion consistency of multivariate statistics, which will be significantly beneficial to various data mining applications. For better execution efficiency, we further devise an algorithm, called algorithm EQAS (standing for efficient quality-aware sampling), which integrates PAS and random sampling to provide the flexibility of striking a compromise between the sampling quality and the sampling efficiency. As validated in experimental results on real and synthetic data, algorithm PAS can stably provide high-quality samples with corresponding computational overhead, whereas algorithm EQAS can flexibly generate samples with the desired balance between sampling quality and sampling efficiency.
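The core idea, adapting each incoming tuple's inclusion probability so that sample proportions track population proportions, can be illustrated with a simplified sketch. This is not the published PAS algorithm, only a minimal stand-in that follows the same feedback principle:

```python
import random

def adaptive_sample(stream, rate, seed=0):
    """Proportion-aware online sampling (illustrative only, not PAS):
    raise the inclusion probability of a categorical value when it is
    under-represented in the sample so far, lower it when it is
    over-represented, while steering the overall size toward `rate`."""
    rng = random.Random(seed)
    seen, kept = {}, {}
    n_seen = n_kept = 0
    sample = []
    for value in stream:
        n_seen += 1
        seen[value] = seen.get(value, 0) + 1
        pop_prop = seen[value] / n_seen
        samp_prop = kept.get(value, 0) / n_kept if n_kept else pop_prop
        # feedback: boost inclusion probability for under-represented values
        p = rate * pop_prop / samp_prop if samp_prop > 0 else 1.0
        if rng.random() < min(1.0, p):
            sample.append(value)
            kept[value] = kept.get(value, 0) + 1
            n_kept += 1
    return sample
```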

4.
Design of an XML-Based Structured Electronic Medical Record System   Total citations: 1 (self: 0, other: 1)
Two approaches to structured data capture are described: natural language processing (NLP) and structured data entry (SDE). Based on these two approaches, a prototype structured electronic patient record (EPR) system was designed. XML technology is used to describe and implement the knowledge base; the XML streaming facilities of Microsoft .NET handle the input, storage, and presentation of record data; and XSL is used to browse records on the Web. The design shows that combining free-text entry with structured entry via XML is a feasible technical approach, and it offers a new avenue for electronic medical record research.
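A minimal sketch of what such a structured record might look like, built with Python's standard `xml.etree` module. The element and attribute names are invented for illustration and are not the paper's actual template:

```python
import xml.etree.ElementTree as ET

# Illustrative structured EPR entry: a coded SDE element alongside a
# free-text note that could later be mined with NLP.
record = ET.Element("medicalRecord", patientId="p001")
visit = ET.SubElement(record, "visit", date="2004-05-01")
# SDE part: structured data entry as a discrete coded element
ET.SubElement(visit, "diagnosis", code="K02.1").text = "Dental caries"
# free-text part: narrative input kept side by side with structured data
ET.SubElement(visit, "note").text = "Patient reports sensitivity to cold."

xml_text = ET.tostring(record, encoding="unicode")
print(xml_text)
```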

5.
Search, retrieval and storage of video content over the Internet and online repositories can be efficiently improved using compact summarizations of this content. Robust and perceptual fingerprinting codes, extracted from local video features, are astutely used for identification and authentication purposes. Unlike existing fingerprinting schemes, this paper proposes a robust and perceptual fingerprinting solution that serves both video content identification and authentication. While content identification is served by the robustness of the proposed fingerprinting codes to content alterations and geometric attacks, their sensitivity to malicious attacks makes them fit for forgery detection and authentication. This dual usage is facilitated by a new concept of sequence normalization based on the circular shift properties of the discrete cosine and sine transforms (DCT and DST). Sequences of local features are normalized by estimating the circular shift required to align each of these sequences to a reference sequence. The fingerprinting codes, consisting of normalizing shifts, are properly modeled using information-theoretic concepts. Security, robustness and sensitivity analysis of the proposed scheme is provided in terms of the security of the secret keys used during the proposed normalization stage. The computational efficiency of the proposed scheme makes it appropriate for large scale and online deployment. Finally, the robustness (identification-based) and sensitivity (authentication-based) of the proposed fingerprinting codes to content alterations and geometric attacks is evaluated over a large set of video sequences where they outperform existing DCT-based codes in terms of robustness, discriminability and sensitivity to moderate and large size intentional alterations.
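The normalization step, estimating the circular shift that aligns a feature sequence to a reference, can be sketched with FFT-based circular cross-correlation. The paper derives its shifts from DCT/DST properties; the FFT here is only a simple stand-in illustrating the alignment idea:

```python
import numpy as np

def circular_shift_estimate(ref, seq):
    """Estimate the circular shift k that best aligns seq to ref,
    i.e. np.roll(seq, k) ~= ref, via circular cross-correlation
    computed in the frequency domain."""
    ref = np.asarray(ref, dtype=float)
    seq = np.asarray(seq, dtype=float)
    # circular cross-correlation: c[m] = sum_n ref[n] * seq[(n - m) % N]
    corr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(seq))).real
    return int(np.argmax(corr))
```

The estimated shift itself (rather than the raw features) then serves as the compact fingerprinting code in the scheme described above.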

6.
The segmentation of brain tumor plays an important role in diagnosis, treatment planning, and surgical simulation. The precise segmentation of brain tumor can help clinicians obtain its location, size, and shape information. We propose a fully automatic brain tumor segmentation method based on kernel sparse coding. It is validated with 3D multiple-modality magnetic resonance imaging (MRI). In this method, MRI images are pre-processed first to reduce the noise, and then kernel dictionary learning is used to extract the nonlinear features to construct five adaptive dictionaries for healthy tissues, necrosis, edema, non-enhancing tumor, and enhancing tumor tissues. Sparse coding is performed on the feature vectors extracted from the original MRI images, each of which is an m×m×m patch around the voxel. A kernel-clustering algorithm based on dictionary learning is developed to code the voxels. In the end, morphological filtering is used to fill in the area among multiple connected components to improve the segmentation quality. To assess the segmentation performance, the segmentation results are uploaded to the online evaluation system where the evaluation metrics dice score, positive predictive value (PPV), sensitivity, and kappa are used. The results demonstrate that the proposed method has good performance on the complete tumor region (dice: 0.83; PPV: 0.84; sensitivity: 0.82), while slightly worse performance on the tumor core (dice: 0.69; PPV: 0.76; sensitivity: 0.80) and enhancing tumor (dice: 0.58; PPV: 0.60; sensitivity: 0.65). It is competitive with the other groups in the brain tumor segmentation challenge. Therefore, it is a potential method in differentiation of healthy and pathological tissues.
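The reported evaluation metrics can be computed from binary segmentation masks as follows (a straightforward sketch, not the evaluation system's own code):

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice score, positive predictive value and sensitivity for a
    predicted binary mask against a ground-truth mask."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    dice = 2 * tp / (pred.sum() + truth.sum())
    ppv = tp / pred.sum()           # tp / (tp + fp)
    sensitivity = tp / truth.sum()  # tp / (tp + fn)
    return dice, ppv, sensitivity
```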

7.
The medical record is maintained for the express purpose of enabling the physician to deliver better care to the patient. But its usefulness in achieving that purpose has often been hampered by its inflation with voluminous, unorganized notes and test results. The medical profession has begun to recognize this, and work has recently been done in reviewing the structure of the medical record with hopes of making improvements [1]. Based on what has been learned so far, it appears that the most effective record is one containing a broad data base including many investigative procedures, while at the same time focusing the physician's attention on problems or diagnoses derived from the data base [2]. These problems express only the essential information derived from the data base: that information which forms the basis for the treatment regimen.

8.
The objective is to develop a non-invasive automatic method for detection of epileptic seizures with motor manifestations. Ten healthy subjects who simulated seizures and one patient participated in the study. Surface electromyography (sEMG) and motion sensor features were extracted as energy measures of reconstructed sub-bands from the discrete wavelet transformation (DWT) and the wavelet packet transformation (WPT). Based on the extracted features all data segments were classified using a support vector machine (SVM) algorithm as simulated seizure or normal activity. A case study of the seizure from the patient showed that the simulated seizures were visually similar to the epileptic one. The multi-modal intelligent seizure acquisition (MISA) system showed high sensitivity, short detection latency and low false detection rate. The results showed superiority of the multi-modal detection system compared to the uni-modal one. The presented system has a promising potential for seizure detection based on multi-modal data.
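The feature-extraction step, sub-band energies from a discrete wavelet decomposition, can be sketched with a hand-rolled multi-level Haar transform. This is a simplified stand-in for the DWT/WPT features in the abstract; the SVM classifier is omitted:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) sub-bands."""
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

def band_energies(signal, levels=3):
    """Energy of each detail sub-band plus the final approximation;
    these per-band energies form the feature vector that a classifier
    (an SVM in the paper) would consume."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(float(np.sum(detail ** 2)))
    feats.append(float(np.sum(approx ** 2)))
    return feats
```

Because the Haar transform is orthonormal, the band energies sum to the energy of the input signal, which makes the features easy to sanity-check.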

9.
It is important for a PACS to have access to the patient data, as well as to the images themselves, for the purpose of sophisticated image archiving, retrieving, viewing and interpretation. There are many kinds of patient data concerning image examinations (e.g., patient name, ID, age, examination date and time, examined regions, methods, findings on images, diagnoses or diagnostic impressions, etc.). Some of them are acquired from image examination apparatus, some are supplied by diagnostic radiologists, while some need to be retrieved from the radiology and hospital information systems. To facilitate this data exchange, a PACS-RIS-HIS coupling is required. The author has constructed at Tokyo University Hospital a small PACS called TRACS, which adopts one of the possible PACS-RIS-HIS coupling configurations.

10.
This paper revisits a problem that was identified by Kramer and Magee: placing a system in a consistent state before and after runtime changes. We show that their notion of quiescence as a necessary and sufficient condition for safe runtime changes is too strict and results in a significant disruption in the application being updated. In this paper, we introduce a weaker condition: tranquillity. We show that tranquillity is easier to obtain and less disruptive for the running application but still a sufficient condition to ensure application consistency. We present an implementation of our approach on a component middleware platform and experimentally verify the validity and practical applicability of our approach using data retrieved from a case study.

11.
崔德友 (Cui Deyou). 《计算机仿真》(Computer Simulation), 2012, 29(3): 303-306
This paper studies banknote recognition with the aim of improving recognition accuracy. When banknotes in circulation are soiled or worn, the accuracy of traditional template-matching recognition algorithms degrades. To address this, a recognition algorithm based on a Gaussian model is proposed. The image to be examined is first pre-processed with brightness compensation, edge detection, and tilt correction. The image is then divided into a number of rectangular sub-regions, and the mean gray level of each sub-region is taken as an initial image feature. Prior probabilities of the initial features are computed and the posterior probabilities are corrected, compensating for the feature values of soiled regions; finally, a Gaussian model is built to complete recognition, overcoming the inability of traditional methods to recognize soiled banknotes accurately. Experiments show that the improved method corrects the soiled parts of banknotes and recognizes them accurately, achieving satisfactory results.
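The initial feature extraction and the Gaussian scoring step might look like the following sketch. The grid size and the diagonal-covariance model are illustrative simplifications, and the posterior-probability correction for soiled regions is omitted:

```python
import numpy as np

def block_mean_features(image, grid=(4, 4)):
    """Divide a grayscale image into a grid of rectangular sub-regions
    and take each region's mean gray level as an initial feature."""
    img = np.asarray(image, dtype=float)
    rows = np.array_split(img, grid[0], axis=0)
    return np.array([block.mean()
                     for row in rows
                     for block in np.array_split(row, grid[1], axis=1)])

def gaussian_log_likelihood(feat, mean, var):
    """Log-likelihood of a feature vector under a per-denomination
    Gaussian model with diagonal covariance; the denomination with the
    highest likelihood would be the recognition result."""
    feat, mean, var = (np.asarray(a, dtype=float) for a in (feat, mean, var))
    return float(-0.5 * np.sum(np.log(2 * np.pi * var)
                               + (feat - mean) ** 2 / var))
```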

12.
The symptomatic cure observed in the treatment of Alzheimer's disease (AD) by FDA approved drugs could possibly be due to their specificity against the active site of acetylcholinesterase (AChE) and not by targeting its pathogenicity. The AD pathogenicity involved in AChE protein is mainly due to amyloid beta peptide aggregation, which is triggered specifically by the peripheral anionic site (PAS) of AChE. In the present study, a workflow has been developed for the identification and prioritization of potential compounds that could interact not only with the catalytic site but also with the PAS of AChE. To elucidate the essential structural elements of such inhibitors, pharmacophore models were constructed using PHASE, based on a set of fifteen best known AChE inhibitors. All these models on validation were further restricted to the best seven. These were transferred to the PHASE database screening platform for screening 89,425 molecules deposited at the "ZINC natural product database". Novel lead molecules retrieved were subsequently subjected to molecular docking and ADME profiling. A set of 12 compounds was identified with high pharmacophore fit values and good predicted biological activity scores. These compounds showed higher affinity not only for catalytic residues, but also for Trp86 and Trp286, which are important at the PAS of AChE. The knowledge gained from this study could lead to the discovery of potential AChE inhibitors that are highly specific for AD treatment, as they are bivalent lead molecules endowed with dual binding ability for both the catalytic site and the PAS of AChE.

13.
Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.

14.
A Lossless Information Hiding Method for Images Based on Linear Prediction   Total citations: 4 (self: 0, other: 4)
This paper improves the Thodi algorithm (Thodi D M, Rodríguez J J. Reversible watermarking by prediction-error expansion. Proceedings of the 6th IEEE Southwest Symposium on Image Analysis and Interpretation, Lake Tahoe, Nevada, 2004: 21-25) in two respects: the prediction-error algorithm and the embedding/extraction algorithm. It achieves covert transmission of large volumes of data within images, and the original image can be restored losslessly after the hidden data are extracted. The method has been applied to hiding patient information in medical images, so that the patient information can still be extracted reasonably well even when an image carrying hidden data suffers noise or data loss during transmission; it can also be used for covert transmission of remote-sensing and military images.
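The underlying prediction-error expansion mechanism can be sketched on a 1-D pixel sequence. This toy version uses the previous pixel as predictor and ignores overflow handling, so it is not the improved algorithm of the paper, only the base technique being improved:

```python
def pee_embed(pixels, bits):
    """Prediction-error expansion: each pixel is replaced by
    predictor + 2*error + payload_bit, which is exactly invertible."""
    marked = [pixels[0]]                      # first pixel carries no payload
    for i in range(1, len(pixels)):
        pred = pixels[i - 1]                  # toy predictor: previous pixel
        err = pixels[i] - pred
        bit = bits[i - 1] if i - 1 < len(bits) else 0
        marked.append(pred + 2 * err + bit)   # expanded error + payload bit
    return marked

def pee_extract(marked, n_bits):
    """Recover both the payload bits and the original pixels losslessly."""
    restored, bits = [marked[0]], []
    for i in range(1, len(marked)):
        pred = restored[i - 1]                # predictor uses restored pixels
        expanded = marked[i] - pred
        bits.append(expanded % 2)             # payload bit is the parity
        restored.append(pred + expanded // 2) # original error is the half
    return bits[:n_bits], restored
```

The lossless property follows because extraction walks the sequence in the same order as embedding, so each predictor value is already restored before it is needed.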

15.
In Norway, a national initiative is currently aiming at standardising the electronic patient record (EPR) content based on an openEHR framework. The openEHR architecture offers users the capability to conduct standardisation and structuration of the EPR content in a distributed manner, through an internet-based tool. Systems based on this architecture are expected to ensure universal (also international) interoperability among all forms of electronic data. A crude estimate is that it is necessary to define somewhere between 1000 and 2000 standardised elements or clinical concepts (so-called archetypes) to constitute a functioning EPR system. Altogether, the collection of defined archetypes constitutes the backbone of an interoperable EPR system leaning on the openEHR architecture. We conceptualise the agreed-upon archetypes as a large-scale information infrastructure, and the process of developing the archetypes as an infrastructuring effort. With this as a backdrop, we focus on the following research question: what are the challenges of infrastructuring in a large-scale user-driven standardisation process in healthcare? This question is operationalised into three sub-questions. First, how are the openEHR-based archetypes standardised in practice? Second, what is the role of daily clinical practice and existing systems in the process of developing archetypes? Third, how may related, but supposedly independent, infrastructuring projects shape each other's progress? We contribute with insight into how power relations and politics shape the infrastructuring process. Empirically, we have studied the formative process of establishing a national information infrastructure based on the openEHR approach in the period 2012–2016 in Norway.

16.
Garbled file and directory names on a USB flash drive are mostly caused by virus damage or by improper insertion and removal of the drive. The best way to repair garbled files or directories is manual repair with WinHex. Because of the system's automatic Chkdsk operation, data recovery with FinalData or EasyRecovery can retrieve only part of the data on the drive.

17.
Objective: Consistency evaluation of multi-scale spatial relations is an important step in conflict detection and data matching for multi-scale spatial data. Existing methods concentrate on similarity computation between spatial relations at the same or similar map scales, give little consideration to consistency evaluation across large scale spans where dimensional change occurs, and the qualitative conceptual-distance measures are ill-suited to multi-scale spatial data with dimensional differences. To address these problems, a generalized consistency measure for multi-scale spatial relations that accounts for dimensional reduction is proposed. Method: First, the concept of corresponding objects is introduced and their characteristics at multiple scales are analyzed. Considering the influence of dimensional change on spatial relations, existing spatial-relation measures are combined and extended into generalized similarity measures for topological, directional, and distance relations. A proximity graph of corresponding objects is then built for the small-scale scene; to reduce the cost of consistency computation, this graph is simplified into separate proximity graphs for each relation type according to its characteristics. Finally, the consistency of the spatial-relation representation across scales is judged by computing the similarity value of each relation type in turn and then their joint similarity value. Results: Quantitative similarity computations on 1:10,000 base geographic data and 1:50,000 derived data, compared with the existing conceptual-distance method, verify that the proposed method measures the consistency of spatial relations across large scale spans more precisely. Conclusion: The evaluation method is broadly applicable and can assist map generalization, multi-scale spatial data matching, and multi-scale spatial database construction.

18.
Moonen et al. (1989a) presented an SVD-based identification scheme for computing state-space models for multivariable linear time-invariant systems. In the present paper, this identification procedure is reformulated making use of the quotient singular value decomposition (QSVD). Here the input-output error covariance matrix can be taken into account explicitly, thus extending the applicability of the identification scheme to the case where the input and output data are corrupted by coloured noise. It turns out that in practice, due to the use of various pre-filtering techniques (anti-aliasing, etc.), this latter case is most often encountered. The extended identification scheme explicitly compensates for the filter characteristics, and the consistency of the identification results follows from the consistency results for the QSVD. The usefulness of this generalization is demonstrated. The development is largely inspired by recent progress in total least-squares solution techniques (Van Huffel 1989) for the identification of static linear relations. The present identification scheme can therefore be viewed as the analogous counterpart for identifying dynamic linear relations.

19.
Based on an analysis of the RTF format, a new control was designed by combining Visual C++ controls with structured file design. It can read, write, and store RTF documents containing OLE objects, and, through the package-object data type, it can access an Access database holding RTF file content, achieving good integration of RTF documents with Access databases.

20.
Hyperglycaemia in critically ill patients increases the risk of further complications and mortality. This paper introduces a model capable of capturing the essential glucose and insulin kinetics in patients from retrospective data gathered in an intensive care unit (ICU). The model uses two time-varying patient specific parameters for glucose effectiveness and insulin sensitivity. The model is mathematically reformulated in terms of integrals to enable a novel method for identification of patient specific parameters. The method was tested on long-term blood glucose recordings from 17 ICU patients, producing 4% average error, which is within the sensor error. One-hour forward predictions of blood glucose data proved acceptable with an error of 2-11%. All identified parameter values were within reported physiological ranges. The parameter identification method is more accurate and significantly faster computationally than commonly used non-linear, non-convex methods. These results verify the model's ability to capture long-term observed glucose-insulin dynamics in hyperglycaemic ICU patients, as well as the fitting method developed. Applications of the model and parameter identification method for automated control of blood glucose and medical decision support are discussed.
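The integral reformulation idea, turning a differential parameter fit into a linear least-squares problem, can be illustrated on a one-parameter decay model (a deliberate caricature of the paper's two-parameter glucose model):

```python
import numpy as np

def identify_decay_rate(t, g):
    """Integral-based identification of the parameter p in dG/dt = -p*G.
    Integrating both sides gives  G(t) - G(0) = -p * integral of G,
    which is linear in p and solved in closed form by least squares,
    avoiding nonlinear, non-convex optimisation entirely."""
    t = np.asarray(t, dtype=float)
    g = np.asarray(g, dtype=float)
    # cumulative trapezoidal integral of G from t[0] to each t[i]
    integral = np.concatenate(([0.0],
                               np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(t))))
    lhs = g - g[0]
    # least-squares solution of lhs = -p * integral
    return float(-np.dot(integral, lhs) / np.dot(integral, integral))
```

Because the fit is a single dot-product ratio, it is both fast and insensitive to the starting guess, which is the computational advantage the abstract claims for the integral formulation.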


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
