Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
Vital-sign data collected by the many sensors used in medical IoT and mobile health applications, together with the various other health and medical data, are semantically heterogeneous, which makes it difficult to fuse data from intelligent medical IoT devices. To address this problem, a semantic disambiguation method based on linked open data is studied. First, the device data are modeled as ontologies, forming local ontologies; a graph-matching algorithm then aligns the concepts of the local ontologies with open linked medical data, indirectly eliminating the semantic heterogeneity between data from different sources. Finally, in a data-fusion experiment with a fitness band and a body-weight scale, heterogeneous concepts such as blood pressure and body weight are judged to be semantically related by matching them against the linked open data source. The experimental results show that linking to open data sources enables semantic extension of the local ontologies and, in turn, data fusion across heterogeneous medical-IoT devices.
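The alignment step described above can be pictured with a minimal sketch: local device-ontology concepts are compared against labels from an open medical linked-data source and kept when they are similar enough. The concept names, URIs and similarity threshold below are illustrative assumptions, not the paper's actual data or graph-matching algorithm.

```python
# Minimal sketch of aligning local device-ontology concepts with labels from
# an open linked-data source by string similarity. All names are hypothetical.
from difflib import SequenceMatcher

# Hypothetical local concepts extracted from a fitness band and a scale.
local_concepts = ["BloodPressure", "BodyWeight", "HeartRate"]

# Hypothetical labels harvested from an open medical linked-data source.
open_data_labels = {
    "blood pressure": "http://example.org/lod/BloodPressure",
    "body weight":    "http://example.org/lod/BodyWeight",
    "body mass":      "http://example.org/lod/BodyMass",
}

def normalize(name: str) -> str:
    """Split CamelCase into lower-case words so labels become comparable."""
    out = []
    for ch in name:
        if ch.isupper() and out:
            out.append(" ")
        out.append(ch.lower())
    return "".join(out)

def align(concept: str, threshold: float = 0.8):
    """Return open-data URIs whose labels are similar enough to the concept."""
    key = normalize(concept)
    return [(label, uri)
            for label, uri in open_data_labels.items()
            if SequenceMatcher(None, key, label).ratio() >= threshold]

for c in local_concepts:
    print(c, "->", align(c))
```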

3.
钟将  宋娟 《计算机工程》2011,37(14):44-46
To address the semantic heterogeneity in power-system data integration, an ontology-based data integration framework is proposed. Based on the data-requirement model of a power-parameter estimation system, the types of semantic conflicts arising in data integration are analyzed, and an ontology-based semantic description module is added to the data-integration middleware of the traditional integration framework. Ontologies are used to describe the concepts of the information-resource domain, enabling active discovery of semantic conflicts and the construction of semantic mappings. Experimental results show that the framework effectively resolves semantic heterogeneity during data integration.

4.
An ontology consists of the terms of a specific information domain and the relationships among them; it is semantically rich metadata through which information about the underlying databases can be obtained. Based on existing geographic databases and an already built geographic-information domain ontology, an application ontology database suited to geographic data sets is established. Rule knowledge is specified with description logic, associations are established between the spatial database and the ontology database and among the ontology databases, and an ontology-driven spatio-temporal query method is proposed. When a spatio-temporal entity is to be queried, logical reasoning in the ontology database produces the query result, which is then returned. A tobacco-planting query in a digital-tobacco application is used as an example to verify the feasibility and effectiveness of the method.

5.
The labeling of the regions of a segmented image according to a semantic representation (ontology) is usually associated with the notion of understanding. The high combinatorial complexity of this problem can be reduced by local checking of constraints between the elements of the ontology. The classical definition of the Finite Domain Constraint Satisfaction Problem assumes that the matching between regions and labels is bijective. Unfortunately, in image interpretation the matching is often non-univocal: images are frequently over-segmented, so one object is made up of several regions. Such non-univocal matching between data and a conceptual graph was not possible until a decisive step was taken with the introduction of arc consistency with bilevel constraint (FDCSPBC). However, this extension is only adequate when the matching corresponds to a surjective function. In medical image analysis, non-functional relations are often encountered, for example when an unexpected object such as a tumor appears; in this case the data cannot be mapped to the conceptual graph with a classical approach. In this paper we propose an extension of FDCSPBC to solve the constraint satisfaction problem for non-functional relations.
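The non-univocal matching problem discussed in this abstract can be illustrated with a toy constraint-satisfaction sketch in which several regions may receive the same label (over-segmentation). The regions, labels and adjacency constraint below are invented for illustration; this is not the FDCSPBC algorithm itself.

```python
# Toy illustration of non-bijective region-to-label matching: several regions
# may share one label, and assignments are filtered by a spatial constraint.
from itertools import product

regions = ["r1", "r2", "r3"]          # hypothetical over-segmented regions
labels = ["brain", "ventricle"]        # hypothetical ontology labels

# Constraint: a region labelled "ventricle" must be adjacent to a "brain" region.
adjacent = {("r1", "r2"), ("r2", "r1"), ("r2", "r3"), ("r3", "r2")}

def consistent(assignment):
    for region, label in assignment.items():
        if label == "ventricle":
            if not any(assignment.get(n) == "brain"
                       for n in regions
                       if (region, n) in adjacent):
                return False
    return True

# Enumerate all (non-bijective) assignments and keep the consistent ones.
solutions = [dict(zip(regions, combo))
             for combo in product(labels, repeat=len(regions))
             if consistent(dict(zip(regions, combo)))]
print(solutions)
```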

6.
This paper presents the development of a network of ontology networks that enables data mediation between the Employment Services (ESs) participating in a semantic interoperability platform for the exchange of Curricula Vitae (CVs) and job offers in different languages. Such a network is formed by (1) a set of language-dependent local ontology networks, each representing the local and particular view that an ES has of the employment market; and (2) a reference ontology network developed in English that represents a standardized and agreed-upon terminology of the European employment market. In this network each local ontology network is aligned with the reference ontology network so that search queries, CVs, and job offers can be mediated through these alignments from any ES. The development of the ontologies has followed the methodological guidelines issued by the NeOn Methodology and focuses mainly on scenarios that involve reusing and re-engineering knowledge resources already agreed upon by employment experts and standardization bodies. This paper explains how these methodological guidelines have been applied to building e-employment ontologies. In addition, it shows that building ontologies by reusing and re-engineering agreed-upon non-ontological resources speeds up ontology development, reduces development costs, and captures, in a more formal representation, knowledge already agreed upon by a community of people.

7.
Reliable, up-to-date and individually relevant health information is provided to citizens on the Web by governmental, non-governmental, business and other organizations. Currently the information is published with little co-ordination and co-operation between the publishers. For publishers, this means duplicated work and costs, because the same information is published on many websites, and maintaining links between websites requires additional work. From the citizens' point of view, finding content is difficult due to, for example, differences between laymen's vocabularies and medical terminology, and due to the difficulty of aggregating information from several sites. To solve these problems, we present a national-scale semantic publishing system, HealthFinland, which consists of (1) a centralized content infrastructure of health ontologies and services with tools, (2) a distributed semantic content creation channel based on several health organizations, and (3) an intelligent semantic portal aggregating and presenting the contents from intuitive and health-promoting end-user perspectives, both for human users and for other websites and portals.

8.
In this paper, we propose a new approach that uses an ontology to improve the precision of terminology extraction from documents. First, a linguistic method is used to extract terminological patterns from documents. Then, similarity measures within the framework of an ontology are employed to rank the semantic dependency of the noun words in a pattern. Finally, a predefined proportion of the patterns, ranked by their semantic dependencies, is retained and regarded as terminology. Experiments on the Reuters-21578 corpus show that the WordNet ontology, which we adopted for extracting terminology from English documents, can significantly improve the precision of the classical linguistic method for terminology extraction.
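A rough sketch of the ranking step, assuming WordNet path similarity as the ontology-based measure: the nouns of each candidate pattern are scored by their pairwise relatedness, and only the top-scoring candidates would be kept as terminology. The candidate phrases and the averaging formula are illustrative, not the paper's exact method; the NLTK WordNet corpus must be available (nltk.download("wordnet")).

```python
# Sketch: rank candidate noun-phrase patterns by the semantic relatedness of
# their nouns in WordNet. Phrases and scoring formula are illustrative only.
from nltk.corpus import wordnet as wn

def semantic_dependency(phrase: str) -> float:
    """Average pairwise path similarity between the nouns of a phrase."""
    words = phrase.split()
    scores = []
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            s1 = wn.synsets(words[i], pos=wn.NOUN)
            s2 = wn.synsets(words[j], pos=wn.NOUN)
            if s1 and s2:
                sim = s1[0].path_similarity(s2[0])
                if sim is not None:
                    scores.append(sim)
    return sum(scores) / len(scores) if scores else 0.0

candidates = ["interest rate", "crude oil", "quarterly profit"]
ranked = sorted(candidates, key=semantic_dependency, reverse=True)
print(ranked)   # keep only a predefined top proportion as terminology
```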

9.
In recent years, researchers have been developing algorithms for the automatic mapping and merging of ontologies to meet the demands of interoperability between heterogeneous and distributed information systems. However, state-of-the-art ontology mapping and merging systems are still semi-automatic: they reduce the burden of manually creating and maintaining mappings, but need human intervention for validation. The contribution presented in this paper pushes human intervention one step further back by automatically identifying semantic inconsistencies in the early stages of ontology merging. We detect semantic heterogeneities that arise from conflicts among Generalized Concept Inclusions, Property Subsumption Criteria, and Constraint Satisfaction Mechanisms in local heterogeneous ontologies, which become obstacles to generating a semantically consistent global merged ontology. We present several algorithms that detect such semantic inconsistencies through subsumption analysis of concepts and properties in the local ontologies, starting from the list of initial mappings, and we provide ontological patterns for resolving these inconsistencies automatically. The result is a global merged ontology free from "circulatory error in class/property hierarchy", "common class between disjoint classes/properties", "redundancy of subclass/subproperty relations" and other types of "semantic inconsistency" errors. Experiments on real ontologies show that our algorithms save the time and cost of traversing local ontologies, improve system performance by producing only consistent, accurate mappings, and reduce the users' burden of ensuring the satisfiability of the merged ontology.
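One of the listed error types, "circulatory error in class/property hierarchy", amounts to a cycle appearing in the merged subsumption graph. A minimal sketch of such a check is shown below, with hypothetical class names and mappings; the paper's own algorithms also cover disjointness and redundancy checks, which are omitted here.

```python
# Sketch of one consistency check on a merged ontology: the subclass-of graph
# must stay acyclic. Class names and edges are hypothetical.
def has_cycle(subclass_of):
    """subclass_of: dict mapping a class to the set of its direct superclasses."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {c: WHITE for c in subclass_of}

    def visit(node):
        colour[node] = GREY
        for parent in subclass_of.get(node, ()):
            state = colour.setdefault(parent, WHITE)
            if state == GREY:          # back edge -> cycle
                return True
            if state == WHITE and visit(parent):
                return True
        colour[node] = BLACK
        return False

    return any(colour[c] == WHITE and visit(c) for c in list(subclass_of))

# Merging two local hierarchies can introduce a cycle via the mappings:
merged = {"Car": {"Vehicle"}, "Vehicle": {"Automobile"}, "Automobile": {"Car"}}
print(has_cycle(merged))   # True -> the mapping that closes the cycle is rejected
```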

10.
This paper introduces an ontology-based semantic integration method for the data warehouse domain. First, a domain ontology and local ontologies for the data sources are built; a mapping algorithm over the concept trees of the local ontologies then yields a global ontology for the data sources, which is in turn mapped to the domain ontology to obtain the mapping relations. Finally, ontology reasoning derives the implicit semantic relations, and the resulting semantic relations guide the extraction, transformation and loading process, achieving semantic-level data integration in the data warehouse.
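The final step, using the derived semantic relations to guide extraction, transformation and loading, can be sketched as a mapping-driven transform. The source columns, warehouse concepts and rows below are invented for illustration and are not taken from the paper.

```python
# Sketch: semantic mappings derived from the ontologies drive the transform
# stage of ETL, so columns from heterogeneous sources land in the same
# warehouse field. All mappings and rows are hypothetical.
semantic_mapping = {
    "crm.cust_name": "Customer.name",      # mapping obtained via the local ontologies
    "erp.client":    "Customer.name",      # synonym discovered through ontology reasoning
    "crm.birth_dt":  "Customer.birthDate",
}

def transform(source: str, row: dict) -> dict:
    """Rename source columns to warehouse concepts according to the mappings."""
    out = {}
    for column, value in row.items():
        concept = semantic_mapping.get(f"{source}.{column}")
        if concept:
            out[concept] = value
    return out

print(transform("crm", {"cust_name": "Ada", "birth_dt": "1990-01-01"}))
print(transform("erp", {"client": "Ada"}))   # both rows align on Customer.name
```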

11.
Research on an Ontology-Based Grid Resource Matching Algorithm
Because grids are dynamic and heterogeneous, traditional exact matching based on resource attributes is inflexible and scales poorly. In recent years, ontologies have been introduced into grid computing as a general and extensible information-system modeling tool at the semantic level, so that a grid resource allocation system can retrieve grid resource information efficiently and precisely. The usual technique is to build and maintain a single, centralized and consistent grid resource ontology; such a centralized ontology is ill-suited to resource matching in distributed P2P grids. For P2P grids, this paper proposes a resource-matching model based on a distributed grid ontology: the global ontology is composed of the independent local grid-resource ontologies of the individual nodes, and resource matching is controlled autonomously by the nodes in a fully distributed manner. This approach is highly scalable and better suited to resource matching in P2P grids.

12.
Building on ontology-based information extraction, a framework for a semantic information extraction system in the personal-information domain is proposed, extending semantic extraction from the Web domain to personal information. The system builds ontologies for web pages, e-mail, local databases and local folders within the personal-information domain, and uses the semantic associations between these ontologies to enable data exchange within the domain. The implementation of the semantic information extraction system is described in detail, and the semantic extraction algorithm is presented with e-mail as the main example.

13.
A machine-interpretable, unified information model is the data foundation for semantics-based medical image retrieval. This paper discusses the problems in the use of medical images and their related information: heterogeneous data, inconsistent annotation terminology and syntax, and data formats that do not support existing data mining and semantic image retrieval. An ontology-based medical image information integration scheme is proposed. Based on an analysis of the sources of medical image information and their relationships, and drawing on domain expert knowledge, a medical image information ontology model is designed using the "seven-step" ontology construction method proposed by Stanford University; persistence of the ontology model, extraction of the original data and data integration are implemented, solving the problems in using medical image information. The model has been applied in a medical image retrieval system.

14.
Information sources such as relational databases, spreadsheets, XML, JSON, and Web APIs contain a tremendous amount of structured data that can be leveraged to build and augment knowledge graphs. However, they rarely provide a semantic model to describe their contents. Semantic models of data sources represent the implicit meaning of the data by specifying the concepts and the relationships within the data. Such models are the key ingredient for automatically publishing the data into knowledge graphs. Manually modeling the semantics of data sources requires significant effort and expertise, and although desirable, building these models automatically is a challenging problem. Most related work focuses on semantic annotation of the data fields (source attributes); however, constructing a semantic model that explicitly describes the relationships between the attributes, in addition to their semantic types, is critical. We present a novel approach that exploits the knowledge from a domain ontology and the semantic models of previously modeled sources to automatically learn a rich semantic model for a new source. This model represents the semantics of the new source in terms of the concepts and relationships defined by the domain ontology. Given some sample data from the new source, we leverage the knowledge in the domain ontology and the known semantic models to construct a weighted graph that represents the space of plausible semantic models for the new source. We then compute the top k candidate semantic models and suggest to the user a ranked list of semantic models for the new source. The approach takes user corrections into account to learn more accurate semantic models for future data sources. Our evaluation shows that our method generates expressive semantic models for data sources and services with minimal user input. These precise models make it possible to automatically integrate data across sources and provide rich support for source discovery and service composition. They also make it possible to automatically publish semantic data into knowledge graphs.
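The candidate-ranking idea can be illustrated with a simplified sketch: plausible semantic models for a new source are scored by how strongly the domain ontology and previously modelled sources support each attribute-to-concept assignment and each relationship, and the top-k models are suggested to the user. All weights and candidate models below are invented for illustration; the paper builds the candidates from a weighted graph rather than enumerating them by hand.

```python
# Sketch: score candidate semantic models by the support their nodes and edges
# receive from previously learned models, then suggest the top-k to the user.
import heapq

# Hypothetical support weights learned from known semantic models.
node_weight = {("title", "CreativeWork.name"): 0.9, ("title", "Person.name"): 0.2,
               ("author", "Person.name"): 0.8}
edge_weight = {("CreativeWork", "creator", "Person"): 0.7,
               ("Person", "knows", "Person"): 0.1}

candidate_models = [
    {"nodes": [("title", "CreativeWork.name"), ("author", "Person.name")],
     "edges": [("CreativeWork", "creator", "Person")]},
    {"nodes": [("title", "Person.name"), ("author", "Person.name")],
     "edges": [("Person", "knows", "Person")]},
]

def score(model):
    return (sum(node_weight.get(n, 0.0) for n in model["nodes"]) +
            sum(edge_weight.get(e, 0.0) for e in model["edges"]))

top_k = heapq.nlargest(1, candidate_models, key=score)
print(top_k)   # the highest-ranked model is proposed to the user first
```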

15.
SPHeRe     
The abundance of semantically related information has resulted in semantic heterogeneity. Ontology matching is one of the techniques used for resolving semantic heterogeneity; however, ontology matching is computationally intensive and can be a time-consuming process. Medium- to large-scale ontologies can take from hours up to days of computation, depending on the available computational resources and the complexity of the matching algorithms. This delay in producing results makes ontology matching unsuitable for interactive and semi-real-time Semantic Web systems. This paper presents SPHeRe, a performance-oriented system that improves ontology matching performance by exploiting parallelism on a multicore cloud platform. Parallelism has been overlooked by ontology matching systems; SPHeRe takes advantage of this opportunity and provides a solution by: (i) creating and caching serialized subsets of the candidate ontologies with single-step parallel loading; (ii) using lightweight, matcher-based and redundancy-free subsets that result in smaller memory footprints and faster load times; and (iii) implementing data-parallel distribution over the subsets of the candidate ontologies, exploiting the multicore distributed hardware of the cloud platform for parallel ontology matching and execution. Performance evaluation of SPHeRe on a tri-node (12-core) private cloud infrastructure has shown up to 3 times faster ontology load time and up to 8 times smaller memory footprint than the Web Ontology Language (OWL) frameworks Jena and OWLAPI. Furthermore, by utilizing computational resources most efficiently, SPHeRe provides the best scalability in contrast with other ontology matching systems, i.e., GOMMA, LogMap, AROMA, and AgrMaker. On a private cloud instance with 8 cores, SPHeRe outperforms the most performance-efficient ontology matching system, GOMMA, by 40% in scalability and 4 times in performance.
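The data-parallelism idea behind SPHeRe can be illustrated with a small sketch: the classes of a source ontology are split into subsets and each subset is matched against the target ontology on its own worker. The trivial label-equality matcher and the generated class lists are placeholders, not SPHeRe's actual matchers or serialization scheme.

```python
# Sketch of data-parallel ontology matching: split the source classes into
# subsets and match each subset on a separate process.
from multiprocessing import Pool

source_classes = [f"Concept_{i}" for i in range(1000)]
target_classes = {f"concept_{i}" for i in range(0, 1000, 2)}

def match_subset(subset):
    """Match one subset of source classes against the whole target ontology."""
    return [(c, c.lower()) for c in subset if c.lower() in target_classes]

def chunks(seq, n):
    size = (len(seq) + n - 1) // n
    return [seq[i:i + size] for i in range(0, len(seq), size)]

if __name__ == "__main__":
    with Pool(processes=4) as pool:                     # one worker per core
        partial = pool.map(match_subset, chunks(source_classes, 4))
    alignments = [pair for part in partial for pair in part]
    print(len(alignments), "correspondences found")
```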

16.
It is natural for ontologies to evolve over time, and these changes can occur at both the structural and the semantic level. Due to changes to an ontology, its data instances may become invalid and, as a result, non-interpretable. In this paper, we address precisely this problem: the validity of data instances under ontological evolution. Towards this end, we make three novel contributions to the area of the Semantic Web. First, we propose formal notions of structural validity and semantic validity of data instances, and present approaches to ensure them. Second, we propose the semantic view as part of an ontology, and demonstrate that it is sufficient to validate a data instance against the semantic view rather than the entire ontology. We discuss how the semantic view can be generated through an implication analysis, i.e., how semantic changes to one component imply semantic changes to other components in the ontology. Third, we propose a validity identification approach in which a hash value of the semantic view is maintained locally at the data instance.
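The third contribution lends itself to a minimal sketch: each data instance locally keeps a hash of the semantic view it was validated against, so a changed hash signals that revalidation is needed. The shape of the semantic view below is a made-up example, not the paper's exact structure.

```python
# Sketch: store a fingerprint of the semantic view with each data instance and
# compare it after the ontology evolves. The view structure is hypothetical.
import hashlib
import json

def view_fingerprint(semantic_view: dict) -> str:
    """Canonical JSON serialisation hashed with SHA-256."""
    canonical = json.dumps(semantic_view, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

semantic_view_v1 = {"class": "Patient", "properties": ["name", "age"]}
instance = {"name": "Ada", "age": 42,
            "_view_hash": view_fingerprint(semantic_view_v1)}

# After the ontology evolves, the stored hash no longer matches the new view:
semantic_view_v2 = {"class": "Patient", "properties": ["name", "age", "bloodType"]}
if instance["_view_hash"] != view_fingerprint(semantic_view_v2):
    print("semantic view changed -> instance must be revalidated")
```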

17.
Key concept extraction is a major step in ontology learning, which aims to build an ontology by identifying relevant domain concepts and their semantic relationships from a text corpus. The success of ontology development using key concept extraction strongly depends on how relevant the identified key concepts are: if they are not closely relevant to the domain, the constructed ontology will not correctly and fully represent the domain knowledge. In this paper, we propose a novel method, named CFinder, for key concept extraction. Given a text corpus in the target domain, CFinder first extracts noun phrases as candidate key concepts, using linguistic patterns based on Part-Of-Speech (POS) tags. To calculate the weights (or importance) of these candidates within the domain, CFinder combines their statistical knowledge with domain-specific knowledge indicating their relative importance within the domain. The calculated weights are further enhanced by considering an inner structural pattern of the candidates. The effectiveness of CFinder is evaluated, using a recently developed ontology for the domain of 'emergency management for mass gatherings', against state-of-the-art methods for key concept extraction, including Text2Onto, KP-Miner and Moki. The comparative evaluation results show that CFinder statistically significantly outperforms all three methods in terms of F-measure and average precision.
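The candidate-extraction and weighting steps can be pictured with a toy sketch: consecutive adjective/noun POS tags form candidate noun phrases, which are then weighted by corpus frequency. CFinder additionally mixes in domain-specific knowledge and an inner structural pattern, which are omitted here; the tagged sentence is invented for illustration.

```python
# Toy sketch of POS-pattern candidate extraction and frequency-based weighting.
from collections import Counter

# Pre-tagged toy sentence (token, POS); in practice a POS tagger produces this.
tagged = [("emergency", "NN"), ("management", "NN"), ("for", "IN"),
          ("mass", "JJ"), ("gatherings", "NNS"), ("requires", "VBZ"),
          ("emergency", "NN"), ("management", "NN"), ("plans", "NNS")]

def noun_phrases(tokens):
    """Greedily collect maximal runs of JJ/NN* tags as candidate key concepts."""
    phrases, current = [], []
    for word, tag in tokens:
        if tag.startswith("NN") or tag == "JJ":
            current.append(word)
        else:
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

weights = Counter(noun_phrases(tagged))
print(weights.most_common())   # higher-weighted candidates survive as key concepts
```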

18.
A Comparative Study of UML and OWL for Ontology Modeling
As the foundation of the Semantic Web, ontologies, which provide shared conceptual models, play an important role. However, current ontology development tools and techniques are built on KIF and KL-ONE from the AI field and are difficult to understand and master. By introducing the basic concepts and modeling primitives of UML and OWL, analyzing how each is used for ontology modeling through a modeling example, and evaluating the two approaches to ontology modeling, it can be seen that applying UML to ontology development provides a standard and intuitive unified modeling process and a convenient means of communication and understanding, and is therefore of considerable practical and theoretical value for ontology development.
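For comparison with a UML class diagram, the same kind of model can be expressed in OWL from Python; the sketch below uses the owlready2 library and an invented university example, and is not taken from the paper.

```python
# Sketch of OWL ontology modelling in Python with owlready2. The ontology IRI,
# classes and property are hypothetical; in UML the same model would be a
# Person class, a Course class and a "teaches" association between them.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/university.owl")

with onto:
    class Person(Thing):            # owl:Class, analogous to a UML class
        pass

    class Course(Thing):
        pass

    class teaches(ObjectProperty):  # owl:ObjectProperty, analogous to a UML association
        domain = [Person]
        range = [Course]

print(list(onto.classes()))
```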

19.
Semantic Web services extend Web services with Semantic Web technology. Making information "semantic" means annotating it with concepts from an ontology held in the computer; the technologies that support this process are the Semantic Web technologies, namely ontology construction, ontology use (semantic reasoning) and semantic annotation of information. The extension of Web services by Semantic Web technology can be made concrete as two tasks: the three kinds of service participants, namely the service provider, the service requester and the service registry, each embed an ontology; and the three kinds of interaction messages, for publishing, discovery and binding, are all semantically annotated.

20.
Context: The definition of a comprehensive facility data model is a prerequisite for more advanced energy management systems capable of tackling the underlying heterogeneity of complex infrastructures, thus providing more flexible data interpretation and event management and more advanced communication and control capabilities. Objective: This paper proposes one possible implementation of a facility data model using the concept of an ontology, as part of the contemporary Semantic Web paradigm. Method: The proposed facility ontology model was defined and developed to capture all the static knowledge (such as technical vendor data, proprietary data types, and communication protocols) related to the significant energy consumers of the target infrastructure. Furthermore, the paper describes the overall methodology and how the common semantics offered by the ontology were used to improve the interoperability and energy management of complex infrastructures. Initially, a core facility ontology, representing the generic facility model and providing the general concepts behind the modelling, was defined. Results: To develop a full model of a specific facility infrastructure, Malpensa and Fiumicino airports in Italy were taken as a test-bed platform for developing the airport ontology, owing to the variety of technical systems installed at these sites. For the development of the airport ontology, the core facility ontology was first extended and then populated to reflect the actual state of the target airport facility. Conclusion: The developed ontology was tested in the environment of the two pilots, and the proposed solution proved to be a valuable link between separate ICT systems involving equipment from various vendors, at both the syntactic and semantic levels, offering facility managers the ability to retrieve high-level information about the performance of significant energy consumers.
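The core-ontology/extension split described in the Method and Results can be illustrated with a short rdflib sketch: a generic core facility class is defined once, the airport ontology extends it with site-specific consumer classes, and individuals are then added. All URIs, class names and labels are illustrative, not the project's actual model.

```python
# Sketch of the core/extension split with rdflib: a generic core facility
# concept, an airport-specific subclass, and one populated individual.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL

CORE = Namespace("http://example.org/core-facility#")
AIR = Namespace("http://example.org/airport#")

g = Graph()
g.bind("core", CORE)
g.bind("air", AIR)

# Core facility ontology: generic concepts shared by every infrastructure.
g.add((CORE.EnergyConsumer, RDF.type, OWL.Class))

# Airport extension: site-specific consumer types subclass the core concept.
g.add((AIR.BaggageHandlingSystem, RDF.type, OWL.Class))
g.add((AIR.BaggageHandlingSystem, RDFS.subClassOf, CORE.EnergyConsumer))

# Population: an individual installed at the site, with a human-readable label.
g.add((AIR.bhs_terminal1, RDF.type, AIR.BaggageHandlingSystem))
g.add((AIR.bhs_terminal1, RDFS.label, Literal("Baggage handling, Terminal 1")))

print(g.serialize(format="turtle"))
```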
