Similar Documents
The query returned 20 similar documents (search time: 31 ms).
1.
With the commoditization of digital devices, personal information and media sharing is becoming a key application on the pervasive Web. In such a context, data annotation rather than data production is the main bottleneck. Metadata scarcity represents a major obstacle preventing efficient information processing in large and heterogeneous communities. However, social communities also open the door to new possibilities for addressing local metadata scarcity by taking advantage of global collections of resources. We propose to tackle the lack of metadata in large-scale distributed systems through a collaborative process leveraging both content and metadata. We develop a community-based and self-organizing system called PicShark in which information entropy—in terms of missing metadata—is gradually alleviated through decentralized instance and schema matching. Our approach focuses on semi-structured metadata and confines computationally expensive operations to the edge of the network, while keeping distributed operations as simple as possible to ensure scalability. PicShark builds on structured Peer-to-Peer networks for distributed look-up operations, but extends the application of self-organization principles to the propagation of metadata and the creation of schema mappings. We demonstrate the practical applicability of our method in an image sharing scenario and provide experimental evidence illustrating the validity of our approach. The work presented in this article was supported by the Swiss NSF National Competence Center in Research on Mobile Information and Communication Systems (NCCR MICS, grant number 5005-67322), by the EPFL Center for Global Computing as part of the European project NEPOMUK No FP6-027705, and by the Líon project supported by Science Foundation Ireland under Grant No. SFI/02/CE1/I131.
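The entropy-as-missing-metadata idea can be illustrated with a minimal sketch (the field names, records, and matching logic below are our own assumptions, not PicShark's code): a collection's entropy proxy is its fraction of empty metadata fields, which drops as matched near-duplicates donate their values.

```python
# Minimal sketch of entropy-as-missing-metadata (illustrative, not PicShark's code).
FIELDS = ("title", "location", "creator", "date")  # assumed schema

def entropy(records):
    """Fraction of metadata fields that are missing across a collection."""
    missing = sum(1 for r in records for f in FIELDS if not r.get(f))
    return missing / (len(records) * len(FIELDS))

def propagate(record, matched_duplicates):
    """Fill a record's empty fields from near-duplicate records found elsewhere."""
    for dup in matched_duplicates:
        for f in FIELDS:
            if not record.get(f) and dup.get(f):
                record[f] = dup[f]
    return record

photos = [{"title": "lake", "location": ""}, {"title": "", "location": ""}]
print(entropy(photos))                      # 0.875 before propagation
propagate(photos[1], [{"title": "lake", "location": "Geneva", "date": "2007"}])
print(entropy(photos))                      # 0.5 after fields are filled
```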

2.
Design and Implementation of WADE, a Metacomputing Experimental System Based on a Campus Network (cited 10 times: 0 self-citations, 10 by others)
A metacomputing system is a set of geographically dispersed, heterogeneous computing resources that can be used as a single virtual whole; these resources include computers, databases, expensive instruments, and so on. Metacomputing systems are heterogeneous in both hardware and software, which makes them well suited to executing complex applications with different kinds of inherent parallelism. Most existing parallel systems are homogeneous and lack this advantage, so research on metacomputing systems for heterogeneous environments is of practical significance. WADE is an experimental metacomputing system developed on a campus network. It uses MD to support conversion between heterogeneous data formats, uses object-oriented technology to implement a single system image, uses a priority-constrained task scheduling algorithm to schedule and run applications, and provides interfaces to popular parallel programming packages such as PVM.
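As an illustration of priority-constrained task scheduling of the kind mentioned above, here is a minimal list-scheduling sketch (the task graph, priorities, and two-machine setup are invented for the example; the abstract does not specify WADE's actual algorithm):

```python
# Minimal precedence-constrained list scheduling (illustrative sketch).
# tasks: name -> (duration, set of predecessors); higher priority runs first.
tasks = {
    "A": (3, set()), "B": (2, {"A"}), "C": (4, {"A"}), "D": (1, {"B", "C"}),
}
priority = {"A": 3, "C": 2, "B": 1, "D": 0}  # e.g., from critical-path length

machines = [0.0, 0.0]          # next free time of each machine
finish = {}                    # task -> finish time
scheduled = []

while len(finish) < len(tasks):
    # ready tasks: all predecessors have finished
    ready = [t for t, (_, pre) in tasks.items()
             if t not in finish and pre <= finish.keys()]
    t = max(ready, key=priority.get)               # highest priority first
    dur, pre = tasks[t]
    earliest = max([finish[p] for p in pre], default=0.0)
    m = min(range(len(machines)), key=machines.__getitem__)
    start = max(machines[m], earliest)
    machines[m] = finish[t] = start + dur
    scheduled.append((t, m, start))

print(scheduled)  # [('A', 0, 0.0), ('C', 1, 3.0), ('B', 0, 3.0), ('D', 0, 7.0)]
```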

3.
In recent years business-to-business (B2B) e-commerce has been subject to major rethinking. A paradigm shift can be observed from document-centric, file-based interchange of business information to process-centric and, finally, to service-based information exchange. On a business level, a lot of work has been done to capture business models and collaborative business processes of an enterprise; further initiatives address the identification of customer services and the formalization of business service level agreements (SLA). On a lower, i.e., technical level, the focus is on moving towards service-oriented architectures (SOA). These developments promise more flexibility, market entry at lower cost, and easier IT alignment to changing market conditions. This explains the overwhelming quantity of specifications and approaches targeting the area of B2B—these approaches are partly competing and overlapping. In this paper we provide a survey of the most promising approaches at both levels and classify them using the Open-edi reference model standardized by ISO. Whereas on the technical level, service-oriented architecture is becoming the predominant approach, on the business level the landscape is more heterogeneous. In this context, we propose—in line with the services science approach—to integrate business modeling with process modeling in order to make the transformation from business services to Web services more transparent.

4.
Identifier attributes—very high-dimensional categorical attributes such as particular product ids or people's names—are rarely incorporated in statistical modeling. However, they can play an important role in relational modeling: it may be informative to have communicated with a particular set of people or to have purchased a particular set of products. A key limitation of existing relational modeling techniques is how they aggregate bags (multisets) of values from related entities. The aggregations used by existing methods are simple summaries of the distributions of features of related entities: e.g., MEAN, MODE, SUM, or COUNT. This paper's main contribution is the introduction of aggregation operators that capture more information about the value distributions, by storing meta-data about value distributions and referencing this meta-data when aggregating—for example by computing class-conditional distributional distances. Such aggregations are particularly important for aggregating values from high-dimensional categorical attributes, for which the simple aggregates provide little information. In the first half of the paper we provide general guidelines for designing aggregation operators, introduce the new aggregators in the context of the relational learning system ACORA (Automated Construction of Relational Attributes), and provide theoretical justification. We also conjecture special properties of identifier attributes, e.g., they proxy for unobserved attributes and for information deeper in the relationship network. In the second half of the paper we provide extensive empirical evidence that the distribution-based aggregators indeed do facilitate modeling with high-dimensional categorical attributes, and in support of the aforementioned conjectures. Editors: Hendrik Blockeel, David Jensen and Stefan Kramer. An erratum to this article is available at .
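A minimal sketch of distribution-based aggregation (a toy rendering of ours, not ACORA's code): store class-conditional value distributions as meta-data, then aggregate a bag of categorical values from related entities into its similarities to those reference distributions.

```python
# Toy distribution-based aggregation (illustrative, not ACORA's implementation).
from collections import Counter
import math

def distribution(bag):
    """Empirical distribution of a bag of categorical values."""
    total = len(bag)
    return {v: n / total for v, n in Counter(bag).items()}

def cosine(p, q):
    """Cosine similarity between two sparse distributions."""
    dot = sum(p[v] * q.get(v, 0.0) for v in p)
    norm = (math.sqrt(sum(x * x for x in p.values()))
            * math.sqrt(sum(x * x for x in q.values())))
    return dot / norm if norm else 0.0

# Meta-data: class-conditional distributions of an identifier attribute,
# estimated from training data (values here are invented).
class_dist = {
    "churn":    distribution(["id7", "id7", "id9"]),
    "no_churn": distribution(["id1", "id2", "id3", "id7"]),
}

# Aggregate one entity's bag of related identifiers into two numeric features:
bag = ["id7", "id9", "id9"]
features = {c: cosine(distribution(bag), d) for c, d in class_dist.items()}
print(features)  # noticeably higher similarity to the "churn" distribution
```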

5.
We propose a goal programming framework that aims at automating e-commerce transactions. This framework consists of three basic layers: deal definition—defining the deal’s parameters and associated constraints (e.g., item, price, delivery dates); deal manipulation—a collection of procedures for shaping deals to attain desired goals (e.g., earliest delivery and minimum price); and an applications layer that employs these procedures within some negotiation settings (e.g., an auction-related application presents a “better offer” while bidding on a contract). Our proposed foundation is rich enough to support a wide array of applications ranging from 1-1 and 1-n negotiations (auctions) to deal valuation and deal splitting. Whereas the techniques are appropriate to a multitude of settings, we shall mainly present them in the context of business-to-business (B2B) commerce, where we see the greatest short-term benefits. O. Shmueli’s work is partially supported by the Fund for the Promotion of Research at the Technion.
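To make the goal-programming layer concrete, here is a minimal sketch (the deal parameters, goal values, and weights are invented for illustration): hard constraints bound the deal, while weighted deviation variables measure how far each goal is missed.

```python
# Minimal goal-programming sketch for a deal (illustrative numbers).
# Variables: x = [price, delivery_day, dev_price, dev_delivery]
from scipy.optimize import linprog

w_price, w_delivery = 1.0, 5.0            # relative importance of the goals
c = [0, 0, w_price, w_delivery]           # minimize weighted goal deviations

# Goals: price <= 100, delivery <= day 7, each softened by a deviation variable:
#   price - dev_price <= 100      delivery - dev_delivery <= 7
A_ub = [[1, 0, -1, 0],
        [0, 1, 0, -1]]
b_ub = [100, 7]

# Hard constraints: seller will not go below 120; earliest delivery is day 3.
bounds = [(120, None), (3, None), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
price, day, dp, dd = res.x
print(price, day, dp, dd)  # price 120 misses the 100 goal by 20; delivery hits day 3
```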

6.
7.
Visual Knowledge Representation and Intelligent Image Segmentation (cited once: 0 self-citations, 1 by others)
Automatic medical image analysis shows that image segmentation is a crucial task for any practical AI system in this field. On the basis of an evaluation of existing segmentation methods, a new image segmentation method is presented. To seek the perfect solution to knowledge representation in low-level machine vision, a new knowledge representation approach, the “Notebook” approach, is proposed, and the processing of visual knowledge is discussed at all levels. To integrate computer vision theory with Gestalt psychology and knowledge engineering, a new integrated method for intelligent image segmentation of sonograms, “generalized-pattern guided segmentation”, is proposed. With the methods and techniques mentioned above, a medical diagnosis expert system for sonograms can be built. Preliminary experiments are also introduced.

8.
Increased network speeds coupled with new services delivered via the Internet have increased the demand for intelligence and flexibility in network systems. This paper argues that both can be provided by new hardware platforms comprised of heterogeneous multi-core systems with specialized communication support. We present and evaluate an experimental network service platform that uses an emerging class of devices—network processors—as its communication support, coupled via a dedicated interconnect to a host processor acting as a computational core. A software infrastructure spanning both enables the dynamic creation of application-specific services on the network processor, mediated by middleware and controlled by kernel-level communication support. Experimental evaluations use a Pentium IV-based computational core coupled with an IXP 2400 network processor. The sample application services, run on both, include an image manipulation application and application-level multicasting.

9.
10.
In the last 20 years, several methodologies, models and tools have been developed for the analysis and optimisation of manufacturing systems in order to propose general improvements. Many of these techniques make extensive use of data modelling, simulation, decision-making support, expert systems and reference models. This paper presents the first outcome of a research effort to integrate manufacturing process analysis into an integrated modelling framework covering all aspects related to the shop-floor as it really is. The main methodologies and software tools have been identified and evaluated and the results tested on industrial examples. As a result of this evaluation it has been possible to identify the inefficiencies of the techniques. These problems are connected with integrating the different types of data to be analysed—such as quality, time, costs, resource capacity, productivity, flexibility or improvements—into a single analysis environment. The inefficiencies detected enable us to present a general framework for making better use of modelling techniques for manufacturing process analysis. Received July 2005 / Accepted January 2006

11.
Traditional information systems return answers after a user submits a complete query. Users often feel “left in the dark” when they have limited knowledge about the underlying data and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step toward solving this problem. In this paper, we study a new information-access paradigm, called “type-ahead search”, in which the system searches the underlying data “on the fly” as the user types in query keywords. It extends autocomplete interfaces by allowing keywords to appear at different places in the underlying data. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms for both single-keyword queries and multi-keyword queries, using previously computed and cached results in order to achieve a high interactive speed. We develop novel techniques to support fuzzy search by allowing mismatches between query keywords and answers. We have deployed several real prototypes using these techniques. One of them has been deployed to support type-ahead search on the UC Irvine people directory, which is used regularly and has been well received by users due to its friendly interface and high efficiency.
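The incremental, cache-based idea can be illustrated with a minimal sketch (our simplification; the paper's algorithms and index structures, including fuzzy matching, are considerably more sophisticated): each keystroke narrows the cached result of the previous prefix instead of re-scanning the whole collection.

```python
# Minimal incremental type-ahead sketch (illustrative, not the paper's algorithms).
class TypeAhead:
    def __init__(self, records):
        self.records = records
        self.cache = {}                      # prefix -> matching records

    def keystroke(self, prefix):
        """Answer a prefix query by shrinking the cached result for prefix[:-1]."""
        if prefix in self.cache:
            return self.cache[prefix]
        base = self.cache.get(prefix[:-1], self.records) if len(prefix) > 1 else self.records
        # Keywords may match anywhere in a record, not only at its start.
        hits = [r for r in base if any(w.startswith(prefix) for w in r.lower().split())]
        self.cache[prefix] = hits
        return hits

ta = TypeAhead(["Chen Li", "Lisa Chenoweth", "Guoliang Li", "Maria Silva"])
for p in ["l", "li", "lis"]:                 # simulate typing "lis"
    print(p, "->", ta.keystroke(p))
```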

12.
Large-scale tactile sensing applications in Robotics have become the focus of extensive research activities in the past few years, specifically for humanoid platforms. Research products include a variety of fundamentally different robot skin systems. The differences lie in technological (e.g., sensory modes and networking), system-level (e.g., modularity and scalability) and representation (e.g., data structures, coherency and access efficiency) aspects. However, differences within the same robot platform may be present as well. Different robot body parts (e.g., fingertips, forearms and a torso) may be endowed with robot skin that is tailored to meet specific design goals, which leads to local peculiarities as far as technological, system-level and representation solutions are concerned. This variety leads to the issue of designing a software framework able to: (i) provide a unified interface to access information originating from heterogeneous robot skin systems; (ii) assure portability among different robot skin solutions. In this article, a real-time framework designed to address both these issues is discussed. The presented framework, which is referred to as Skinware, is able to acquire large-scale tactile data from heterogeneous networks in real-time and to provide tactile information using abstract data structures for high-level robot behaviours. As a result, tactile-based robot behaviours can be implemented independently of the actual robot skin hardware and body-part-specific features. An extensive validation campaign has been carried out to investigate Skinware’s capabilities with respect to real-time requirements, data coherency and data consistency when large-scale tactile information is needed.
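The unified-interface requirement can be sketched as follows (a toy abstraction of ours; it does not reproduce Skinware's actual API, data structures, or real-time machinery): heterogeneous skin drivers all deliver taxel readings in one abstract form.

```python
# Toy sketch of a unified tactile interface over heterogeneous skin hardware
# (names and drivers are illustrative; this is not Skinware's API).
from abc import ABC, abstractmethod

class SkinDriver(ABC):
    @abstractmethod
    def read(self) -> list[tuple[str, float]]:
        """Return (taxel_id, normalized_pressure) pairs, whatever the hardware."""

class CapacitiveForearm(SkinDriver):
    def read(self):
        raw = {"f0": 512, "f1": 40}              # pretend 10-bit ADC counts
        return [(k, v / 1023.0) for k, v in raw.items()]

class ResistiveFingertip(SkinDriver):
    def read(self):
        raw = {"t0": 0.8}                        # pretend already-scaled values
        return list(raw.items())

def snapshot(drivers):
    """Behaviour code sees one flat, hardware-independent taxel list."""
    return [taxel for d in drivers for taxel in d.read()]

print(snapshot([CapacitiveForearm(), ResistiveFingertip()]))
```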

13.
Historically, information systems have been used to improve efficiency through such means as clerical automation, inventory status reporting and transactional processing systems. Today, however, to reduce costs, increase return on investments, and achieve competitive advantage, businesses need to have information systems that support managerial decision-making and result in improved effectiveness. To meet this requirement, new approaches are needed in order to define the right problem and work the problem right. By using such techniques as critical success factor analysis followed by a top-down system development approach, developing systems through prototyping and using end-user oriented software, these needs can be met. This article describes several company experiences of using a management systems planning and development process. This process in one company presented an opportunity to test the feasibility of developing an alignment between business goals and events critical to the success of the business. Management believed that to succeed in the future they must be forward thinking in their identification and use of information systems to improve managerial effectiveness. Their questions were “What should we do?” and “How should we do it?” By applying these techniques they were able to achieve outstanding results in a very short period of time.

14.
Advances in computer and communications technologies will provide new capabilities in high resolution image capture, high speed processing of data, high density storage systems, and integration of data, voice and image information, and process cooperation across distributed, heterogeneous systems. Current activities at NIST support US Government requirements for standards and technical assistance to exploit advanced technologies.

15.
Reachability analysis asks whether a system can evolve from legitimate initial states to unsafe states. It is thus a fundamental tool in the validation of computational systems—be they software, hardware, or a combination thereof. We recall a standard approach for reachability analysis, which captures the system in a transition system, forms another transition system as an over-approximation, and performs an incremental fixed-point computation on that over-approximation to determine whether unsafe states can be reached. We show this method to be sound for proving the absence of errors, and discuss its limitations for proving the presence of errors, as well as some means of addressing this limitation. We then sketch how program annotations for data integrity constraints and interface specifications—as in Bertrand Meyer’s paradigm of Design by Contract—can facilitate the validation of modular programs, e.g., by obtaining more precise verification conditions for software verification supported by automated theorem proving. Then we recap how the decision problem of satisfiability for formulae of logics with theories—e.g., bit-vector arithmetic—can be used to construct an over-approximating transition system for a program. Programs with data types comprised of bit-vectors of finite width require bespoke decision procedures for satisfiability. Finite-width data types challenge the reduction of that decision problem to one that off-the-shelf tools can solve effectively, e.g., SAT solvers for propositional logic. In that context, we recall the Tseitin encoding which converts formulae from that logic into conjunctive normal form—the standard format for most SAT solvers—with only linear blow-up in the size of the formula, but linear increase in the number of variables. Finally, we discuss the contributions that the three papers in this special section make in the areas that we sketched above.
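The Tseitin encoding recalled above can be shown concretely (a standard textbook construction; the example formula and variable-naming scheme below are ours): each gate of the formula tree gets a fresh variable and a fixed clause template, so the CNF grows linearly with the formula.

```python
# Standard Tseitin encoding for AND/OR/NOT formula trees (textbook sketch).
# A formula is a variable name, ("not", f), ("and", f, g), or ("or", f, g).
from itertools import count

def tseitin(formula, clauses, fresh):
    """Return a literal equivalent to `formula`, appending its defining clauses."""
    if isinstance(formula, str):
        return formula
    op, *args = formula
    lits = [tseitin(a, clauses, fresh) for a in args]
    v = f"x{next(fresh)}"                       # fresh variable for this gate
    if op == "not":
        a, = lits                               # v <-> ~a
        clauses += [[("-", v), ("-", a)], [("+", v), ("+", a)]]
    elif op == "and":
        a, b = lits                             # v <-> (a & b)
        clauses += [[("-", v), ("+", a)], [("-", v), ("+", b)],
                    [("+", v), ("-", a), ("-", b)]]
    elif op == "or":
        a, b = lits                             # v <-> (a | b)
        clauses += [[("+", v), ("-", a)], [("+", v), ("-", b)],
                    [("-", v), ("+", a), ("+", b)]]
    return v

clauses = []
root = tseitin(("or", ("and", "p", "q"), ("not", "p")), clauses, count())
clauses.append([("+", root)])                   # assert the whole formula
print(len(clauses), "clauses")                  # 9 here; growth is linear in size
```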

16.
This study examines an Emergency Medical Service in order to analyze the composite set of activities and instruments directed at locating the patient. Good management of information about the location of the emergency is highly relevant for a reliable rescue service, but this information depends on knowledge of the territory that is socially distributed between EMS operators and callers. Accordingly, the decision-making process often has to go beyond the emergency service protocols, engaging the operator in undertaking an open negotiation in order to transform the caller’s role from layman to “co-worker”. The patient’s location turns out to be an emergent phenomenon: collaborative work based on knowledge management involving two communities—the callers and the EMS operators—that overlap partially. Drawing examples from emergency calls, the study analyzes the practice of locating a patient as a complex and multi-layered process, highlighting the role played by new and old technologies (the information system and the paper maps) in this activity. We argue that CSCW technologies enable the blended use of different kinds of instruments and support an original interconnection between the professional localization systems and the public’s way of defining a position.

17.
Guoliang Li (李国良), Xuanhe Zhou (周煊赫). Journal of Software (软件学报), 2020, 31(3): 831-844
In the era of big data, database systems face three main challenges. First, traditional optimization techniques based on expert experience (e.g., cost estimation, join-order selection, parameter tuning) can no longer meet the performance requirements of heterogeneous data, massive applications, and large-scale user bases; learning-based optimization techniques can be designed to make databases more intelligent. Second, in the AI era many database applications need to use AI algorithms (e.g., image search inside a database); AI algorithms can be embedded into the database, database techniques can be used to accelerate them, and AI-based services can be provided from within the database. Third, traditional databases focus on general-purpose hardware (e.g., CPUs) and cannot fully exploit the advantages of new hardware (e.g., ARM, AI chips); in addition, beyond the relational model, databases need to support tensor models to accelerate AI operations. To address these challenges, this paper proposes an AI-native database system that integrates AI techniques into the database to provide self-monitoring, self-configuration, self-optimization, self-diagnosis, self-healing, self-security, and self-assembly capabilities, and that exposes AI functionality through declarative languages to lower the barrier to using AI. The paper describes five stages for realizing an AI-native database and discusses the challenges in designing one. Autonomous database tuning, query optimization based on deep reinforcement learning, cardinality estimation based on machine learning, and autonomous index/view recommendation are used as examples to demonstrate the advantages of an AI-native database.
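As a toy illustration of the machine-learning-based cardinality estimation mentioned above (the featurization, model, and numbers are minimal assumptions of ours, not the paper's method): fit a model from predicate features to the log-cardinality observed in past query executions.

```python
import numpy as np

# Toy learned cardinality estimator (illustrative; real systems use richer
# featurizations and models). Feature: width of a range predicate on one column;
# target: log(1 + true cardinality) observed from past query executions.
widths = np.array([[1.0], [5.0], [10.0], [20.0], [50.0]])
cards  = np.array([12, 60, 130, 240, 610])

X = np.hstack([widths, np.ones_like(widths)])       # add a bias term
y = np.log1p(cards)
theta, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares fit

def estimate(width):
    """Predicted cardinality for a new range predicate of the given width."""
    return np.expm1(theta @ np.array([width, 1.0]))

print(round(float(estimate(30.0))))   # interpolates between observed workloads
```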

18.
Multidimensional visualization techniques are invaluable tools for the analysis of structured and unstructured data with variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting the analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
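A similarity-based projection of the kind the tool employs can be sketched with classical multidimensional scaling (a generic stand-in chosen by us; PEx-Image relies on its own projection and point-placement techniques, which the abstract does not detail):

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical MDS: embed points in the plane from a pairwise distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]          # top eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Pretend pairwise feature distances among four images (symmetric, zero diagonal).
D = np.array([[0, 1, 4, 4],
              [1, 0, 4, 4],
              [4, 4, 0, 1],
              [4, 4, 1, 0]], dtype=float)
print(classical_mds(D))   # two tight pairs, far apart: two visual clusters
```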

19.
In recent times, improvements in imaging technology have made available an incredible array of information in image format. While powerful and sophisticated image processing software tools are available to prepare and analyze the data, these tools are complex and cumbersome, requiring significant expertise to properly operate. Thus, in order to extract (e.g., mine or analyze) useful information from the data, a user (in our case a scientist) often must possess both significant science and image processing expertise. This article describes the use of artificial intelligence (AI) planning techniques to represent scientific, image processing and software tool knowledge to automate knowledge discovery and data mining (e.g., science data analysis) of large image databases. In particular, we describe two fielded systems. The Multimission VICAR Planner (MVP) has been deployed since 1995 and currently supports science product generation for the Galileo mission; MVP has reduced the time to fill certain classes of requests from 4 h to 15 min. The Automated SAR Image Processing system (ASIP) was deployed at the Department of Geology at Arizona State University in 1997 to support aeolian science analysis of synthetic aperture radar images; ASIP reduces the number of manual inputs in science product generation ten-fold.
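To make the planning idea concrete, here is a minimal STRIPS-style forward-search sketch (the operators and facts are invented stand-ins for an image-processing pipeline; MVP and ASIP use far richer domain knowledge):

```python
from collections import deque

# Hypothetical operators for an image-processing pipeline: each maps a set of
# precondition facts to a set of added facts.
OPERATORS = {
    "radiometric_correction": ({"raw_image"}, {"corrected_image"}),
    "geometric_registration": ({"corrected_image"}, {"registered_image"}),
    "mosaic":                 ({"registered_image"}, {"mosaicked_image"}),
}

def plan(initial, goal):
    """Breadth-first forward search over operator applications."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add) in OPERATORS.items():
            if pre <= state:
                nxt = frozenset(state | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"raw_image"}, {"mosaicked_image"}))
# -> ['radiometric_correction', 'geometric_registration', 'mosaic']
```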

20.
Mining multi-tag association for image tagging (cited once: 0 self-citations, 1 by others)
Automatic media tagging plays a critical role in modern tag-based media retrieval systems. Existing tagging schemes mostly perform tag assignment based on community-contributed media resources, where the tags are provided by users interactively. However, such social resources usually contain dirty and incomplete tags, which severely limit the performance of these tagging methods. In this paper, we propose a novel automatic image tagging method that aims to discover more complete tags, together with their information importance, for test images. Given an image dataset, all the near-duplicate clusters are discovered. For each near-duplicate cluster, all the tags occurring in the cluster form the cluster’s “document”. Given a test image, we first initialize the candidate tag set from its near-duplicate cluster’s document. The candidate tag set is then expanded by considering the implicit multi-tag associations mined from all the clusters’ documents, where each cluster’s document is regarded as a transaction. To further reduce noisy tags, a visual relevance score is also computed for each candidate tag with respect to the test image, based on a new tag model. Tags with very low scores can be removed from the final tag set. Extensive experiments conducted on a real-world web image dataset, NUS-WIDE, demonstrate the effectiveness of our approach.
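The transaction-based tag expansion can be sketched as follows (the tag data and confidence threshold are toy assumptions of ours; the paper's mining procedure is more elaborate): treat each cluster's tag document as a transaction and add tag B to the candidate set whenever the association A -> B is frequent enough.

```python
from collections import Counter
from itertools import combinations

# Toy "documents": each near-duplicate cluster's bag of user tags,
# treated as one transaction (names are illustrative).
transactions = [
    {"beach", "sea", "sunset", "sand"},
    {"beach", "sea", "surf"},
    {"sunset", "sky", "clouds"},
    {"beach", "sand", "palm"},
]

def expand_tags(candidates, transactions, min_conf=0.5):
    """Add tag B whenever conf(A -> B) = sup(A,B)/sup(A) passes the threshold."""
    sup = Counter()
    pair_sup = Counter()
    for t in transactions:
        for a in t:
            sup[a] += 1
        for a, b in combinations(sorted(t), 2):
            pair_sup[(a, b)] += 1
            pair_sup[(b, a)] += 1
    expanded = set(candidates)
    for a in candidates:
        for b in sup:
            if b not in expanded and sup[a] and pair_sup[(a, b)] / sup[a] >= min_conf:
                expanded.add(b)
    return expanded

print(expand_tags({"beach"}, transactions))  # adds "sea" and "sand"
```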
