Similar Literature
20 similar records found (search time: 31 ms)
1.
We propose a service replication framework for unreliable networks. The service exhibits the same consistency guarantees about the order of execution of operation requests as its non-replicated implementation. Such guarantees are preserved in spite of server replica failure or network failure (either between server replicas or between a client and a server replica), and irrespective of when the failure occurs. Moreover, the service guarantees that when a client sends an 'update' request multiple times, there is no risk that the request is executed multiple times. No hypotheses are made about the timing or retransmission policy of clients; e.g., the very same request might even arrive at different server replicas simultaneously. All of these features make the proposed framework particularly suitable for interaction between remote programs, a scenario that is gaining increasing importance. We discuss a prototype implementation of our replication framework based on Tomcat, a very popular Java-based Web server. The prototype comes in two flavors: replication of HTTP client session data and replication of a counter accessed as a Web service. Copyright © 2005 John Wiley & Sons, Ltd.
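The at-most-once guarantee for retransmitted 'update' requests can be sketched with a per-request-id reply cache; the class and field names below are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of at-most-once execution of 'update' requests.
# A replica remembers the reply for each request id; a retransmission
# gets the cached reply instead of triggering a second execution.

class Replica:
    def __init__(self):
        self.counter = 0          # the replicated service state
        self.executed = {}        # request id -> cached reply

    def handle_update(self, request_id, delta):
        if request_id in self.executed:      # duplicate: replay reply
            return self.executed[request_id]
        self.counter += delta                # execute exactly once
        reply = self.counter
        self.executed[request_id] = reply
        return reply

replica = Replica()
first = replica.handle_update("req-1", 5)
duplicate = replica.handle_update("req-1", 5)   # retransmission
```

Even if the duplicate arrives much later, the state advances only once and both calls see the same reply.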

2.
《Computer Networks》1999,31(11-16):1331-1345
This paper discusses how to augment the World Wide Web with an open hypermedia service (Webvise) that provides structures such as contexts, links, annotations, and guided tours stored in hypermedia databases external to the Web pages. This includes the ability for users to collaboratively create links from parts of HTML Web pages they do not own, and support for creating links to parts of Web pages without writing HTML target tags. The method for locating parts of Web pages can locate parts of pages across frame hierarchies, and it also supports certain repairs of links that break when Web pages are modified. Support for providing links to/from parts of non-HTML data, such as sound and movie files, will be possible via interfaces to plug-ins and Java-based media players. The hypermedia structures are stored in a hypermedia database, developed from the Devise Hypermedia framework, and the service is available on the Web via an ordinary URL. The best user interface for creating and manipulating the structures is currently provided for the Microsoft Internet Explorer 4.x browser through COM integration that utilizes the Explorer's DOM representation of Web pages, but the structures can also be manipulated and used via special Java applets, and a pure proxy-server solution is provided for users who only need to browse the structures. A user can create and use the external structures as 'transparency' layers on top of arbitrary Web pages, and can switch between viewing pages with one or more layers (contexts) of structures or without any external structures imposed on them.

3.
To address the frequent loss of pets, this paper designs and implements an intelligent pet-tracking system based on the MQTT protocol. The system's functionality rests on an MQTT server, a Web server, and an application developed for the Android platform. The embedded device is a WZ-203CS development board integrating the MTK2503 chipset; using the IoT SIM card embedded in the board, it transmits position information to the Android application via the MQTT message transport protocol. The application integrates the AMap navigation functions and uses the received position information to provide voice navigation, route planning, and other features. A tracking system built on this architecture is not limited to pet tracking: other related functions can be added through the system's reserved interfaces, making it both usable and highly extensible.
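As an illustration of the MQTT leg of such a system, the sketch below encodes and decodes a position message; the topic layout and JSON field names are assumptions made for the example, not the system's actual wire format.

```python
import json

# Hypothetical position message a tracker might publish over MQTT and
# the Android client would decode on arrival. Topic layout and field
# names are invented for illustration.
TOPIC = "pets/{device_id}/position"

def encode_position(device_id, lat, lon, ts):
    """Build the (topic, payload) pair for an MQTT publish."""
    topic = TOPIC.format(device_id=device_id)
    payload = json.dumps({"lat": lat, "lon": lon, "ts": ts})
    return topic, payload

def decode_position(payload):
    """Recover the coordinates on the subscriber side."""
    msg = json.loads(payload)
    return msg["lat"], msg["lon"]

topic, payload = encode_position("WZ203CS-01", 30.66, 104.06, 1700000000)
lat, lon = decode_position(payload)
```

A real deployment would hand `topic` and `payload` to an MQTT client's publish call; only the payload format is shown here.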

4.
The World Wide Web (the Web for short) is rapidly becoming an information flood as it continues to grow exponentially. This causes difficulty for users to find relevant pieces of information on the Web. Search engines and robots (spiders) are two popular techniques developed to address this problem. Search engines are indexing facilities over searchable databases. As the Web continues to expand, search engines are becoming redundant because of the large number of Web pages they return for a single search. Robots are similar to search engines; rather than indexing the Web, they traverse ("walk through") the Web, analyzing and storing relevant documents. The main drawback of these robots is their high demand on network resources, which results in networks being overloaded. This paper proposes an alternative way of assisting users in finding information on the Web. Since the Web is made up of many Web servers, instead of searching all the Web servers, we propose that each server does its own housekeeping. A software agent named SiteHelper is designed to act as a housekeeper for the Web server and as a helper for a Web user to find relevant information at a particular site. In order to assist the Web user in finding relevant information at the local site, SiteHelper interactively and incrementally learns about the Web user's areas of interest and aids them accordingly. To provide such intelligent capabilities, SiteHelper deploys enhanced HCV with incremental learning facilities as its learning and inference engines.
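SiteHelper's real learning engine is an enhanced HCV with incremental facilities; as a stand-in, the toy profile below shows only the incremental-learning idea of updating a user-interest model page by page.

```python
from collections import Counter

# Toy sketch in the spirit of SiteHelper: a keyword-count profile is
# updated incrementally from pages the user visits, then used to rank
# candidate pages at the site. The real system uses the HCV induction
# algorithm, which this simple profile merely stands in for.

class InterestProfile:
    def __init__(self):
        self.weights = Counter()

    def observe(self, page_keywords):
        """Incrementally update the profile from one visited page."""
        self.weights.update(page_keywords)

    def score(self, page_keywords):
        return sum(self.weights[k] for k in page_keywords)

profile = InterestProfile()
profile.observe(["java", "replication"])
profile.observe(["java", "servlet"])
pages = {"a.html": ["java"], "b.html": ["cooking"]}
best = max(pages, key=lambda p: profile.score(pages[p]))
```

Each `observe` call refines the model without retraining from scratch, which is the point of the incremental approach.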

5.
We describe the design of a system for fast and reliable HTTP service which we call Web++. Web++ achieves high reliability by dynamically replicating web data among multiple web servers. Web++ selects the available server that is expected to provide the fastest response time. Furthermore, Web++ guarantees data delivery given that at least one server containing the requested data is available. After detecting a server failure, Web++ client requests are satisfied transparently to the user by another server. Furthermore, the Web++ architecture is flexible enough for implementing additional performance optimizations. We describe the implementation of one such optimization, batch resource transmission, whereby all resources embedded in an HTML page that are not cached by the client are sent to the client in a single response. Web++ is built on top of the standard HTTP protocol and does not require any changes to existing web browsers or the installation of any software on the client side. In particular, Web++ clients are dynamically downloaded to web browsers as signed Java applets. We implemented a Web++ prototype; performance experiments indicate that the Web++ system with 3 servers improves the response time perceived by clients on average by 36.6%, and in many cases by as much as 59%, when compared with the current web performance. In addition, we show that batch resource transmission can improve the response time on average by 39% for clients with fast network connections and 21% for clients with 56 Kbps modem connections. This revised version was published online in August 2006 with corrections to the Cover Date.
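Batch resource transmission can be sketched as follows: the client reports what it already caches, and the server bundles every other resource embedded in the page into one response. Resource names and the page table are invented for the example.

```python
# Hypothetical sketch of batch resource transmission. The server knows
# which resources each HTML page embeds; given the client's cache
# inventory, it returns all missing resources in a single bundle
# instead of forcing one round trip per resource.

PAGE_RESOURCES = {
    "index.html": ["logo.png", "style.css", "banner.gif"],
}
STORE = {"logo.png": b"PNG...", "style.css": b"body{}", "banner.gif": b"GIF..."}

def batch_response(page, client_cached):
    """Return {name: bytes} for embedded resources the client lacks."""
    needed = [r for r in PAGE_RESOURCES[page] if r not in client_cached]
    return {r: STORE[r] for r in needed}

bundle = batch_response("index.html", client_cached={"logo.png"})
```

The saving comes from collapsing several request/response round trips into one, which matters most on high-latency links.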

6.
7.
Market research surveys report that 75% of hacks occur at the application layer. Of the multiple vulnerabilities that exist in Web application software, proper authentication of the client and the server to each other is fundamental to the security of the system. In the current scenario, this is managed through password-based client authentication and PKI-based server authentication. Unresolved vulnerabilities remain in this system due to the misuse of clients' passwords (impersonation) by those managing the servers, and the clients' trust of servers, based on certificates issued by an increasing number of certification authorities, is questionable in terms of validity and freshness. For proper authentication in Web applications, we need to verify two conditions: 1) the binding of the identity of the entity with the publicly known name or key, and 2) that the entity possesses the private key corresponding to the identified public key. In this paper, we use the elliptic curve discrete-log-problem-based version of the classical zero-knowledge protocol to prove condition 2, and modifications of existing schemes to prove condition 1. We have implemented a prototype of the solution and performed the security analysis required to satisfy the security objectives.
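Condition 2, possession of the private key, can be proved with a Schnorr-style zero-knowledge identification protocol. The paper works in the elliptic-curve discrete-log setting; the sketch below substitutes a toy multiplicative group modulo a small prime, an assumption made purely for brevity (never use such small parameters in practice).

```python
import random

# Schnorr-style zero-knowledge proof that the prover holds the private
# key x for the public key y = g^x mod p. Here g = 2 has prime order
# q = 11 modulo p = 23; these toy parameters stand in for the paper's
# elliptic-curve group.

p, q, g = 23, 11, 2
x = 7                         # prover's private key
y = pow(g, x, p)              # publicly known key

def prove_and_verify():
    r = random.randrange(q)           # prover's one-time nonce
    t = pow(g, r, p)                  # commitment sent to verifier
    c = random.randrange(q)           # verifier's random challenge
    s = (r + c * x) % q               # prover's response
    # Verifier checks g^s == t * y^c without ever seeing x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

ok = all(prove_and_verify() for _ in range(20))
```

The response `s` reveals nothing about `x` because the nonce `r` masks it, yet the check only passes when the prover actually knows `x`.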

8.
The Web is increasingly used for critical applications and services. We present a client-transparent mechanism, called CoRAL, that provides high reliability and availability for Web service. CoRAL provides fault tolerance even for requests being processed at the time of server failure. The scheme does not require deterministic servers and can thus handle dynamic content. CoRAL actively replicates the TCP connection state while maintaining logs of HTTP requests and replies. In the event of a primary server failure, active client connections fail over to a spare, where their processing continues seamlessly. We describe key aspects of the design and implementation as well as several performance optimizations. Measurements of system overhead, failover performance, and preliminary validation using fault injection are presented.

9.
This paper describes the tools which are being evaluated at the University of Leeds for use by information providers on the World Wide Web. The paper also gives an introduction to the World Wide Web's client/server architecture and the Hypertext Markup Language (HTML). Pointers are given to further sources of information that will assist information providers and their trainers. The paper is intended for new information providers on the World Wide Web and for people who are involved in their training.

10.
The ubiquity of Web browsers makes them an ideal generic front end for simple client-server systems. A very suitable area of application is controlling embedded systems, such as network printers, where supporting standard Web browsers is a cost-effective and convenient alternative to developing custom client software for remote administration from different platforms. This paper describes the design and implementation of a flexible communication server to be run directly on the embedded system. It supports different protocols to allow remote access, including HTTP. Thus, the embedded system can be accessed with any Web browser. Its state is represented as a set of Web pages containing dynamically generated information. Java applets included in these Web pages can connect back to the server to subscribe to live data feeds for real-time visualization of the embedded system's state. A GUI builder implemented as a Java applet can be used to customize the visual appearance of these applets.

11.
12.
This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out information using various communication tools. We call this behavior social navigation, as it is based on communication and interaction with other users, be it through email or any other means of communication. Social navigation phenomena are quite common, although most current tools (like Web browsers or email clients) offer very little support for them. We describe why social navigation is useful and how it can be supported better in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical of social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a Web client. The other is a prototype of a Web hotlist organizer called Vortex. We use both systems to describe fundamental principles of social navigation systems.

13.
Finding specific information in the World-Wide Web (WWW, or Web for short) is becoming increasingly difficult because of the rapid growth of the Web and the diversity of the information offered through it. Hypertext in general is ill-suited for information retrieval, as it is designed for stepwise exploration. To help readers find specific information quickly, specific overview documents are often included in the hypertext. Hypertext systems often provide simple searching tools, such as full-text search or title search, that mostly ignore the "hyper-structure" formed by the links. In the WWW, finding information is further complicated by its distributed nature. Navigation, often via overview documents, is still the predominant method of finding one's way around the Web. Several searching tools have been developed, basically of two types:
  • A gateway, offering (limited) search operations on small or large parts of the WWW, using a pre-compiled database. The database is often built by an automated Web scanner (a "robot").
  • A client-based search tool that does automated navigation, thereby working more or less like a browsing user, but much faster and following an optimized strategy.
This paper highlights the properties and implementation of a client-based search tool called the "fish-search" algorithm, and compares it to other approaches. The fish-search, implemented on top of Mosaic for X, offers an open-ended selection of search criteria. Client-based searching has some definite drawbacks: slow speed and high network resource consumption. The paper shows how combining the fish-search with a cache greatly reduces these problems. The "Lagoon" cache program is presented. Caches can call each other, currently only to further reduce network traffic. By moving the algorithm into the cache program, the calculation of the answer to a search request can be distributed among the caching servers.
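Under stated assumptions (an in-memory page graph standing in for HTTP fetches, plain keyword matching as the relevance test), the core fish-search idea, where relevant pages spawn deeper exploration than irrelevant ones, might look like:

```python
from collections import deque

# Simplified fish-search sketch: children of relevant pages inherit a
# fresh exploration depth ("fish that find food breed"), children of
# irrelevant pages get a shrinking depth budget. PAGES maps a URL to
# (page text, outgoing links) and stands in for real fetches.

PAGES = {
    "start": ("intro", ["a", "b"]),
    "a": ("search algorithms", ["c"]),
    "b": ("cooking", ["d"]),
    "c": ("search heuristics", []),
    "d": ("search", []),
}

def fish_search(root, keyword, depth_relevant=3):
    hits, queue, seen = [], deque([(root, depth_relevant)]), {root}
    while queue:
        url, depth = queue.popleft()
        text, links = PAGES[url]
        relevant = keyword in text
        if relevant:
            hits.append(url)
        # Relevant pages reset the depth budget; others spend it down.
        child_depth = depth_relevant if relevant else depth - 1
        if child_depth > 0:
            for link in links:
                if link not in seen:
                    seen.add(link)
                    queue.append((link, child_depth))
    return hits

found = fish_search("start", "search")
```

The real algorithm also ranks links by an open-ended relevance score; this sketch keeps only the depth-budget mechanism.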

14.
Private information retrieval (PIR) is a cryptographic tool that lets a user fetch information from a remote database server without the server learning what was fetched. PIR schemes fall into two basic types: information-theoretic PIR (IT-PIR) and computational PIR (C-PIR). IT-PIR schemes require that the servers do not collude; once the servers collude, the user's privacy can no longer be guaranteed, and this collusion problem has long lacked a good solution. The emergence of Bitcoin and blockchains offers a new way to address fairness and trust. In this paper, we innovatively use a blockchain to handle the collusion problem in IT-PIR, proposing a Bitcoin-based PIR payment protocol in which the client pays the service fee through Bitcoin transactions. We use Bitcoin scripts to control the conditions under which a transaction can be redeemed, so that servers which collude with each other suffer a financial loss. In this way, the payment protocol can reduce the likelihood of collusion to a certain extent.
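The classic two-server XOR scheme illustrates the IT-PIR setting and its non-collusion assumption, the very assumption the proposed Bitcoin payment protocol tries to enforce economically:

```python
import secrets

# Two-server XOR-based IT-PIR sketch. Each server holds a full replica
# of the database and sees only a uniformly random subset query, so in
# isolation it learns nothing about the wanted index; combining the two
# queries (collusion) would reveal it, since they differ in one bit.

DB = [13, 7, 42, 99]                     # replicated on both servers

def server_answer(db, query_bits):
    """XOR of all records whose query bit is 1."""
    acc = 0
    for rec, bit in zip(db, query_bits):
        if bit:
            acc ^= rec
    return acc

def retrieve(index, n=len(DB)):
    q1 = [secrets.randbelow(2) for _ in range(n)]   # random subset
    q2 = list(q1)
    q2[index] ^= 1                                   # flip wanted bit
    # XOR of the two answers cancels everything except DB[index].
    return server_answer(DB, q1) ^ server_answer(DB, q2)

value = retrieve(2)
```

Because `q1` alone is uniform random, a single honest server gains no information; the scheme breaks exactly when the servers pool their queries.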

15.
Publishing spatial information based on the Google Maps API    (Cited by: 2; self-citations: 1; others: 1)
周宇林, 付忠良. 《计算机应用》 (Journal of Computer Applications), 2011, 31(5): 1450-1452
Traditional online map services have a limitation: they support only browsing and querying by the client. To let clients enter data themselves, and to have the server receive the spatial information and publish it to the online map, a new technique for building a spatial-information publishing system is proposed. The technique is based on a B/S (browser/server) architecture: by extending the event listeners of the Google Maps API, the geographic coordinates of a marker are captured automatically; the server side reads the entered data into a custom XML file, parses that file with a geocoding function, and places the data containing position information as markers on the Google map, thereby publishing local attribute data on a Web map. A campus navigation system for Wuhan University was successfully developed with this method, verifying its feasibility.

16.
Because current content-addressable storage (CAS) systems face serious problems in practice, this paper proposes a CAS storage interface built on the standard HTTP protocol: file operations are mapped onto the label semantics of URI resources, enabling Web-based file access and manipulation. With the help of a Web server and a database, a powerful CAS client is built; file objects are described with a metadata model combined with the database, and can be browsed and searched through a Web interface, yielding an object storage system with strong content-navigation and search capabilities.
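The hash-as-URI mapping behind such an interface might be sketched as follows; the `/cas/` path layout and in-memory store are assumptions made for illustration.

```python
import hashlib

# Sketch of content-addressed storage over an HTTP-style namespace: an
# object's address is the digest of its bytes, exposed as a URI path so
# plain GET/PUT semantics can serve as the storage interface.

STORE = {}   # stand-in for the server's object store

def cas_put(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    STORE[digest] = data
    return f"/cas/{digest}"            # URI a client would GET later

def cas_get(uri: str) -> bytes:
    digest = uri.rsplit("/", 1)[-1]
    data = STORE[digest]
    # Content addressing is self-verifying: re-hash on retrieval.
    assert hashlib.sha256(data).hexdigest() == digest
    return data

uri = cas_put(b"report.pdf contents")
round_trip = cas_get(uri)
```

A nice side effect of this scheme is automatic deduplication: identical bytes always map to the same URI.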

17.
Building a local-information proxy server with Squid    (Cited by: 1; self-citations: 0; others: 1)
Modelled on the commercial-insertion pattern of television broadcasting, we envision an active proxy-centre architecture as a means of delivering local information efficiently: as Web pages are retrieved through a dynamic proxy server that cooperates with the content server, local information is flexibly inserted into the pages as needed. This paper describes the design and the functions of information delivery on a Squid-based proxy server. The scheme operates entirely transparently to both Web clients and content providers, and is practically feasible.
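The insertion step could look like the sketch below: the proxy splices a locally generated fragment into each fetched page before returning it to the browser. The marker position and fragment are illustrative assumptions, not the paper's actual mechanism.

```python
# Hypothetical sketch of the proxy-side insertion: a locally generated
# HTML fragment is spliced into the fetched page just before </body>,
# transparently to both the browser and the origin server.

def insert_local_info(html: str, fragment: str) -> str:
    """Insert fragment before </body>, or append if the tag is absent."""
    marker = "</body>"
    if marker in html:
        return html.replace(marker, fragment + marker, 1)
    return html + fragment

page = "<html><body><p>news</p></body></html>"
out = insert_local_info(page, "<div>local weather: sunny</div>")
```

In a Squid deployment the same logic would run in a content-adaptation hook on the response body; neither endpoint needs to change.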

18.
Applying Web Services to distributed database queries    (Cited by: 3; self-citations: 0; others: 3)
安蓓, 赵政. 《微处理机》 (Microprocessors), 2004, 25(5): 55-58
This paper examines the basic characteristics of Web Service theory and technology, introduces related technologies such as XML and SOAP, and uses their cross-platform interoperability to build a query model for a distributed personnel-information database. Because Web Services make client-side and server-side operations transparent, users only enter query conditions at the client; they need not know the location or structure of the underlying databases, nor how the server is implemented. Client and server are loosely coupled: the server exposes the interfaces of remote objects, and the client uses those interfaces as if they were local objects, achieving interoperability across platforms. This confirms that Web Services provide strong supporting technology for distributed database queries.

19.
Mobile surveillance is regarded as one of the Internet applications that has recently attracted much attention. However, the time and cost of dealing with heterogeneous platforms and proprietary protocols are a burden when developing such systems and expanding their services. In this paper, we present a framework of mobile surveillance service for smartphone users. It includes the design and implementation of a video server and a mobile client called smartphone watch. A component-based architecture is employed for both the server and the client for easy extension and adaptation. We also employ the well-known standard Web protocol HTTP, providing higher compatibility and portability than a proprietary protocol would. Three different video transmission modes are provided for efficient usage of the limited bandwidth resource. We demonstrate our approach via real experiments on a commercial smartphone.

20.
The representation of large subsets of the World Wide Web in the form of a directed graph has been extensively used to analyze the structure, behavior, and evolution of those so-called Web graphs. However, interesting Web graphs are very large, and their classical representations do not fit into the main memory of typical computers, whereas the required graph algorithms perform inefficiently on secondary memory. Compressed graph representations drastically reduce their space requirements while allowing their efficient navigation in compressed form. While the most basic navigation operation is to retrieve the successors of a node, several important Web graph algorithms require support for extended queries, such as finding the predecessors of a node, checking the presence of a link, or retrieving links between ranges of nodes. These are seldom supported by compressed graph representations. This paper presents the k2-tree, a novel Web graph representation based on a compact tree structure that takes advantage of the large empty areas of the adjacency matrix of the graph. The representation not only retrieves successors and predecessors in symmetric fashion, but is also particularly efficient at checking for specific links between nodes or between ranges of nodes, or at listing the links between ranges. Compared to the best representations in the literature supporting successor and predecessor queries, our technique offers the least space usage (1–3 bits per link) while supporting fast navigation to predecessors and successors (28 μs per neighbor retrieved) and sharply outperforming the others on the extended queries. The representation is also of general interest and can be used to compress other kinds of graphs and data structures.
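A toy version of the k2-tree idea for k = 2 (recursive quadrant decomposition with all-zero quadrants pruned) can illustrate the link-check navigation; note that the real structure stores the tree as compact bitmaps, not nested lists.

```python
# Toy k2-tree sketch for k = 2: the adjacency matrix is split into four
# quadrants, an all-zero quadrant collapses to a single 0, and a link
# check descends only through nonempty quadrants, exploiting the large
# empty areas of sparse Web graphs.

def build(matrix):
    n = len(matrix)                     # n must be a power of two
    if n == 1:
        return matrix[0][0]
    h = n // 2
    quads = [build([row[c:c + h] for row in matrix[r:r + h]])
             for r in (0, h) for c in (0, h)]
    return 0 if all(q == 0 for q in quads) else quads

def has_link(node, n, row, col):
    if node == 0:                        # pruned empty quadrant
        return False
    if n == 1:
        return node == 1
    h = n // 2
    child = (row >= h) * 2 + (col >= h)  # which quadrant to enter
    return has_link(node[child], h, row % h, col % h)

M = [[0, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 1, 0]]
tree = build(M)
```

Each query touches only one root-to-leaf path, so its cost is logarithmic in the matrix side rather than linear in the row length.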


