Similar Documents
 Found 20 similar documents (search time: 734 ms)
1.
U.S. Patent 6,499,026, Rivette, et al., December 24, 2002

2.
3.
Computer Communications, 1987, 10(5): 256–262
An integrated broadband video and data network has been installed by the University of London to link seven isolated sites across London, allowing simultaneous lectures, tutorials and meetings. British Telecom provided the hardware, while the software was developed by the University. It was decided that a switched-star configuration would offer the best service to the individual sites, so that each site can gain access to any or all of the other sites. Each fibre link in the network has four video channels and a high-speed digital data channel.

4.
In network data analysis, research into how accurately an estimation model represents the underlying population is unavoidable. As network speeds increase, so will the attack methods on future-generation communication networks. To counter this wide variety of attacks, intrusion detection systems and intrusion prevention systems also need a wide variety of countermeasures. An effective method to compare and analyze network data is therefore needed: only when such a method is effective can the verification of intrusion detection and intrusion prevention systems be trusted. In this paper, we use extractable standard protocol information from network data to compare and analyze the data of MIT Lincoln Lab with the data of KDD CUP 99 (modeled on the Lincoln Lab data). Correspondence analysis and other statistical methods are used to compare the data.
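As a rough illustration of the comparison step, the following sketch runs a textbook correspondence analysis on a small contingency table via the singular value decomposition; the protocol categories and counts are placeholders, not the Lincoln Lab or KDD CUP 99 figures.

```python
# Minimal correspondence-analysis sketch on a contingency table of
# protocol-field counts from two network datasets (values synthetic).
import numpy as np

# Rows: datasets; columns: hypothetical protocol categories.
counts = np.array([[120, 45, 30, 5],
                   [100, 60, 25, 15]], dtype=float)

P = counts / counts.sum()                 # correspondence matrix
r = P.sum(axis=1)                         # row masses
c = P.sum(axis=0)                         # column masses
# Standardized residuals: (P - r c^T) / sqrt(r_i c_j)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal row coordinates: how close the two datasets lie along
# the leading inertia axes.
row_coords = (U * sv) / np.sqrt(r)[:, None]
print("singular values:", sv)
print("row coordinates:\n", row_coords)
```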

5.
Methods and apparatuses for providing cryptographic assurance, based on ranges, as to whether a particular data item is on a list. According to one computer-implemented method, the items on the list are sorted and ranges are derived from adjacent pairs of data items on the list. Next, cryptographically manipulated data is generated from the plurality of ranges. At least parts of the cryptographically manipulated data are transmitted onto a network for use in cryptographically demonstrating whether any given data item is on the list. According to another computer-implemented method, a request message is received asking whether a given data item is on a list of data items. In response, a range is selected that is derived from the pair of data items on the list that define the smallest range including the given data item. A response message is transmitted that cryptographically demonstrates whether the given data item is on the list, using cryptographically manipulated data derived from the range. Acco
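A hedged sketch of the range idea in Python: sort the list, derive ranges from adjacent items, and commit to each range with a hash. A real system would place these commitments in an authenticated structure such as a Merkle tree and sign the root; the helper names below are illustrative, not taken from the patent.

```python
import hashlib
from bisect import bisect_right

def commit(low: int, high: int) -> str:
    # Hash commitment to one (low, high) range.
    return hashlib.sha256(f"{low}:{high}".encode()).hexdigest()

def build_ranges(items):
    s = sorted(items)
    # Adjacent pairs define the "gaps": each range asserts that nothing
    # on the list lies strictly between its endpoints.
    return [(a, b, commit(a, b)) for a, b in zip(s, s[1:])]

def prove(ranges, sorted_items, query):
    # Assumes min(items) <= query <= max(items) for off-list queries.
    if query in sorted_items:                  # membership: query is an endpoint
        return ("on-list", query)
    i = bisect_right(sorted_items, query) - 1
    low, high, digest = ranges[i]              # smallest range containing query
    return ("off-list", (low, high, digest))

items = [3, 17, 42, 99]
ranges = build_ranges(items)
print(prove(ranges, sorted(items), 42))   # ('on-list', 42)
print(prove(ranges, sorted(items), 50))   # ('off-list', (42, 99, ...))
```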

6.
We propose an integrated learning and planning framework that leverages knowledge from a human user along with prior information about the environment to generate trajectories for scientific data collection in marine environments. The proposed framework combines principles from probabilistic planning with nonparametric uncertainty modeling to refine trajectories for execution by autonomous vehicles. These trajectories are informed by a utility function learned from the human operator's implicit preferences using a modified coactive learning algorithm. The resulting techniques allow for user-specified trajectories to be modified for reduced risk of collision and increased reliability. We test our approach in two marine monitoring domains and show that the proposed framework mimics human-planned trajectories while also reducing the risk of operation. This work provides insight into the tools necessary for combining human input with vehicle navigation to provide persistent autonomy.
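A minimal sketch of the coactive-learning flavor of the utility update, under the standard assumption that the weight vector moves toward the features of the operator's improved trajectory; the trajectory features here are hypothetical, not the paper's.

```python
# Coactive-learning-style utility update: when the operator improves a
# proposed trajectory, the weights shift toward the improved features.
import numpy as np

def features(traj):
    # Hypothetical trajectory features: summed information gain,
    # worst-case collision risk, and path length.
    traj = np.asarray(traj, dtype=float)
    return np.array([traj[:, 0].sum(), traj[:, 1].max(), len(traj)])

def coactive_update(w, proposed, corrected):
    # Perceptron-style update: w += phi(corrected) - phi(proposed).
    return w + features(corrected) - features(proposed)

w = np.zeros(3)
proposed  = [[0.2, 0.9], [0.3, 0.8]]        # high-risk plan
corrected = [[0.25, 0.2], [0.35, 0.1]]      # operator's safer edit
w = coactive_update(w, proposed, corrected)
print("learned utility weights:", w)        # risk weight pushed negative
```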

7.
This article demonstrates that the Bellcore OSCA architecture can serve as the basis for a meta-architecture for software architectures that must support interoperability between functionality that traditionally has resided in operations systems and functionality that traditionally has resided in network elements (NEs) (i.e., interoperability between operations functionality and network functionality). The need for this interoperability is driven by telephone company business needs such as the need for customers to access operations capabilities spanning operations systems and NEs, the need for new service offerings to span operations systems and NEs, the need for a flexible environment for service development, and the need to manage all corporate data as a company resource. As a result, it is becoming beneficial to apply interoperability requirements to the network functionality that interfaces with operations systems, and it is therefore reasonable to apply the OSCA architecture to network functionality. This article applies the OSCA architecture interoperability principles of separation of concerns to current and emerging network functionality. It demonstrates that this functionality can be partitioned among the three OSCA architecture layers of corporate data, processing, and user, and that there are a number of benefits to applying the OSCA interoperability principles to network functionality.

8.
The Support Vector Machine (SVM) is becoming a popular alternative to traditional image classification methods because it makes accurate classification possible from small training samples. Nevertheless, concerns regarding SVM parameterization and computational effort have arisen. This Letter is an evaluation of an automated SVM-based method for image classification. The method is applied to a land-cover classification experiment using a hyperspectral dataset. The results suggest that SVM can be parameterized to obtain accurate results while remaining computationally efficient. However, automation of parameter tuning does not solve all SVM problems. Interestingly, the method produces fuzzy image regions whose contextual properties may be potentially useful for improving the image classification process.
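As one plausible reading of "automated parameterization", the sketch below tunes an RBF-kernel SVM by cross-validated grid search in scikit-learn; the synthetic data stands in for the hyperspectral samples used in the Letter.

```python
# Cross-validated search over the RBF kernel's C and gamma: the kind
# of parameter tuning the Letter automates.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                    cv=5)
grid.fit(X_tr, y_tr)
print("best parameters:", grid.best_params_)
print("held-out accuracy:", grid.score(X_te, y_te))
```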

9.
Burnt area data, derived from National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) imagery, are validated in 11 regions of arid and semi-arid Australia, using three separate Landsat-derived burnt area data sets. Mapping accuracy of burnt extent is highly variable between areas and from year to year within the same area. Where there are corresponding patches in the AVHRR and Landsat data sets, the fit is good. However, the AVHRR data set misses some large patches. Overall, 63% of the Landsat burnt area is also mapped in the AVHRR data set, but this varies from 0% to 89% at different sites. In total, 81% of the AVHRR burnt area data are matched in the Landsat data set, but the range is 0% to 94%. The lower match rates (<50%) generally occur when little area has burnt (0–500 km²), with figures generally better in the more northerly sites. Results of regression analysis based on 10 km × 10 km cells are also variable, with R² values ranging from 0.37 (n = 116) to 0.94 (n = 85). For the Tanami Desert scene, R² varies from 0.41 to 0.61 (n = 368) over three separate years. Combining the data results in an R² of 0.60 (n = 1315) (or 0.56 with the intercept set to 0). The slopes of the regressions indicate that mapping the burnt area from AVHRR imagery underestimates the 'true' extent of burning for all scenes and years. Differences in mapping accuracy between low and high fire years are examined, as are the influences of soil, vegetation, land use and tenure on mapping accuracy. Issues relevant to mapping fire in arid and semi-arid environments and discontinuous fuels are highlighted.
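The per-cell regression can be reproduced in miniature as follows; the burnt-area values are synthetic, chosen only so that the fitted slope falls below 1, echoing the underestimation reported above.

```python
# Per-cell regression sketch: AVHRR-mapped burnt area regressed
# against Landsat-derived 'true' burnt area in 10 km grid cells.
import numpy as np

rng = np.random.default_rng(0)
landsat = rng.uniform(0, 100, 200)                 # 'true' burnt km^2 per cell
avhrr = 0.7 * landsat + rng.normal(0, 8, 200)      # underestimating sensor

slope, intercept = np.polyfit(landsat, avhrr, 1)
r2 = np.corrcoef(landsat, avhrr)[0, 1] ** 2
print(f"slope={slope:.2f} intercept={intercept:.2f} R^2={r2:.2f}")
# A slope below 1 reproduces the finding that AVHRR mapping
# underestimates the true burnt extent.
```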

10.
The Big Data era has descended on many communities, from governments and e-commerce to health organizations. Information systems designers face great opportunities and challenges in developing a holistic big data research approach for the new analytics-savvy generation. In addition, business intelligence is widely used in the business community and can thus leverage the opportunities of abundant data and domain-specific analytics in many critical areas. The aim of this paper is to assess the relevance of these trends in the current business context through evidence-based documentation of current and emerging applications as well as their wider business implications. In this paper, we use BigML to examine how two social information channels (i.e., friends-based and opinion-leaders-based social information) influence consumer purchase decisions on social commerce sites. We undertake an empirical study that integrates a framework and a theoretical model for big data analysis, and we demonstrate that big data analytics can be successfully combined with a theoretical model to support more robust and effective consumer purchase decisions. The results offer important and interesting insights into IS research and practice.

11.
The race for innovation has turned into a race for data. Rapid developments of new technologies, especially in the field of artificial intelligence, are accompanied by new ways of accessing, integrating, and analyzing sensitive personal data. Examples include financial transactions, social network activities, location traces, and medical records. As a consequence, adequate and careful privacy management has become a significant challenge. New data protection regulations, for example in the EU and China, are direct responses to these developments. Data anonymization is an important building block of data protection concepts, as it reduces privacy risks by altering data. The development of anonymization tools involves significant challenges, however. For instance, the effectiveness of different anonymization techniques depends on context, so tools need to support a large set of methods to ensure that the usefulness of data is not overly affected by risk-reducing transformations. In spite of these requirements, existing solutions typically support only a small set of methods. In this work, we describe how we have extended an open source data anonymization tool to support almost arbitrary combinations of a wide range of techniques in a scalable manner. We then review the spectrum of methods supported and discuss their compatibility within the novel framework. The results of an extensive experimental comparison show that our approach outperforms related solutions in terms of scalability and output data quality while supporting a much broader range of techniques. Finally, we discuss practical experiences with ARX and present remaining issues and challenges ahead.

12.
With many remote-sensing instruments onboard satellites exploring the Earth's atmosphere, most data are processed into gridded daily maps. However, differences in the original spatial, temporal, and spectral resolution, as well as in format, structure, and temporal and spatial coverage, make data merging, or fusion, difficult. The NASA Goddard Earth Sciences Data and Information Services Center (GES-DISC) has archived several data products from various sensors in different formats, structures, and multi-temporal and spatial scales for ocean, land, and atmosphere. In this investigation, using Earth science data sets from multiple sources, an attempt was made to develop an optimal technique to merge the atmospheric products and provide interactive, online analysis tools for the user community. The merged/fused measurements provide a more comprehensive view of the atmosphere and improve coverage and accuracy compared with a single-instrument dataset. This paper describes ways of merging/fusing several NASA Earth Observing System (EOS) remote-sensing datasets available at GES-DISC. The applicability of various methods was investigated for merging total column ozone, in order to implement these methods in Giovanni, the online interactive analysis tool developed by GES-DISC. Ozone data fusion of Moderate Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua Level-3 daily data sets was conducted, and the results were found to provide better coverage. Weighted averaging of the Terra and Aqua data sets, with subsequent interpolation through the remaining gaps using Optimal Interpolation (OI), was also conducted and found to produce better results. Ozone Monitoring Instrument (OMI) total column ozone is reliable and provides better results than the Atmospheric Infrared Sounder (AIRS) and MODIS; nevertheless, the agreement among these instruments is reasonable. The correlation is high (0.88) between OMI and AIRS total column ozone, while the correlation between OMI and the MODIS Terra/Aqua fused total column ozone is 0.79.
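A simple sketch of the weighted-averaging step, assuming two Level-3 grids with gaps marked as NaN; the subsequent Optimal Interpolation pass is not reproduced here.

```python
# Weighted merge of two gridded products: average where both exist,
# fall back to whichever is present, leave double gaps as NaN for a
# later interpolation pass.
import numpy as np

def merge(terra, aqua, w_terra=0.5):
    return np.where(np.isnan(terra), aqua,
           np.where(np.isnan(aqua), terra,
                    w_terra * terra + (1 - w_terra) * aqua))

terra = np.array([[300., np.nan], [310., 305.]])
aqua  = np.array([[298., 302.],  [np.nan, 307.]])
print(merge(terra, aqua))   # [[299. 302.] [310. 306.]]
```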

13.
This paper analyses the impacts of information and communication technology (ICT) solutions on labor productivity, i.e. revenue per employee. Based on cross-sectional data on 1955 European firms in 2005 and a linear regression model derived from the microeconomic theory of production, the impacts of six common ICT solutions used in electronic commerce (e-commerce) are shown to be non-negligible. According to the regression analysis, Internet access, standardized data exchange with trading partners, an enterprise resource planning (ERP) system, and a customer relationship management (CRM) system contribute significant increases in labor productivity, whereas a website on the Internet or a supply chain management (SCM) system does not. In particular, Internet access has a significant effect on labor productivity, while a website on the Internet has an insignificant effect.
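The cross-sectional model can be sketched as an ordinary least squares regression of log labor productivity on binary ICT-adoption indicators; the data below are simulated, and the assumed effect sizes are illustrative rather than the paper's estimates.

```python
# OLS sketch: log revenue per employee on ICT-adoption dummies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
# Columns: Internet access, standardized data exchange, ERP, CRM.
ict = rng.integers(0, 2, size=(n, 4)).astype(float)
effects = np.array([0.30, 0.15, 0.10, 0.08])   # assumed true effects
log_prod = 10 + ict @ effects + rng.normal(0, 0.5, n)

model = sm.OLS(log_prod, sm.add_constant(ict)).fit()
print(model.params)     # per-solution productivity contributions
print(model.pvalues)    # which solutions are statistically significant
```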

14.
Remote-sensing change detection based on multitemporal, multispectral, and multisensor imagery has been developed over several decades and provides timely and comprehensive information for planning and decision-making. In practice, however, it is still difficult to select a suitable change-detection method, especially in urban areas, because of the impacts of complex factors. This paper presents a new method using multitemporal and multisensor data (SPOT-5 and Landsat data) to detect land-use changes in an urban environment based on principal-component analysis (PCA) and hybrid classification methods. After geometric correction and radiometric normalization, PCA was used to enhance the change information in the stacked multisensor data. Then, a hybrid classifier combining unsupervised and supervised classification was applied to identify and quantify land-use changes. Finally, stratified random and user-defined plot sampling methods were combined to obtain a total of 966 reference points for accuracy assessment. Although errors and confusion exist, the method shows satisfying results, with an overall accuracy of 89.54% and a kappa coefficient of 0.88. When compared with the post-classification method, PCA-based change detection also showed better accuracy in terms of overall, producer's, and user's accuracy and the kappa index. The results suggest that significant land-use changes occurred in Hangzhou City from 2000 to 2003, which may be related to rapid economic development and urban expansion. It is further indicated that most changes occurred in cropland areas due to urban encroachment.
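A minimal sketch of PCA-based change enhancement on a stacked two-date image, assuming the bands are flattened to a pixels-by-bands matrix; the synthetic "changed" pixels stand in for real land-use change.

```python
# PCA on a stacked two-date band matrix: change information tends to
# concentrate in a few components, which a hybrid (unsupervised +
# supervised) classifier would then label.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pixels = 1000
date1 = rng.normal(0, 1, (n_pixels, 4))    # 4 bands, time 1
change = np.zeros((n_pixels, 4))
change[:50] = 2                            # 5% of pixels changed
date2 = date1 + change + rng.normal(0, 0.1, (n_pixels, 4))

stacked = np.hstack([date1, date2])        # 8-band stack
pcs = PCA(n_components=8).fit_transform(stacked)
print("component means, changed pixels:", pcs[:50].mean(axis=0).round(2))
print("component means, stable pixels: ", pcs[50:].mean(axis=0).round(2))
```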

15.
This paper proposes a method of feature selection using Bayes' theorem. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two (binary) attributes is determined from the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), the two attributes are considered independent of each other; otherwise they are dependent, and one of them can be removed, reducing the number of attributes. The process is repeated over all combinations of attributes. The paper also evaluates the approach by comparing it with existing feature selection algorithms on 8 datasets from the University of California, Irvine (UCI) machine learning repository. The proposed method shows better results in terms of the number of selected features, classification accuracy, and running time than most existing algorithms.
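One possible reading of the dependence test, sketched for a pair of binary attributes: estimate the majority class decision for opposing joint values and flag the pair as dependent when those decisions differ. This is an illustrative interpretation, not the authors' code.

```python
# Illustrative dependence test for a pair of binary attributes, based
# on whether opposing attribute-value combinations lead to opposing
# majority classification decisions.
from collections import Counter

def dependent(rows, i, j):
    # rows: (attribute_tuple, label) pairs with binary attributes/labels.
    counts = Counter(((r[0][i], r[0][j]), r[1]) for r in rows)
    def majority(pair):
        pos, neg = counts[(pair, 1)], counts[(pair, 0)]
        return None if pos == neg else int(pos > neg)
    # Opposing attribute combinations: (0, 1) versus (1, 0).
    a, b = majority((0, 1)), majority((1, 0))
    return a is not None and b is not None and a != b

data = [((0, 1), 1), ((0, 1), 1), ((1, 0), 0), ((1, 0), 0), ((1, 1), 1)]
print(dependent(data, 0, 1))   # True: opposing values flip the decision
```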

16.
Geospatial metadata have long played an important role in the management of geospatial datasets. Often employed by institutions to organise, maintain and document their geographic resources internally, metadata may also provide a vehicle for exposing marketable data assets externally when contributed to on-line geospatial exchange initiatives. In spite of the benefits it affords, the obstacles to producing such geospatial surrogates are numerous. The current work proposes an approach aimed at reducing the effort associated with geospatial metadata generation through the customisation of a proprietary Geographical Information System (GIS). By coupling data preparation, management and documentation approaches with such a bespoke application, it is intended to mitigate impediments to geospatial metadata generation whilst promoting a system of data administration that safeguards the data it supports. The current prototype, implementing an extended Dublin Core geospatial profile of 23 elements, was capable of generating a total of 20 basic metadata entries. While the findings do not suggest that human mediation is dispensable in the authoring process, they do support the view that a dataset's ambient computing infrastructure can play a significant role in automating the creation of geospatial metadata.
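In the spirit of the prototype, the sketch below harvests properties a GIS might already hold about a dataset into Dublin Core elements; the dataset properties and the element mapping are hypothetical.

```python
# Automated harvesting of dataset properties into Dublin Core XML.
import xml.etree.ElementTree as ET

# Hypothetical properties a GIS could supply without human input.
dataset = {"title": "roads_2006", "format": "ESRI Shapefile",
           "coverage": "POLYGON((-10 51, -5 51, -5 55, -10 55, -10 51))",
           "date": "2006-03-01"}

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)
record = ET.Element("metadata")
for element, value in dataset.items():
    ET.SubElement(record, f"{{{DC}}}{element}").text = value
print(ET.tostring(record, encoding="unicode"))
```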

17.
This work investigates the modeling of aggregate available bandwidth in multi-sender network applications. Unlike the well-established client–server model, where there is only one server sending the requested data, the combined available bandwidth of multiple senders does exhibit consistent properties and thus can be modeled and estimated. Through extensive experiments conducted on the Internet, this work proposes to model the aggregate available bandwidth using a normal distribution, and it illustrates the model's application through a hybrid download-streaming algorithm and a playback-adaptive streaming algorithm for video delivery under different bandwidth-availability scenarios. This new multi-source bandwidth model opens a new way to provide probabilistic performance guarantees in best-effort networks such as the Internet, and it is particularly suitable for emerging peer-to-peer applications, where having multiple sources is the norm rather than the exception.
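A sketch of how a normal model of aggregate bandwidth yields a probabilistic guarantee: assuming n senders with i.i.d. available bandwidth, choose the highest video bit-rate the aggregate exceeds with a target probability. The per-sender parameters are illustrative.

```python
# Aggregate-bandwidth model: the sum over n i.i.d. senders is treated
# as normal(n*mu, sqrt(n)*sigma), giving a probabilistic rate guarantee.
from scipy.stats import norm

n_senders = 8
mean_bw, std_bw = 120.0, 40.0          # per-sender kbit/s (assumed)

agg_mean = n_senders * mean_bw
agg_std = (n_senders ** 0.5) * std_bw

# Largest video bit-rate that the aggregate bandwidth exceeds with
# probability 0.95 (the 5th percentile of the aggregate distribution).
video_rate = norm.ppf(0.05, loc=agg_mean, scale=agg_std)
print(f"rate sustainable with 95% probability: {video_rate:.0f} kbit/s")
```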

18.
In medical decision support systems, both the accuracy (i.e., the ability to adequately represent the decision-making processes) and the transparency and interpretability (i.e., the ability to provide a domain user with a compact and understandable explanation and justification of the proposed decisions) play essential roles. This paper presents an approach for the automatic design of fuzzy rule-based classification systems (FRBCSs) from medical data using multi-objective evolutionary optimization algorithms (MOEOAs). Our approach generates, in a single run, a collection of solutions (medical FRBCSs) characterized by various levels of accuracy-interpretability trade-off. We propose a new complexity-related interpretability measure, and we address the semantics-related interpretability issue by means of an efficient implementation of so-called strong fuzzy partitions of attribute domains. We also introduce a special-coding-free representation of the rule base and original genetic operators for its processing, and we implement our ideas in the context of one of the best-known and presently most advanced MOEOAs, the Non-dominated Sorting Genetic Algorithm II (NSGA-II). An important part of the paper is devoted to a broad comparative analysis of our approach against as many as 26 alternative techniques arranged in 32 experimental set-ups and applied to three well-known benchmark medical data sets (Breast Cancer Wisconsin (Original), Pima Indians Diabetes, and Heart Disease (Cleveland)) available from the UCI repository of machine learning databases (http://archive.ics.uci.edu/ml). A number of performance measures useful in medical applications, including accuracy, sensitivity, and specificity, are employed along with several interpretability measures. The results of this broad comparative analysis demonstrate that our approach significantly outperforms the alternative methods in terms of the interpretability of the obtained FRBCSs while remaining either competitive or superior in terms of their accuracy. It is worth stressing that the overwhelming majority of existing medical classification methods concentrate almost exclusively on accuracy.
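For the semantics-related interpretability device mentioned above, here is a minimal sketch of a strong fuzzy partition: triangular membership functions over an attribute domain whose degrees sum to one at every point.

```python
# Strong fuzzy partition: piecewise-linear (triangular) membership
# functions anchored at sorted centers; memberships sum to 1 everywhere.
import numpy as np

def strong_partition(x, centers):
    # One triangular membership function per center (one-hot values
    # interpolated linearly between centers).
    return np.stack([np.interp(x, centers, np.eye(len(centers))[k])
                     for k in range(len(centers))])

x = np.linspace(0, 1, 5)
mu = strong_partition(x, centers=[0.0, 0.5, 1.0])   # low / medium / high
print(mu.round(2))
print("sums to 1:", np.allclose(mu.sum(axis=0), 1.0))
```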

19.
In recent years, hyperspectral and multi-angular approaches for quantifying the biophysical characteristics of vegetation have become more widely used. Because both hyperspectral and multi-angle reflectance measurements decrease the level of noise in retrieved geophysical parameter values, they increase reliability and also reduce the saturation problem in the relationships between vegetation indices and biophysical characteristics. To test which methodology best estimates several important biophysical grassland parameters (biomass, total and percent biomass nitrogen content, phytomass, and its total and percent nitrogen content), nadir and off-nadir measurements were carried out three times during the vegetative period of 2004 in a permanent flat meadow located on the experimental farm of the University of Padua, Italy. The two approaches and broad-band vegetation indices calculated using Landsat bands were compared, considering both the best determination coefficients of five vegetation indices calculated with the two analyses and a partial least squares regression using different spectral regions measured at different angles as predictive variables. Using nadir data, the red edge region was the most useful for predicting the biophysical variables, especially phytomass but also nitrogen content. The off-nadir data did not provide significantly different results from those obtained in nadir view, but both methods seem better suited to describing the biophysical parameters of vegetation than the use of broad-band vegetation indices.
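The partial least squares step can be sketched with scikit-learn, regressing a biophysical variable on many correlated spectral bands; the spectra below are simulated.

```python
# PLS regression of a biophysical variable on full-spectrum predictors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.normal(0, 1, (60, 200))    # 60 plots x 200 spectral bands
# Hypothetical target driven by a red-edge-like band window.
biomass = spectra[:, 120:130].mean(axis=1) + rng.normal(0, 0.1, 60)

pls = PLSRegression(n_components=3).fit(spectra, biomass)
print("R^2 on training data:", round(pls.score(spectra, biomass), 2))
```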

20.
A phenomenon appears in a sensor network when a group of sensors continuously produces similar readings (i.e., data streams) over a period of time. This involves processing hundreds, and possibly thousands, of data streams in real time. This paper focuses on detecting environmental phenomena and determining possible correlations between such phenomena.
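As a toy version of correlating phenomena across streams, the sketch below computes windowed correlations between two sensor readings and flags windows that move together; the window length and threshold are illustrative.

```python
# Sliding-window correlation between two sensor streams: windows with
# correlation above a threshold are flagged as a shared phenomenon.
import numpy as np

def correlated_windows(a, b, win=20, thresh=0.9):
    hits = []
    for start in range(0, len(a) - win + 1, win):
        r = np.corrcoef(a[start:start + win], b[start:start + win])[0, 1]
        if r >= thresh:
            hits.append((start, start + win, round(r, 2)))
    return hits

t = np.linspace(0, 10, 200)
s1 = np.sin(t) + np.random.default_rng(0).normal(0, 0.05, 200)
s2 = np.sin(t) + np.random.default_rng(1).normal(0, 0.05, 200)
print(correlated_windows(s1, s2))
```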
