Similar documents
20 similar documents found (search time: 37 ms)
1.
Implementation of object-oriented graphics database technology   Cited by: 1 (self-citations: 0, other citations: 1)
This paper presents, with a worked example, the design of a graphics-based object-oriented database management system (OODBMS). It describes the architecture and functionality of the OODBMS and its implementation in Delphi.

2.
    
Supply Chain Finance (SCF) is important for improving the effectiveness of supply chain capital operations and reducing the overall management cost of a supply chain. In recent years, with the deep integration of supply chains and the Internet, Big Data, Artificial Intelligence, the Internet of Things, Blockchain, etc., the efficiency of supply chain financial services can be greatly improved by building more customized risk pricing models and conducting more rigorous investment decision-making processes. However, with the rapid development of new technologies, SCF data volumes have increased massively, and new financial fraud behaviors or patterns are hidden ever more covertly among normal ones. Lacking the capability to handle big data volumes and mitigate financial fraud can lead to huge losses in supply chains. In this article, a distributed big data mining approach is proposed for financial fraud detection in a supply chain. It implements a distributed deep learning model, a Convolutional Neural Network (CNN), on the big data infrastructure of Apache Spark and Hadoop to process the large dataset in parallel and reduce processing time significantly. By training and testing on a continually updated SCF dataset, the approach can intelligently and automatically classify massive data samples and discover fraudulent financing behaviors, thereby enhancing financial fraud detection with high precision and recall and reducing fraud losses in a supply chain.
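The partition-and-score pattern the abstract describes can be illustrated at a small scale. The sketch below is a stand-in for the paper's Spark/Hadoop pipeline: it splits a transaction set into partitions and scores them concurrently. The scoring rule is a hypothetical placeholder, not the paper's trained CNN, and all field names are illustrative assumptions.

```python
# Sketch: scoring SCF transactions in parallel partitions, a small-scale
# stand-in for a distributed Spark/Hadoop pipeline.
from concurrent.futures import ThreadPoolExecutor

def fraud_score(txn):
    # Hypothetical placeholder rule: large amounts moved to brand-new
    # counterparties look riskier. A real system would run the trained
    # CNN model here instead.
    score = 0.0
    if txn["amount"] > 1_000_000:
        score += 0.5
    if txn["counterparty_age_days"] < 30:
        score += 0.4
    return min(score, 1.0)

def score_partition(partition):
    return [fraud_score(t) for t in partition]

def score_in_parallel(transactions, n_partitions=4):
    # Split the dataset into partitions and score them concurrently,
    # mirroring how Spark maps a model over distributed partitions.
    parts = [transactions[i::n_partitions] for i in range(n_partitions)]
    with ThreadPoolExecutor(max_workers=n_partitions) as ex:
        results = list(ex.map(score_partition, parts))
    # Re-interleave so output order matches input order.
    scores = [0.0] * len(transactions)
    for p, part_scores in enumerate(results):
        for j, s in enumerate(part_scores):
            scores[p + j * n_partitions] = s
    return scores
```

On a real cluster the per-partition work would be a `mapPartitions`-style operation; the thread pool here only models the parallel structure.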

3.
    
Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.

4.
蒋驷驹  卢章平  李明珠 《包装工程》2021,42(22):337-346
Objective: In a big data environment, to extract Pearl S. Buck cultural elements using big data techniques and explore the feasibility of applying big data mining concepts to cultural and creative product design. Methods: First, data related to Pearl S. Buck were collected: web-crawler tools gathered relevant text from online media, while related academic studies and interview materials were collected manually, and all data were saved in editable text form. Second, a Chinese word-segmentation tool processed the collected text, splitting character strings into words and filtering out Chinese stop words, low-frequency words, and noise words to form a refined Pearl S. Buck data set. Next, the LDA topic-model algorithm was applied to reduce the dimensionality of and cluster the data set into a preliminary topic model, which was then manually screened to build a topic model of Pearl S. Buck cultural elements. Finally, cultural elements were selected from the topic model for a design practice of Pearl S. Buck cultural and creative products. Conclusion: Following the big data mining approach, the combined use of web-crawler technology, Chinese word-segmentation tools, and the LDA topic-model algorithm can scientifically and efficiently distill Pearl S. Buck cultural elements from vast social and online media, thereby improving the whole cultural and creative product design process.
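The preprocessing stage described above (segment, filter stop words and low-frequency words, build a refined term set) can be sketched minimally. Real Chinese word segmentation (e.g. with jieba) and LDA clustering are replaced here by a whitespace tokenizer and plain counting; the stop-word list and the tiny corpus are illustrative assumptions.

```python
# Minimal sketch of the corpus-refinement step: tokenize, drop stop words,
# drop low-frequency words, and count what remains as input for topic modeling.
from collections import Counter

def refine_corpus(documents, stopwords, min_freq=2):
    tokens = [w for doc in documents for w in doc.split()]
    counts = Counter(w for w in tokens if w not in stopwords)
    # Drop low-frequency words, leaving a refined term set for topic modeling.
    return {w: c for w, c in counts.items() if c >= min_freq}

# Illustrative stand-in documents (a real run would use the crawled texts).
docs = [
    "pearl buck the good earth novel",
    "pearl buck nobel prize novel",
    "the nobel prize ceremony",
]
refined = refine_corpus(docs, stopwords={"the"}, min_freq=2)
```

In the pipeline above, `refined` would be the vocabulary handed to the LDA step.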

5.
    
Information hiding tends to hide secret information in image regions rich in texture or high-frequency content, so as to transmit the secret to the recipient without degrading the visual quality of the image or arousing suspicion. We take advantage of the complexity of object texture and observe that, under certain circumstances, an object's texture is more complex than the image background, so the foreground object is more suitable for steganography than the background. Building on instance segmentation, such as Mask R-CNN, the proposed method hides secret information in each object's region using the instance-segmentation masks, thus hiding information in the foreground object while leaving the background untouched. This method not only makes information extraction more efficient for the receiver, but is also shown by experiments to be more secure and robust.
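The idea of embedding only under a mask can be sketched with simple least-significant-bit (LSB) embedding on a 1-D pixel list. The mask here is hand-written rather than produced by Mask R-CNN, and LSB is only one possible embedding scheme, not necessarily the paper's.

```python
# Sketch of mask-guided LSB embedding: hide bits only in pixels covered by an
# instance-segmentation mask, so the payload lives entirely in the foreground.
def embed(pixels, mask, bits):
    out = list(pixels)
    it = iter(bits)
    for i, m in enumerate(mask):
        if m:
            b = next(it, None)
            if b is None:
                break
            out[i] = (out[i] & ~1) | b  # overwrite the least significant bit
    return out

def extract(pixels, mask, n_bits):
    # Read back the LSBs of masked pixels, in mask order.
    return [pixels[i] & 1 for i, m in enumerate(mask) if m][:n_bits]
```

Both sender and receiver need the same mask; with instance segmentation, the receiver can regenerate it from the stego image itself.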

6.
With the rapid development of digital imaging technology, digital image data are accumulating quickly, and digital image management technology is receiving growing attention. Addressing the problems currently facing digital image management, this paper systematically studies the application of object-relational database large-object (LOB) streaming to digital image management and, with a worked example, describes a concrete implementation.
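The core of LOB streaming is reading a large binary object in fixed-size chunks rather than loading it whole. The sketch below models this with an in-memory file object standing in for a database LOB locator; the 64 KiB default chunk size is an illustrative choice, not one from the paper.

```python
# Sketch of LOB-style streaming: read a large binary image object chunk by
# chunk, mirroring how an object-relational database streams a large object
# (LOB) to the client without materializing it in memory.
import io

def stream_lob(blob, chunk_size=64 * 1024):
    # Yield successive chunks from a file-like large object.
    while True:
        chunk = blob.read(chunk_size)
        if not chunk:
            break
        yield chunk

data = bytes(range(256)) * 10  # stand-in for digital image bytes
chunks = list(stream_lob(io.BytesIO(data), chunk_size=1000))
```

A real implementation would obtain `blob` from the database driver's LOB API instead of `io.BytesIO`.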

8.
    
Since web services are essential in daily life, cyber security is becoming more and more important in this digital world. A malicious Uniform Resource Locator (URL) is a common and serious threat to cybersecurity: it hosts unsolicited content and lures unsuspecting users into becoming victims of scams such as theft of private information, monetary loss, and malware installation. It is therefore imperative to detect such threats. However, traditional approaches to malicious URL detection based on blacklists are easy to bypass and lack the ability to detect newly generated malicious URLs. In this paper, we propose a novel malicious URL detection method based on a deep learning model to protect against web attacks. Specifically, we first use an auto-encoder to represent URLs; the represented URLs are then fed into a proposed composite neural network for detection. To evaluate the proposed system, we conducted extensive experiments on the HTTP CSIC 2010 dataset and a dataset we collected, and the experimental results show the effectiveness of the proposed approach.
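Before an auto-encoder can represent a URL, the URL must be turned into a fixed-length numeric sequence. A common character-level encoding is sketched below; the alphabet and sequence length are illustrative choices, not the paper's exact settings.

```python
# Sketch of the URL-representation step: map each character to an integer
# index and pad/truncate to a fixed length, producing the numeric input an
# auto-encoder would consume.
def encode_url(url, max_len=40):
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789:/.?=&-_%"
    # Index 0 is reserved for padding and unknown characters.
    index = {ch: i + 1 for i, ch in enumerate(alphabet)}
    vec = [index.get(ch, 0) for ch in url.lower()[:max_len]]
    return vec + [0] * (max_len - len(vec))
```

Every URL, long or short, maps to the same shape, which is what a fixed-input neural network requires.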

9.
    
In recent years, the number of exposed vulnerabilities has grown rapidly, and more and more attacks intrude on target computers by exploiting these vulnerabilities through various kinds of malware. Malware detection has attracted more attention but still faces severe challenges. Because malware detection based on traditional machine learning relies on experts' experience to design features that distinguish different malware, it creates a bottleneck in feature engineering, and finding efficient features is time-consuming. Owing to its promising ability to propose and select significant features automatically, deep learning has gradually become a research hotspot. In this paper, aiming to detect malicious payloads and identify their categories with high accuracy, we propose a packet-based malicious payload detection and identification algorithm built on an object-detection deep learning network. A dataset of malicious payloads for code-execution vulnerabilities was constructed under the Metasploit framework and used to evaluate the performance of the proposed algorithm. The experimental results demonstrate that the proposed object-detection network can efficiently find and identify malicious payloads with high accuracy.

10.
    
The translation quality of neural machine translation (NMT) systems depends largely on the quality of the large-scale bilingual parallel corpora available. Research shows that under resource-limited conditions the performance of NMT drops sharply, and a large amount of high-quality bilingual parallel data is needed to train a competitive translation model. However, not all languages have large-scale, high-quality bilingual corpus resources available. In these cases, improving corpus quality becomes the main lever for increasing the accuracy of NMT results. This paper proposes a new method to improve data quality by using data cleaning, data expansion, and other measures to expand the data at the word and sentence level, thus improving the richness of the bilingual data. A long short-term memory (LSTM) language model is also used to ensure the fluency of the constructed sentences, and a variety of processing methods are combined to improve the quality of the bilingual data. Experiments on three standard test sets validate the proposed method, with the fairseq Transformer NMT system used for training. The results show that the proposed method works well at improving translation quality: the BLEU score of our method is 2.34 points higher than that of the baseline.
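Word-level data expansion of the kind described can be sketched by generating extra parallel pairs through synonym substitution on the source side. The synonym table and sentence pair below are toy assumptions, and the paper's LSTM language-model filtering of candidates is omitted here.

```python
# Sketch of word-level data expansion for a parallel corpus: for each source
# sentence, swap in dictionary synonyms one word at a time, keeping the
# target sentence unchanged, to enrich the bilingual training data.
def expand_pairs(pairs, synonyms):
    augmented = list(pairs)
    for src, tgt in pairs:
        words = src.split()
        for i, w in enumerate(words):
            for syn in synonyms.get(w, []):
                new_src = " ".join(words[:i] + [syn] + words[i + 1:])
                augmented.append((new_src, tgt))
    return augmented
```

In a full pipeline each generated source sentence would be scored by the language model, and disfluent candidates discarded.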

11.
    
A process object is an instance of a process, represented as a graph of vertices and edges; both the objects themselves and the associations between them come in different types. At large data scales, many changes are reflected in these graphs, and finding appropriate real-time data for process objects has recently become a hot research topic. Data sampling is one way of finding changes of process objects, and the sampling must adapt to the underlying distribution of the data stream. In this paper, we propose an adaptive data sampling mechanism to find appropriate data for modeling. First, we use concept drift to partition the life cycle of a process object. Then, entity community detection is applied to find changes. Finally, we propose stream-based real-time optimization of data sampling. The contributions of this paper are concept drift detection, community detection, and stream-based real-time computing. Experiments show the effectiveness and feasibility of the proposed adaptive data sampling mechanism for process objects.
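A minimal form of the concept-drift step used to partition a life cycle is comparing the mean of a recent window against a reference window and flagging a drift point when they diverge. The window size and threshold below are illustrative assumptions, not the paper's method.

```python
# Sketch of windowed concept-drift detection: slide two adjacent windows over
# a numeric stream and report the index where their means diverge beyond a
# threshold, marking a candidate life-cycle partition point.
def drift_points(stream, window=3, threshold=1.0):
    points = []
    for i in range(window, len(stream) - window + 1):
        ref = stream[i - window:i]          # reference (older) window
        cur = stream[i:i + window]          # current (newer) window
        if abs(sum(cur) / window - sum(ref) / window) > threshold:
            points.append(i)
    return points
```

Each reported index splits the stream into segments whose data can be sampled and modeled separately.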

12.
A standard optimization principle is used with nonclassical target functions based on the generalized work in a method for synthesizing a quasioptimal monitoring and control system for a technical object when an adequate mathematical model of the object's behavior is lacking. Translated from Metrologiya, No. 2, pp. 3–21, February, 2009.

13.
    
《Quality Engineering》2012,24(4):477-487
Information technology increases our ability to use data to develop and improve processes. Professionals are being asked to make sense of large volumes of data, yet today's literature provides little guidance on how to approach such problems. Addressing this void, this article places a keen focus on the pedigree of the data: the process that generated the data, the measurement process, and the data collection process, including the sampling schemes used. The importance of using subject-matter knowledge and of recognizing the sequential nature of problem solving is also emphasized. A guiding framework for the execution of data-rich projects is presented and illustrated with case studies.

14.
    
The rapid development of and progress in deep machine-learning techniques have become a key factor in solving future challenges of humanity. Vision-based target detection and object classification have improved with the development of deep learning algorithms. Data fusion in autonomous driving is a prerequisite data-preprocessing task over multiple sensors that provides precise, well-engineered, and complete detection of objects, scenes, or events. The target of the current study is to develop an in-vehicle information system to prevent, or at least mitigate, traffic issues related to parking detection and traffic congestion. We approach these problems by (1) extracting regions of interest from the images, (2) detecting vehicles with instance segmentation, and (3) building a deep learning model based on key features obtained from the input parking images. We build a deep machine learning algorithm that collects real video feeds from vision sensors and predicts free parking spaces. Image augmentation was performed using edge detection and cropping, refined by rotation, thresholding, resizing, or color augmentation, to predict the bounding-box regions. A deep convolutional neural network, the F-MTCNN model, is proposed that can simultaneously compile, train, validate, and test on parking video frames from the camera. The proposed model was run on the publicly available PKLot parking dataset, and the optimized model achieved a higher accuracy (97.6%) than previously reported methodologies. Moreover, this article presents mathematical and simulation results using state-of-the-art deep learning technologies for smart parking-space detection. The results are verified using the Python, TensorFlow, and OpenCV computer-simulation frameworks.

15.
Research approach and methods for ceramic membrane fouling mechanisms in complex traditional Chinese medicine systems   Cited by: 3 (self-citations: 1, other citations: 3)
Addressing membrane fouling, the key problem constraining ceramic-membrane refinement of traditional Chinese medicine (TCM), and based on the principles of modern separation science, this work establishes standard technical specifications for scientifically characterizing the physicochemical properties of complex TCM aqueous-extract systems. Using representative TCM herbs and compound formulas as experimental systems, membrane-process parameters are collected to build a basic database of membrane fouling, and data mining and knowledge discovery are performed with pattern recognition, artificial intelligence, support vector machines, and related methods. By combining TCM pharmaceutics, physical chemistry, computational chemistry, and chemical engineering in cross-disciplinary research on the fouling behavior of TCM aqueous extracts, the work provides a new research paradigm for the fouling mechanisms and prevention of similar complex systems. Prediction on a test set gave a mean squared error of 0.6% between the measured and fitted membrane-fouling degrees of six TCM aqueous extracts; the model's fitting accuracy is high and meets the fitting requirements.

16.
    
Text visualization is concerned with the representation of text in a graphical form to facilitate comprehension of large textual data. Its aim is to improve our ability to understand and utilize the wealth of text-based information available. An essential task in any scientific research is the study and review of previous works in the specified domain, a process referred to as the literature survey. This process involves identifying prior work and evaluating its relevance to the research question. With the enormous number of published studies available online in digital form, this becomes a cumbersome task for the researcher. This paper presents the design and implementation of a tool that aims to facilitate this process by identifying relevant work and suggesting clusters of articles through conceptual modeling, thus providing different options that enable the researcher to visualize a large number of articles in a graphical, easy-to-analyze form. The tool helps the researcher analyze and synthesize the literature and build a conceptual understanding of the designated research area. The evaluation of the tool shows that researchers found it useful and that it supported the analysis of relevant work given a specific research question; 70% of the evaluators found it very useful.

17.
崔贯勋  纪钢 《包装工程》2011,32(13):45-47,56
Starting from the reasons personalized packaging emerged, this paper analyzes the characteristics of personalized packaging and the importance of personalized packaging design, describes the basic approach to personalized packaging design based on association-rule mining, explains in detail the key techniques for applying association-rule mining to personalized packaging, proposes a design model for personalized packaging based on association-rule mining, and gives implementation methods for its key steps.
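The core quantities behind association-rule mining, support and confidence, can be sketched over a toy transaction set of packaging design-feature choices. The transactions and feature names below are illustrative assumptions, not data from the paper.

```python
# Sketch of the core association-rule computations: support counts how often
# an itemset occurs across transactions; confidence estimates how often the
# consequent appears given the antecedent.
def support(transactions, itemset):
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    # Estimated P(consequent | antecedent) over the transactions.
    return support(transactions, antecedent | consequent) / support(transactions, antecedent)

# Illustrative stand-in transactions: design features co-chosen by customers.
txns = [
    {"matte finish", "minimal logo"},
    {"matte finish", "minimal logo", "kraft paper"},
    {"gloss finish", "bold logo"},
    {"matte finish", "kraft paper"},
]
```

A rule such as "matte finish → minimal logo" would be kept only if both its support and its confidence clear chosen thresholds, which is what an Apriori-style miner automates.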

18.
    
Classification of skin lesions is a complex identification challenge. Because of the wide variety of skin lesions, doctors must spend a great deal of time and effort judging lesion images magnified through dermatoscopy, so algorithms that assist doctors in identifying pathological images are attracting increasing attention. With the development of deep learning, the field of image recognition has made long-term progress, and recognizing images with convolutional neural network models works better than traditional image recognition technology. In this work, we classify seven kinds of lesion images using various deep learning models and methods; common convolutional neural network models for image classification include ResNet, DenseNet, SENet, etc. We use a fine-tuned model with a multi-layer perceptron trained on the skin lesion data; on the validation and test sets we apply data expansion based on multiple crops, and we take an ensemble of five models as the final result. The experimental results show that the program improves the sensitivity of skin lesion diagnosis.
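The final ensemble step can be sketched as averaging each model's class-probability vector and taking the argmax. The probability vectors below are illustrative; in the paper they would come from the fine-tuned CNN models over multiple crops.

```python
# Sketch of probability-averaging ensembling: average the per-class
# probabilities from several models and predict the class with the highest
# average probability.
def ensemble_predict(model_probs):
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    avg = [sum(p[c] for p in model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Averaging probabilities (soft voting) usually behaves better than majority voting on argmaxes when models disagree with different confidence levels.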

19.
    
Data fusion is one of the challenging issues the healthcare sector has faced in recent years, and proper diagnosis from digital imagery, followed by treatment, is deemed the right solution. Intracerebral Haemorrhage (ICH), a condition characterized by injury to blood vessels in brain tissue, is one of the important causes of stroke. Images generated by X-rays and Computed Tomography (CT) are widely used for estimating the size and location of hemorrhages. Radiologists use manual planimetry, a time-consuming process, for segmenting CT scan images, and Deep Learning (DL) is the most preferred method for increasing the efficiency of diagnosing ICH. In this paper, the researcher presents a unique multi-modal data-fusion-based feature extraction technique with a Deep Learning model, abbreviated FFE-DL, for Intracranial Haemorrhage Detection and Classification, also known as FFEDL-ICH. The proposed FFEDL-ICH model has four stages: preprocessing, image segmentation, feature extraction, and classification. The input image is first preprocessed using Gaussian Filtering (GF) to remove noise. Secondly, the Density-based Fuzzy C-Means (DFCM) algorithm is used to segment the images. Furthermore, the fusion-based feature extraction model is implemented with a handcrafted feature (Local Binary Patterns) and deep features (Residual Network-152) to extract useful features. Finally, a Deep Neural Network (DNN) is implemented as the classification technique to differentiate multiple classes of ICH. The researchers used a benchmark Intracranial Haemorrhage dataset and simulated the FFEDL-ICH model to assess its diagnostic performance. The findings revealed that the proposed FFEDL-ICH model outperforms existing models, with a significant improvement in performance. For future research, the researcher recommends improving the performance of the FFEDL-ICH model using learning-rate scheduling techniques for the DNN.

20.
    
With the development of science and technology, the state of the water environment has received more and more attention. In this paper, we propose a deep learning model, named a Joint Auto-Encoder network, to solve the problem of outlier detection in water supply data. The Joint Auto-Encoder network first expands the size of the training data and extracts useful features from the input, and then reconstructs the input data effectively into an output. Outliers are detected based on the network's reconstruction errors, with a larger reconstruction error indicating a higher likelihood of being an outlier. In water supply data there are mainly two types of outliers: outliers with large values and those with values close to zero. We set two separate thresholds, τ1 and τ2, on the reconstruction errors to detect the two types of outliers respectively; data samples with reconstruction errors exceeding the thresholds are voted to be outliers. The two thresholds can be calculated from the classification confusion matrix and the receiver operating characteristic (ROC) curve. We also compare the Joint Auto-Encoder with the vanilla Auto-Encoder on both a synthetic data set and the MNIST data set. Our model proves to outperform the vanilla Auto-Encoder and several other outlier detection approaches, with a recall rate of 98.94% on water supply data.
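The two-threshold voting step can be sketched independently of the network: given each sample's value and reconstruction error, apply τ1 to large-value samples and τ2 to near-zero ones. The thresholds and the near-zero cutoff below are illustrative; the paper derives its thresholds from the confusion matrix and ROC curve.

```python
# Sketch of two-threshold outlier voting on reconstruction errors: near-zero
# samples are judged against tau2, all other samples against tau1, and any
# sample whose error exceeds its threshold is flagged as an outlier.
def detect_outliers(values, errors, tau1, tau2, near_zero=0.05):
    flagged = []
    for i, (v, e) in enumerate(zip(values, errors)):
        tau = tau2 if abs(v) <= near_zero else tau1
        if e > tau:
            flagged.append(i)
    return flagged
```

Separate thresholds matter because an auto-encoder's reconstruction error scales differently for large readings and for readings stuck at zero.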
