One of the most important steps in diagnosing breast cancer, which has the highest mortality rate among cancers in women, is detecting the mitosis stage at the cellular level. Many studies in the literature have proposed computer-aided diagnosis (CAD) systems for detecting mitotic cells in breast cancer histopathological images. This study focuses on a comparative evaluation of conventional and deep learning based feature extraction methods for automatic mitosis detection in histopathological images. While the conventional approach extracts various handcrafted features with textural/spatial, statistical and shape-based methods, the convolutional neural network proposed in the deep learning approach aims to create an architecture that extracts the features of small cellular structures such as mitotic cells. Mitosis detection/counting is an important process that helps assess how aggressive or malignant the cancer's spread is. In this study, approximately 180,000 non-mitotic and 748 mitotic cells are extracted for the evaluations. Clearly, the classification stage cannot be performed properly with such imbalanced numbers of mitotic and non-mitotic cells, so the random under-sampling boosting (RUSBoost) method is exploited to overcome this problem. The proposed framework is tested on the breast cancer histopathological image dataset of the International Conference on Pattern Recognition (ICPR) 2014 mitosis detection contest. The deep learning approach achieves 79.42% recall, 96.78% precision and 86.97% F-measure, outperforming the handcrafted methods. A client/server-based framework has also been developed as a secondary decision support system for pathologists in hospitals.
The aim is that pathologists will be able to detect mitotic cells in various histopathological images more easily through the necessary interfaces.
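The random under-sampling step behind RUSBoost can be sketched as follows. This is a minimal illustration of balancing the classes before boosting, using a toy label array standing in for the ~180,000 non-mitotic vs. 748 mitotic cells; the function name and data are assumptions, not the authors' implementation.

```python
import numpy as np
from collections import Counter

def random_undersample(X, y, minority_label, rng=None):
    """Keep all minority samples and an equal-sized random subset
    of the majority class, yielding a balanced training set."""
    rng = np.random.default_rng(rng)
    minority_idx = np.flatnonzero(y == minority_label)
    majority_idx = np.flatnonzero(y != minority_label)
    keep_majority = rng.choice(majority_idx, size=minority_idx.size,
                               replace=False)
    keep = np.concatenate([minority_idx, keep_majority])
    return X[keep], y[keep]

X = np.random.rand(1000, 4)          # 1000 cells, 4 handcrafted features
y = np.array([1] * 20 + [0] * 980)   # 20 "mitotic", 980 "non-mitotic"
Xb, yb = random_undersample(X, y, minority_label=1, rng=0)
print(Counter(yb))                   # balanced: 20 samples per class
```

In full RUSBoost this resampling is repeated inside each boosting round, so different majority subsets are seen across rounds rather than discarding the majority data once.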
Related article: The accuracy and performance of deep neural network models become important issues as the applications of deep learning increase. For example, the navigation system of autonomous self-driving vehicles requires very accurate deep learning models. If a self-driving car fails to detect a pedestrian in bad weather, the result can be devastating. If we can increase model accuracy by increasing the training data, the probability of avoiding such scenarios increases significantly. However, consumer privacy concerns and the lack of enthusiasm for sharing personal data, e.g., the recordings of one's self-driving car, are obstacles to using this valuable data. In Blockchain technology, many entities that cannot trust each other under normal conditions can join together to achieve a mutual goal. In this paper, a secure decentralized peer-to-peer framework is proposed for training deep neural network models based on the distributed ledger technology of the Blockchain ecosystem. The framework anonymizes the identity of data providers and can therefore serve as an incentive for consumers to share their private data for training deep learning models. It uses the Stellar Blockchain infrastructure for secure decentralized training of deep models, and a deep learning coin is proposed for Blockchain compensation.
Related article: These days, one of the major causes of partial or complete blindness affecting people all around the world is glaucoma. Glaucoma results from increased fluid pressure inside the eye, known as intraocular pressure, which damages the optic nerve. This paper proposes a real-time cloud-based framework for screening the retinal fundus images of glaucoma suspects as received from people on the public cloud. In the proposed framework, the detection of glaucoma and the analysis of the retinal fundus images are achieved by deep learning techniques and convolutional neural networks, with EfficientNet and UNet++ models used to identify the presence of glaucoma. On comparing the framework to various state-of-the-art models through quantitative assessment on benchmark datasets such as RIM-ONE and DRISHTI-GS1, it was found that the proposed framework is scalable, location independent, and easily accessible to all thanks to the cloud platform.
Related article: Shape alignment or estimation under occlusion is one of the most challenging tasks in the computer vision field. Most previous works treat occlusion as noise or use part models, which usually leads to low accuracy or inefficiency. This paper proposes an efficient and accurate regression-based algorithm for face alignment. In this framework, local and global regressions are used iteratively to train a series of random forests in a cascaded manner. In both training and testing, each step consists of two layers. In the first layer, a set of highly discriminative local features is extracted from local regions according to the locality principle, and the regression forests are trained for each facial landmark independently using those local features. The leaf nodes of the regression trees are then encoded by a histogram statistics method, and the final shape is estimated by a linear regression matrix. In the second layer, our proposed global features are generated and used to train a random fern that maintains the global shape constraints. Experiments show that our method has high speed, with the same or slightly lower accuracy than state-of-the-art methods under occlusion. To gain higher accuracy we use multiple random shapes for initialization, which may slightly reduce computational efficiency as a trade-off.
Related article: With the development of AR and VR, depth images are widely used for facial expression analysis and recognition. To reduce storage size and save bandwidth, an efficient compression framework is desired. In this paper, we propose a novel lossless compression framework for facial depth images in expression recognition. Two steps are designed to remove the redundancy in facial depth images: a data-preparation operation and a bitstream-encoding operation. In the data-preparation operation, the original image is represented by the same and different parts between its left and right sides. In the bitstream-encoding operation, these parts are compressed to produce the final bitstream. The framework is implemented and evaluated on the BU-3DFE database. Experimental results show that the proposed technique outperforms existing lossless compression frameworks in terms of compression efficiency, with the average data size reduced to 25.27% of the original.
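The "same and different parts" idea above can be sketched for a roughly left-right-symmetric depth image: compare the left half against the mirrored right half, then store only an agreement mask plus the disagreeing values. This is an illustrative reconstruction of the principle (assuming even image width), not the paper's actual encoder.

```python
import numpy as np

def split_symmetric(img):
    """Represent the right half by its agreement with the mirrored left."""
    h, w = img.shape
    left = img[:, : w // 2]
    right_mirror = img[:, w // 2 :][:, ::-1]   # right half, mirrored
    same_mask = left == right_mirror           # 1 bit per pixel in practice
    diff_values = right_mirror[~same_mask]     # store only disagreements
    return left, same_mask, diff_values

def reconstruct(left, same_mask, diff_values):
    """Lossless inverse: rebuild the right half from mask + diffs."""
    right_mirror = left.copy()
    right_mirror[~same_mask] = diff_values
    return np.concatenate([left, right_mirror[:, ::-1]], axis=1)

img = np.array([[5, 5, 5, 5],
                [1, 2, 3, 1]], dtype=np.uint8)
parts = split_symmetric(img)
restored = reconstruct(*parts)
print(np.array_equal(restored, img))   # lossless round trip
```

For near-symmetric faces most pixels agree, so the mask compresses well and few raw values need storing, which is where the size reduction comes from.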
Related article: Statistical relational learning (SRL) and graph neural networks (GNNs) are two powerful approaches for learning and inference over graphs. Typically, they are evaluated in terms of simple metrics such as accuracy over individual node labels. Complex aggregate graph queries (AGQ) involving multiple nodes, edges, and labels are common in the graph mining community and are used to estimate important network properties such as social cohesion and influence. While graph mining algorithms support AGQs, they typically do not take into account uncertainty, or when they do, make simplifying assumptions and do not build full probabilistic models. In this paper, we examine the performance of SRL and GNNs on AGQs over graphs with partially observed node labels. We show that, not surprisingly, inferring the unobserved node labels as a first step and then evaluating the queries on the fully observed graph can lead to sub-optimal estimates, and that a better approach is to compute these queries as an expectation under the joint distribution. We propose a sampling framework to tractably compute the expected values of AGQs. Motivated by the analysis of subgroup cohesion in social networks, we propose a suite of AGQs that estimate the community structure in graphs. In our empirical evaluation, we show that by estimating these queries as an expectation, SRL-based approaches yield up to a 50-fold reduction in average error when compared to existing GNN-based approaches.
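The expectation-based evaluation above can be sketched on a toy graph: rather than hard-assigning the unobserved labels once and counting, sample label assignments from their distributions and average the query value. For simplicity this sketch assumes independent per-node marginals (the paper works with a full joint distribution); the query counting same-label edges is an illustrative AGQ, not one from the paper.

```python
import random

edges = [(0, 1), (1, 2), (2, 3)]
# P(label = 1) per node; nodes 0 and 3 are observed (prob 1.0 or 0.0).
marginals = {0: 1.0, 1: 0.7, 2: 0.4, 3: 0.0}

def same_label_edges(labels):
    """Example AGQ: number of edges whose endpoints share a label."""
    return sum(labels[u] == labels[v] for u, v in edges)

def expected_query(n_samples=20000, seed=0):
    """Monte Carlo estimate of E[query] over sampled label assignments."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_samples):
        labels = {v: int(rng.random() < p) for v, p in marginals.items()}
        total += same_label_edges(labels)
    return total / n_samples

# Exact value here is 0.7 + (0.7*0.4 + 0.3*0.6) + 0.6 = 1.76;
# the sample mean converges to it.
print(round(expected_query(), 2))
```

A single hard assignment (e.g. thresholding each marginal at 0.5) would instead report an integer count, which is exactly the sub-optimal first-step estimate the paper warns about.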
Related article: Viral infection causes a wide variety of human diseases, including cancer and COVID-19. Viruses invade host cells and associate with host molecules, potentially disrupting normal host function and leading to fatal diseases. Novel viral genome prediction is crucial for understanding complex viral diseases like AIDS and Ebola. While most existing computational techniques classify viral genomes, their efficiency depends solely on the structural features extracted. State-of-the-art DNN models achieve excellent performance by extracting classification features automatically, but their explainability is relatively poor. The proposed CNN and CNN-LSTM based methods (EdeepVPP, EdeepVPP-hybrid) extract features automatically during model training for viral prediction. EdeepVPP is an interpretable CNN model: through its learned filters it extracts the most important, biologically relevant patterns (features) from the feature maps of viral sequences. The EdeepVPP-hybrid predictor outperforms all existing methods, achieving a mean AUC-ROC of 0.992 and AUC-PR of 0.990 on 19 human metagenomic contig experiment datasets using 10-fold cross-validation. We evaluate the ability of the CNN filters to detect patterns via high average activation values, and perform leave-one-experiment-out cross-validation to further assess the robustness of the EdeepVPP model. The model can work as a recommendation system for further analysis of raw sequences labeled 'unknown' by alignment-based methods, and we show that our interpretable model can extract the patterns most important for predicting virus sequences through its learned filters.
Related article: The broadcast storm problem causes redundancy, contention and collision of messages in a network, particularly in vehicular ad hoc networks (VANETs), where the number of participants can grow arbitrarily. This paper presents a solution in which a node is designated as master through an election process. An algorithm is proposed for asynchronous VANETs in which the participants (i.e., vehicles) communicate with each other directly (single-hop) to select a master node. The algorithm is extrema-finding: the node with the maximum signal strength is elected master, and each vehicle continues communicating with the master as long as the master keeps its signal strength at the highest level and remains operational. The paper further presents a Petri net-based model of the proposed algorithm for evaluation, which to our knowledge is the first such model of a leader election algorithm in VANETs. Verification of the proposed algorithm is carried out through the state space analysis technique.
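The extrema-finding election above can be sketched in a few lines: among the single-hop neighbours, the vehicle with the maximum signal strength becomes master, and a re-election is triggered when the master weakens or leaves. The tie-breaking by node id and the RSSI threshold are assumptions of this sketch, not details from the paper.

```python
def elect_master(vehicles):
    """vehicles: dict of node_id -> signal strength (e.g. RSSI in dBm).
    Returns the node with the strongest signal; ties break on node id."""
    return max(vehicles, key=lambda v: (vehicles[v], v))

def needs_reelection(vehicles, master, threshold):
    """Re-elect when the master leaves the network or its signal
    drops below the operational threshold."""
    return master not in vehicles or vehicles[master] < threshold

fleet = {"car-1": -70, "car-2": -55, "car-3": -63}
master = elect_master(fleet)
print(master)                # car-2: strongest (least negative) RSSI
del fleet["car-2"]           # master leaves the platoon
if needs_reelection(fleet, master, threshold=-80):
    master = elect_master(fleet)
print(master)                # car-3 takes over
```

Because only the current master rebroadcasts, the flood of duplicate messages that characterizes the broadcast storm is avoided.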
Related article: Pruning is an effective technique for improving the generalization performance of decision trees. However, most existing methods are time-consuming or unsuitable for small datasets. In this paper, a new pruning algorithm based on the structural risk of leaf nodes is proposed. The structural risk is measured by the product of the accuracy and the volume (PAV) of a leaf node. Comparison experiments with the Cost-Complexity Pruning using cross-validation (CCP-CV) algorithm on several benchmark datasets show that PAV pruning largely reduces the time cost of CCP-CV, while its test accuracy remains close to that of CCP-CV.
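The PAV measure above can be sketched as follows: score each leaf by accuracy times the volume of its axis-aligned feature region, and compare a parent leaf's score against its children's. The combination rule used here (sample-weighted sum of child scores) and the pruning direction are assumptions for illustration, not taken from the paper.

```python
from math import prod

def pav(correct, total, bounds):
    """Product of Accuracy and Volume for a leaf whose region is the
    axis-aligned box given by bounds = [(lo, hi), ...] per feature."""
    accuracy = correct / total
    volume = prod(hi - lo for lo, hi in bounds)
    return accuracy * volume

# Parent leaf covering [0,4] x [0,2]; candidate split along x at 2.
parent = pav(correct=70, total=100, bounds=[(0, 4), (0, 2)])
left   = pav(correct=40, total=50,  bounds=[(0, 2), (0, 2)])
right  = pav(correct=30, total=50,  bounds=[(2, 4), (0, 2)])
children = 0.5 * left + 0.5 * right          # sample-weighted combination

print(parent, children)
```

The appeal of such a score is that it needs only per-leaf statistics already computed during training, avoiding the repeated retraining that makes cross-validated cost-complexity pruning expensive.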
Related article: In this paper, an asymmetric hybrid cryptosystem is put forward that utilizes a four-dimensional (4D) hyperchaotic system by means of coherent superposition and random decomposition in the hybrid multi-resolution wavelet domain. The 4D hyperchaotic system is used to create a permutation keystream for a pixel-swapping procedure. The hybrid multi-resolution wavelet is formed by combining the Walsh transform with fractional Fourier transforms of various orders. The parameters and initial conditions of the 4D hyperchaotic system, alongside the fractional orders, extend the key space and consequently give additional strength to the proposed cryptosystem. The extended key space guards against brute-force attack, and the scheme is nonlinear in nature. The scheme is validated on greyscale images, and computer-based simulations have been executed to confirm its robustness against different types of attacks. Results demonstrate that the proposed cryptosystem offers high protection against noise and occlusion attacks and is also resistant to the special attack.
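The permutation-keystream idea above can be sketched with a simpler chaotic map: generate a chaotic sequence, and use the sort order of its values as the pixel permutation. A 1-D logistic map stands in here for the paper's 4-D hyperchaotic system, and the parameter values are illustrative assumptions; the point is only that the same key (initial condition and parameters) regenerates the same permutation at the receiver.

```python
import numpy as np

def chaotic_permutation(n, x0=0.3579, r=3.99, burn_in=100):
    """Permutation of 0..n-1 derived from a logistic-map sequence:
    x_{k+1} = r * x_k * (1 - x_k); argsort of the values is the key-
    dependent shuffle."""
    x = x0
    for _ in range(burn_in):           # discard transients
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)

img = np.arange(16, dtype=np.uint8)    # toy 4x4 "image", flattened
perm = chaotic_permutation(img.size)
scrambled = img[perm]                  # sender: pixel swapping
inverse = np.argsort(perm)             # receiver regenerates & inverts
print(np.array_equal(scrambled[inverse], img))   # exact recovery
```

Sensitivity to the initial condition is what makes the keystream hard to reproduce without the key: changing `x0` in the last decimal place yields an unrelated permutation after the burn-in.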
Related article: The Internet of things (IoT) is the backbone of smart applications and attracts much research on state-of-the-art network applications. Extensive research on sensor networks has already placed many sensing devices into day-to-day life, so implementing new sensor networks for smart applications is often unnecessary; many researchers instead reuse existing networks for their applications. In this case, techniques for identifying and registering existing sensing devices are in demand. This paper proposes a hybrid framework for sensor identification and registration (HSIR) for new IoT applications, aimed at user-friendliness in the IoT and at the scalability requirements of IoT applications. The model uses content- and context-based multicast communication instead of broadcast to reduce the energy and time consumed in sensor identification. HSIR also uses a public key to register the new network for application requirements. The behaviour of the proposed model has been assessed through realistic simulations and validated by comparison with other models.
Related article: Manual analysis of indirect-immunofluorescence (IIF) human epithelial cell type-2 (HEp-2) cell images for the diagnosis of autoimmune disease is a subjective, time-consuming process that is also prone to human error. The present work proposes an automatic capsule neural network (CapsNet) based framework for HEp-2 cell image classification that compensates for deficiencies in prominent convolutional neural network (CNN) based frameworks. In CNNs, the spatial relationships between the features of the anti-nuclear antibody (ANA) patterns found in the IIF HEp-2 cell image (ANA-IIF image) are lost, which increases the chance of false positives. In the proposed CapsNet-based model, the max-pooling layer is replaced with a dynamic routing algorithm and scalar outputs are replaced with vector outputs, making possible a richer representation of each feature without losing its spatial relationship to the other features. The proposed framework recognizes ANA-IIF images with an average accuracy of 95.00% under 10-fold cross-validation. Experimental results also show that the proposed model performs better than other CNN-based classification models on the human epithelial cell image classification task.
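The vector-output idea above hinges on the CapsNet "squash" nonlinearity: it keeps a capsule's output as a vector (preserving pose and spatial information) while shrinking its length into [0, 1) so the length can act as an existence probability. This is a sketch of the standard squash function from the CapsNet literature, not code from the paper.

```python
import numpy as np

def squash(v, eps=1e-9):
    """Squash: preserves a vector's direction, maps its length to
    ||v||^2 / (1 + ||v||^2), which lies in [0, 1)."""
    sq_norm = np.sum(v ** 2, axis=-1, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * v / np.sqrt(sq_norm + eps)

u = np.array([3.0, 4.0])            # capsule output vector, length 5
s = squash(u)
print(np.linalg.norm(s))            # 25/26 ~ 0.96: long vectors saturate
print(s / np.linalg.norm(s))        # direction (pose) is unchanged
```

In dynamic routing, these squashed vectors replace max-pooling: a lower capsule sends its output to the higher capsule whose prediction it agrees with, instead of discarding all but the strongest scalar activation.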
Related article: A hybrid interval time series model based on the autoregressive integrated moving average (ARIMA) and artificial neural networks (ANN) is proposed, in which the hybrid model is applied separately to the interval midpoint series and the interval radius series. Simulated interval series are generated with the Monte Carlo method; modeling and forecasting experiments are carried out with ARIMA, ANN and the hybrid model, and the model errors are tested with statistical methods. Finally, the three methods are applied to modeling and forecasting the traction energy consumption interval series of a rail transit line in city H. The experimental results show that the hybrid model outperforms the single models in both modeling accuracy and forecasting performance.
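The midpoint/radius decomposition above can be sketched as follows: split each interval into a midpoint series and a radius series, forecast each component, and rebuild the interval forecast. A naive persistence forecast stands in here for the ARIMA + ANN hybrid; the toy data are assumptions for illustration.

```python
import numpy as np

# Toy interval series: lower and upper bounds per period.
lows  = np.array([10.0, 11.0, 9.5, 12.0])
highs = np.array([14.0, 15.5, 13.0, 16.0])

mid = (lows + highs) / 2          # interval midpoint series
rad = (highs - lows) / 2          # interval radius series

# One-step persistence forecast of each component (placeholder for
# the ARIMA + ANN hybrid applied to mid and rad separately).
mid_hat, rad_hat = mid[-1], rad[-1]
forecast = (mid_hat - rad_hat, mid_hat + rad_hat)
print(forecast)                   # rebuilt interval forecast: (12.0, 16.0)
```

Forecasting midpoint and radius separately guarantees a valid interval (lower bound below upper bound) as long as the predicted radius stays non-negative, which forecasting the two bounds independently would not.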