Similar Documents
20 similar documents retrieved.
1.

One of the most important steps in the diagnosis of breast cancer, which has one of the highest mortality rates among women, is the detection of mitosis at the cellular level. Many studies in the literature have proposed computer-aided diagnosis (CAD) systems for detecting mitotic cells in breast cancer histopathological images. This study focuses on a comparative evaluation of conventional and deep learning based feature extraction methods for automatic mitosis detection in histopathological images. In the conventional approach, various handcrafted features are extracted with textural/spatial, statistical and shape-based methods, while the convolutional neural network proposed in the deep learning approach is designed to extract features of small cellular structures such as mitotic cells. Mitosis detection/counting is an important process that helps assess how aggressive or malignant the spread of the cancer is. In the proposed study, approximately 180,000 non-mitotic and 748 mitotic cells are extracted for the evaluations. Because of the heavily imbalanced numbers of mitotic and non-mitotic cells extracted from the histopathological images, the classification stage cannot be performed properly without correction; hence, the random under-sampling boosting (RUSBoost) method is exploited to overcome this problem. The proposed framework is tested on the mitosis detection dataset of breast cancer histopathological images provided by the International Conference on Pattern Recognition (ICPR) 2014 contest. With the deep learning approach, 79.42% recall, 96.78% precision and 86.97% F-measure are achieved, outperforming the handcrafted methods. A client/server-based framework has also been developed as a secondary decision-support system for pathologists in hospitals, so that pathologists can more easily detect mitotic cells in various histopathological images through the necessary interfaces.
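As a rough illustration of the class-imbalance handling described above, the sketch below trains a RUSBoost ensemble on placeholder mitotic/non-mitotic feature vectors using the imbalanced-learn library; the feature matrix, labels and parameter values are hypothetical, not the authors' actual pipeline.

```python
# Minimal sketch: RUSBoost on a heavily imbalanced two-class problem,
# assuming features have already been extracted per candidate cell.
import numpy as np
from imblearn.ensemble import RUSBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
# Placeholder data: ~180,000 non-mitotic (label 0) vs 748 mitotic (label 1) cells.
X = np.vstack([rng.normal(0.0, 1.0, (180_000, 64)),
               rng.normal(0.5, 1.0, (748, 64))])
y = np.array([0] * 180_000 + [1] * 748)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# RUSBoost randomly under-samples the majority class inside each boosting round.
clf = RUSBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)

p, r, f, _ = precision_recall_fscore_support(y_te, clf.predict(X_te),
                                             average="binary")
print(f"precision={p:.3f} recall={r:.3f} F-measure={f:.3f}")
```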


2.

The accuracy and performance of deep neural network models become important issues as the applications of deep learning increase. For example, the navigation system of an autonomous self-driving vehicle requires very accurate deep learning models; if a self-driving car fails to detect a pedestrian in bad weather, the result can be devastating. If the model accuracy can be increased by increasing the training data, the probability of avoiding such scenarios increases significantly. However, consumers' privacy concerns and their lack of enthusiasm for sharing personal data, e.g., the recordings of their self-driving cars, are obstacles to using this valuable data. With Blockchain technology, many entities that cannot trust each other under normal conditions can join together to achieve a mutual goal. In this paper, a secure decentralized peer-to-peer framework for training deep neural network models, based on the distributed ledger technology of the Blockchain ecosystem, is proposed. The proposed framework anonymizes the identity of data providers and can therefore serve as an incentive for consumers to share their private data for training deep learning models. The framework uses the Stellar Blockchain infrastructure for secure decentralized training of the deep models, and a deep learning coin is proposed as Blockchain compensation.
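The abstract gives no implementation details, so purely as an illustration of the anonymization idea, the snippet below pseudonymizes a data provider and records a hash of its model update in a simple append-only, ledger-like log. It does not use any actual Stellar API, and all names and fields are hypothetical.

```python
# Toy sketch: pseudonymous contribution of model updates to an append-only
# ledger-like log (NOT the Stellar Blockchain; purely illustrative).
import hashlib, json, time

ledger = []  # stand-in for a distributed ledger

def pseudonym(provider_secret: str) -> str:
    """Derive a stable pseudonym so the real identity never appears on the ledger."""
    return hashlib.sha256(provider_secret.encode()).hexdigest()[:16]

def submit_update(provider_secret: str, weights: list, reward: float) -> dict:
    """Record a hashed model update and credit a 'deep learning coin' reward."""
    entry = {
        "provider": pseudonym(provider_secret),
        "update_hash": hashlib.sha256(json.dumps(weights).encode()).hexdigest(),
        "reward_coins": reward,
        "timestamp": time.time(),
    }
    ledger.append(entry)
    return entry

print(submit_update("alice-secret-key", [0.12, -0.03, 0.88], reward=1.0))
```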


3.

Glaucoma has become one of the major causes of partial or complete blindness worldwide. It is caused by increased fluid pressure inside the optic nerves, called intraocular pressure. This paper proposes a real-time cloud-based framework for screening the retinal fundus images of glaucoma suspects as they are received from users on the public cloud. In the proposed framework, detection of glaucoma and analysis of the retinal fundus images are performed with deep learning techniques: EfficientNet and UNet++ models are used to identify the presence of glaucoma. Comparison with various state-of-the-art models and quantitative assessment on benchmark datasets such as RIM-ONE and DRISHTI-GS1 show that the proposed framework is scalable, location independent, and easily accessible to all thanks to the cloud platform.
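As a hedged sketch of how the EfficientNet-based classification side of such a screening pipeline might be set up (the paper's actual model variant, input size, weights and training are not given, and the UNet++ segmentation branch is omitted), the snippet below adapts torchvision's EfficientNet-B0 to a two-class fundus classifier; everything specific here is an assumption.

```python
# Minimal sketch: EfficientNet-B0 adapted to binary glaucoma screening.
# Model variant, input size and head are assumptions, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=None)      # pretrained weights could be used
in_features = model.classifier[1].in_features     # size of the final linear layer
model.classifier[1] = nn.Linear(in_features, 2)   # {normal, glaucoma}
model.eval()

fundus = torch.randn(1, 3, 224, 224)              # placeholder fundus image tensor
with torch.no_grad():
    probs = torch.softmax(model(fundus), dim=1)
print("P(glaucoma) =", probs[0, 1].item())
```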


4.
An evaluation of the diagnostic accuracy of Pathfinder. (Cited 4 times: 0 self-citations, 4 by others)
We present an evaluation of the diagnostic accuracy of Pathfinder, an expert system that assists pathologists with the diagnosis of lymph node diseases. We evaluate two versions of the system using both informal and decision-theoretic metrics of performance. In one version of Pathfinder, we assume incorrectly that all observations are conditionally independent. In the other version, we use a belief network to represent accurately the probabilistic dependencies among the observations. In both versions, we make the assumption, reasonable for this domain, that diseases are mutually exclusive and exhaustive. The results of the study show that (1) it is cost effective to represent probabilistic dependencies among observations in the lymph node domain, and (2) the diagnostic accuracy of the more complex version of Pathfinder is at least as good as that of the Pathfinder expert. In addition, the study illustrates how informal and decision-theoretic metrics for performance complement one another.
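To make the conditional-independence assumption concrete, here is a small worked sketch (with invented probabilities, not Pathfinder's actual tables) computing a posterior over mutually exclusive, exhaustive diseases when every finding is treated as conditionally independent given the disease.

```python
# Toy posterior under the "all findings conditionally independent" assumption.
# Priors and likelihoods are invented for illustration only.
priors = {"disease_A": 0.6, "disease_B": 0.3, "disease_C": 0.1}

# P(finding present | disease), one entry per observed finding.
likelihoods = {
    "disease_A": {"f1": 0.9, "f2": 0.05},
    "disease_B": {"f1": 0.4, "f2": 0.70},
    "disease_C": {"f1": 0.1, "f2": 0.10},
}
observed = ["f1", "f2"]

# Bayes' rule with mutually exclusive, exhaustive diseases:
# P(d | findings) is proportional to P(d) * product_i P(finding_i | d)
unnorm = {}
for d, prior in priors.items():
    p = prior
    for f in observed:
        p *= likelihoods[d][f]
    unnorm[d] = p

z = sum(unnorm.values())
posterior = {d: p / z for d, p in unnorm.items()}
print(posterior)   # disease_B becomes the most probable after both findings
```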

5.
In follow-up clinical studies, the main time end-point is failure from a specific starting point (e.g. treatment, surgery). A deeper investigation concerns the causes of failure. Statistical analysis typically focuses on the cause-specific hazard functions of possibly censored survival data. In the framework of discrete time models and competing risks, a multilayer perceptron has already been proposed as an extension of generalized linear models with multinomial errors using a non-linear predictor (PLANNCR). Following standard practice, weight decay was adopted to modulate model complexity. Here, a genetic algorithm is considered for the complexity control of PLANNCR, allowing each parameter of the model to be regularized independently, with the ICOMP information criterion used as the fitness function. To demonstrate the criticality and the benefits of the technique, an application to a case series of 1793 women with primary breast cancer without axillary lymph node involvement is presented.

6.
Guo Songrui, Tan Guanghua, Pan Huawei, Chen Lin, Gao Chunming. Multimedia Tools and Applications, 2017, 76(6): 8677-8694

Shape alignment or estimation under occlusion is one of the most challenging tasks in computer vision. Most previous works treat occlusion as noise or as part models, which usually leads to low accuracy or inefficiency. This paper proposes an efficient and accurate regression-based algorithm for face alignment. In this framework, local and global regressions are used iteratively to train a series of random forests in a cascaded manner. In both the training and testing processes, each step consists of two layers. In the first layer, a set of highly discriminative local features is extracted from local regions according to the locality principle. Regression forests are trained for each facial landmark independently using those local features; the leaf nodes of the regression trees are then encoded by a histogram statistic method and the shape is estimated by a linear regression matrix. In the second layer, our proposed global features are generated and used to train a random fern that maintains the global shape constraints. Experiments show that our method is fast while achieving the same or slightly lower accuracy than state-of-the-art methods under occlusion. To gain higher accuracy we use multiple random shapes for initialization, which may slightly reduce computational efficiency as a trade-off.


7.
罗键, 武鹤. 控制与决策 (Control and Decision), 2016, 31(4): 635-639

In a competitive environment, limited resources lead to conflicts of interest among agents, so it is necessary to model the opponent and accurately predict its behaviour in order to formulate strategies favourable to oneself. An interactive dynamic influence diagram is used to model an unknown opponent: the opponent's candidate models are stored in the model node and their beliefs are updated over time. Combining the observed opponent actions, candidate models are progressively eliminated from the model space using the observation-action sequence, ultimately identifying the opponent's true model. Experimental results show that the proposed algorithm performs well, verifying its practicality.
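As a simplified illustration of how candidate-model beliefs could be updated and pruned from an observation-action sequence (the paper's interactive dynamic influence diagram machinery is not reproduced; the models and probabilities below are hypothetical):

```python
# Toy sketch: update beliefs over candidate opponent models from observed actions
# and prune models that become inconsistent with the observation-action sequence.
candidate_models = {
    # model name -> P(action | observation) table (hypothetical)
    "aggressive": {"low_resource": {"attack": 0.8, "wait": 0.2}},
    "cautious":   {"low_resource": {"attack": 0.1, "wait": 0.9}},
}
belief = {"aggressive": 0.5, "cautious": 0.5}

def update(belief, observation, action, eps=1e-3):
    """Bayes update; models whose posterior belief falls below eps are eliminated."""
    new = {m: belief[m] * candidate_models[m][observation][action] for m in belief}
    z = sum(new.values())
    return {m: p / z for m, p in new.items() if p / z > eps}

for obs, act in [("low_resource", "attack"), ("low_resource", "attack")]:
    belief = update(belief, obs, act)
    print(belief)   # belief shifts toward the 'aggressive' model
```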


8.
A novel pairwise decision tree (PDT) framework is proposed for hyperspectral classification, in which no partitioning or clustering is needed and the original C-class problem is divided into a set of two-class problems. The top of the tree includes all original classes. Each internal node consists of either a set of class pairs or a set of class pairs plus a single class. The pairs are selected by the proposed sequential forward selection (SFS) or sequential backward selection (SBS) algorithms. The current node is divided into next-stage nodes by excluding one class of each selected pair. During classification, an unlabelled pixel is recursively passed to the next node by excluding the less similar class of each node pair, until the classification result is obtained. Experiments on an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data set for a nine-class problem demonstrate the effectiveness of the proposed framework compared with the single-stage classifier approach, the pairwise classifier framework and the binary hierarchical classifier (BHC).
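A schematic sketch of the pairwise-elimination idea at classification time is shown below; it uses per-class mean spectra as a crude similarity measure and a fixed pair order, so the SFS/SBS pair selection and all data are assumptions rather than the paper's method.

```python
# Toy sketch: classify a pixel by recursively eliminating the less similar class
# of each class pair, using per-class mean spectra as a crude similarity measure.
import numpy as np

def classify_pairwise(pixel, class_means, pairs):
    """class_means: dict label -> mean spectrum; pairs: list of (label_a, label_b)."""
    remaining = set(class_means)
    for a, b in pairs:
        if a in remaining and b in remaining:
            # Keep the class whose mean spectrum is closer to the pixel.
            da = np.linalg.norm(pixel - class_means[a])
            db = np.linalg.norm(pixel - class_means[b])
            remaining.discard(a if da > db else b)
    # After all pairs are processed, pick the closest remaining class.
    return min(remaining, key=lambda c: np.linalg.norm(pixel - class_means[c]))

rng = np.random.default_rng(1)
class_means = {c: rng.normal(c, 0.1, 10) for c in range(4)}   # 4 synthetic classes
pairs = [(0, 1), (2, 3), (0, 2), (1, 3)]                      # assumed pair order
pixel = rng.normal(2, 0.1, 10)                                # spectrum near class 2
print("predicted class:", classify_pairwise(pixel, class_means, pairs))
```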

9.
Fan Chunxiao, Li Fu, Jiao Yang, Liu Xueliang. Multimedia Tools and Applications, 2021, 80(16): 24173-24183

With the development of AR and VR, depth images are widely used for facial expression analysis and recognition. To reduce storage size and save bandwidth, an efficient compression framework is desired. In this paper, we propose a novel lossless compression framework for facial depth images in expression recognition. The framework removes redundancy from the facial depth images in two steps: a data preparation operation and a bitstream encoding operation. In the data preparation operation, the original image is represented by the parts that are the same and the parts that differ between the left and right sides. In the bitstream encoding operation, these parts are compressed to produce the final bitstream. The proposed framework is implemented and examined on the BU-3DFE database. Experimental results show that the proposed technique outperforms existing lossless compression frameworks in terms of compression efficiency, and the average data size is reduced to 25.27% by the proposed framework.
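The following is a loose sketch of the "same/different parts between the left and right sides" idea followed by entropy coding, with zlib as a stand-in encoder; the paper's actual bitstream format is not reproduced and all details are assumptions.

```python
# Toy sketch: represent a depth image by its left half plus the difference between
# the right half and the mirrored left half, then losslessly compress with zlib.
import numpy as np, zlib

rng = np.random.default_rng(0)
depth = rng.integers(0, 255, (128, 128), dtype=np.uint8)
depth[:, 64:] = np.fliplr(depth[:, :64])   # roughly symmetric, as for frontal faces
depth[40:50, 90:100] += 3                  # small asymmetric region (expression change)

left = depth[:, :64]
residual = depth[:, 64:].astype(np.int16) - np.fliplr(left).astype(np.int16)

payload = zlib.compress(left.tobytes() + residual.tobytes(), 9)
print("raw bytes:", depth.nbytes, "compressed bytes:", len(payload))

# Lossless check: the right half is exactly recoverable from left + residual.
rec_right = (np.fliplr(left).astype(np.int16) + residual).astype(np.uint8)
assert np.array_equal(np.hstack([left, rec_right]), depth)
```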


10.
Embar Varun, Srinivasan Sriram, Getoor Lise. Machine Learning, 2021, 110(7): 1847-1866

Statistical relational learning (SRL) and graph neural networks (GNNs) are two powerful approaches for learning and inference over graphs. Typically, they are evaluated in terms of simple metrics such as accuracy over individual node labels. Complex aggregate graph queries (AGQ) involving multiple nodes, edges, and labels are common in the graph mining community and are used to estimate important network properties such as social cohesion and influence. While graph mining algorithms support AGQs, they typically do not take into account uncertainty, or when they do, make simplifying assumptions and do not build full probabilistic models. In this paper, we examine the performance of SRL and GNNs on AGQs over graphs with partially observed node labels. We show that, not surprisingly, inferring the unobserved node labels as a first step and then evaluating the queries on the fully observed graph can lead to sub-optimal estimates, and that a better approach is to compute these queries as an expectation under the joint distribution. We propose a sampling framework to tractably compute the expected values of AGQs. Motivated by the analysis of subgroup cohesion in social networks, we propose a suite of AGQs that estimate the community structure in graphs. In our empirical evaluation, we show that by estimating these queries as an expectation, SRL-based approaches yield up to a 50-fold reduction in average error when compared to existing GNN-based approaches.
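A stripped-down sketch of computing an aggregate graph query as an expectation by sampling unobserved node labels is shown below; it samples from per-node marginals rather than from the joint distribution of the paper's SRL models, so it is illustrative only and all data are invented.

```python
# Toy sketch: expected value of an aggregate graph query (here, the number of edges
# whose endpoints share a label) under uncertainty in unobserved node labels.
import random

random.seed(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
observed = {0: "A"}                              # known labels
marginals = {1: {"A": 0.7, "B": 0.3},            # P(label) for unobserved nodes
             2: {"A": 0.2, "B": 0.8},
             3: {"A": 0.5, "B": 0.5}}

def sample_labels():
    labels = dict(observed)
    for n, dist in marginals.items():
        labels[n] = random.choices(list(dist), weights=list(dist.values()))[0]
    return labels

def same_label_edges(labels):
    return sum(labels[u] == labels[v] for u, v in edges)

n_samples = 10_000
expected = sum(same_label_edges(sample_labels()) for _ in range(n_samples)) / n_samples
print("E[#same-label edges] ~", expected)
```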


11.

Viral infection causes a wide variety of human diseases, including cancer and COVID-19. Viruses invade host cells and associate with host molecules, potentially disrupting normal host functions and leading to fatal diseases. Predicting novel viral genomes is crucial for understanding complex viral diseases such as AIDS and Ebola. Most existing computational techniques classify viral genomes, but the efficiency of the classification depends solely on the structural features extracted. State-of-the-art DNN models achieve excellent performance through automatic extraction of classification features, but their degree of explainability is relatively poor. The proposed CNN- and CNN-LSTM-based methods (EdeepVPP and EdeepVPP-hybrid) automatically extract features during model training. EdeepVPP also provides model interpretability, extracting through its learned filters the most important patterns associated with viral genomes; it is an interpretable CNN model that extracts vital, biologically relevant patterns (features) from the feature maps of viral sequences. The EdeepVPP-hybrid predictor outperforms all existing methods, achieving a mean AUC-ROC of 0.992 and AUC-PR of 0.990 on 19 human metagenomic contig experiment datasets using 10-fold cross-validation. We evaluate the ability of the CNN filters to detect patterns via their high average activation values, and to further assess the robustness of the EdeepVPP model we perform leave-one-experiment-out cross-validation. The model can work as a recommendation system to further analyse raw sequences labelled as 'unknown' by alignment-based methods. We show that our interpretable model can extract, through its learned filters, the patterns that are most important for predicting virus sequences.
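As a rough, hypothetical sketch of the kind of 1-D convolutional feature extractor such a viral-sequence predictor might use (this is not the EdeepVPP architecture; the encoding and layer sizes are assumptions):

```python
# Toy sketch: one-hot encode a DNA contig and apply a small Conv1d feature extractor.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    x = torch.zeros(len(BASES), len(seq))
    for i, b in enumerate(seq):
        if b in BASES:
            x[BASES.index(b), i] = 1.0
    return x.unsqueeze(0)                 # shape (1, 4, length)

model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=8),      # learned motif-like filters over the sequence
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),              # strongest activation per filter
    nn.Flatten(),
    nn.Linear(16, 1),                     # viral vs non-viral score
    nn.Sigmoid(),
)

contig = "ATGCGTACGTTAGCATGCCGTA" * 4      # placeholder contig
print("P(viral) =", model(one_hot(contig)).item())
```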


12.
Histopathology is the clinical gold standard for tumour diagnosis and is directly related to the choice of treatment and the assessment of prognosis. Clinical demand poses two challenges to histopathological diagnosis: quality and efficiency. Histopathological diagnosis involves a heavy workload of slide reading and depends strongly on the pathologist's experience, yet training a pathologist takes a long time, there is a large shortage of qualified personnel, and pathology departments are generally overloaded. Deep learning based computer-aided histopathological diagnosis methods that have emerged in recent years can help physicians improve both the accuracy and the speed of diagnosis and alleviate the shortage of pathology resources, and have therefore attracted wide attention from researchers. This paper presents a preliminary survey of deep learning research in histopathology. It introduces the medical background of histopathological diagnosis and summarizes the main datasets in the field, focusing on the widely studied pathology data and analysis tasks for breast cancer, lymph node metastasis and colon cancer. Three technical problems that need to be solved are identified: data storage and processing, model design and optimization, and learning from small samples with weak annotations. Around these problems, related work on data storage, data preprocessing, classification models, segmentation models, transfer learning and multiple instance learning is reviewed. Finally, the current state of deep learning methods for histopathological diagnosis is summarized and possible directions for improving current research are pointed out.

13.
A distributed filtering algorithm based on adaptive weighted fusion. (Cited 1 time: 0 self-citations, 1 by others)
To address the problem that, in sensor networks with packet loss, each sensor node has a different degree of confidence in its estimate of the target, a distributed filtering algorithm based on adaptive weighted fusion is proposed. Taking into account a node's influence in the network and its node attributes, the node importance is linearly weighted with the degree of support among the observation data of the sensor network nodes to obtain each sensor node's confidence in its estimate of the target. The fusion weights formed from this confidence are then introduced into the consensus protocol for the node state estimates, updating each sensor node's estimate of the target state and improving both the estimation accuracy of the distributed filtering algorithm and the consistency of the sensor node estimates. Simulation results verify the effectiveness of the proposed method.
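A highly simplified sketch of the confidence-weighted consensus step described above is given below; the node importance values, the support measure and the single update step are stand-ins, not the paper's algorithm.

```python
# Toy sketch: fuse neighbours' state estimates with confidence weights built from
# node importance and inter-node support, then run one consensus update step.
import numpy as np

estimates = {0: np.array([1.00, 0.50]),   # each node's local estimate of the target
             1: np.array([1.10, 0.45]),
             2: np.array([0.60, 0.90])}   # node 2 is an outlier (e.g. packet loss)
importance = {0: 0.4, 1: 0.4, 2: 0.2}     # assumed node importance in the topology

def support(i, j):
    """Support between two nodes: high when their estimates agree."""
    return np.exp(-np.linalg.norm(estimates[i] - estimates[j]))

def confidence(i, alpha=0.5):
    """Linear combination of node importance and average support from other nodes."""
    s = np.mean([support(i, j) for j in estimates if j != i])
    return alpha * importance[i] + (1 - alpha) * s

weights = {i: confidence(i) for i in estimates}
z = sum(weights.values())
fused = sum((weights[i] / z) * estimates[i] for i in estimates)

# One consensus step: each node moves its estimate toward the fused value.
step = 0.6
updated = {i: estimates[i] + step * (fused - estimates[i]) for i in estimates}
print("fused:", fused)
print("updated:", updated)
```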

14.

The broadcast storm problem causes redundancy, contention and collision of messages in a network, particularly in vehicular ad hoc networks (VANETs), where the number of participants can grow arbitrarily. This paper presents a solution to this problem in which one node is designated as a master through an election process. An algorithm is proposed for asynchronous VANETs to select the master node, where the participants (i.e., vehicles) can communicate with each other directly (single-hop). The proposed algorithm is extrema-finding: the node with the maximum signal strength is elected as the master, and each vehicle continues communicating with the master as long as the master node keeps its signal strength at the highest level and remains operational. The paper further presents a Petri net-based model of the proposed algorithm for evaluation, which, to the best of our knowledge, is the first such modelling of a leader election algorithm in VANETs. Verification of the proposed algorithm is carried out through the state space analysis technique.
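A toy sketch of the extrema-finding election described above, where single-hop vehicles elect the node with the highest signal strength as master and re-elect when the master degrades or fails; message passing, timing and the re-election trigger are assumptions.

```python
# Toy sketch: single-hop master election by maximum signal strength, with
# re-election when the current master's signal strength is no longer the highest.
vehicles = {"v1": 0.62, "v2": 0.88, "v3": 0.74}    # node id -> signal strength

def elect_master(nodes):
    """Extrema-finding: the node with maximum signal strength becomes master."""
    return max(nodes, key=nodes.get)

master = elect_master(vehicles)
print("master:", master)                            # v2

# The master's signal drops (or it fails) -> trigger a new election.
vehicles["v2"] = 0.30
if vehicles[master] < max(vehicles.values()):
    master = elect_master(vehicles)
print("new master:", master)                        # v3
```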


15.

Pruning is an effective technique for improving the generalization performance of decision trees. However, most existing methods are time-consuming or unsuitable for small datasets. In this paper, a new pruning algorithm based on the structural risk of leaf nodes is proposed. The structural risk is measured by the product of the accuracy and the volume (PAV) of a leaf node. Comparison experiments with the cost-complexity pruning with cross-validation (CCP-CV) algorithm on several benchmark datasets show that PAV pruning greatly reduces the time cost of CCP-CV, while the test accuracy of PAV pruning is close to that of CCP-CV.
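As a small sketch of how a leaf could be scored by the product of its accuracy and its volume, the snippet below computes PAV for a parent node and for the leaves of a candidate split; the volume computation and the way the scores would feed a pruning decision are simplified assumptions, not the paper's exact formulation.

```python
# Toy sketch: score nodes by accuracy x volume (PAV) for a pruning decision.
import numpy as np

def pav(X, y):
    """Accuracy of the majority label times the axis-aligned bounding-box volume."""
    majority = np.bincount(y).argmax()
    accuracy = np.mean(y == majority)
    extents = X.max(axis=0) - X.min(axis=0)
    volume = float(np.prod(np.maximum(extents, 1e-6)))   # avoid zero volume
    return accuracy * volume

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
y = (X[:, 0] > 0.5).astype(int)            # labels depend only on the first feature

split = X[:, 1] > 0.5                      # a candidate split on the second feature
parent_pav = pav(X, y)
children_pav = pav(X[split], y[split]) + pav(X[~split], y[~split])

print(f"parent PAV = {parent_pav:.3f}, children PAV sum = {children_pav:.3f}")
# A pruning rule would compare these structural-risk scores to decide whether
# the split is kept or its leaves are collapsed back into the parent.
```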


16.

In this paper, an asymmetric hybrid cryptosystem utilizing a four-dimensional (4D) hyperchaotic framework, by means of coherent superposition and random decomposition in the hybrid multi-resolution wavelet domain, is put forward. The 4D hyperchaotic framework is used to generate the permutation keystream for a pixel-swapping procedure. The hybrid multi-resolution wavelet is formed by combining the Walsh transform with fractional Fourier transforms of various orders. The parameters and initial conditions of the 4D hyperchaotic framework, together with the fractional orders, enlarge the key-space and consequently give additional strength to the proposed cryptosystem. The proposed cryptosystem is nonlinear in nature and its extended key-space resists brute-force attack. The scheme is validated on greyscale images, and computer-based simulations have been executed to validate its robustness against different types of attacks. Results demonstrate that, in addition to offering higher protection against noise and occlusion attacks, the proposed cryptosystem is also resistant to the special attack.
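For flavour, here is a tiny sketch of how a chaotic keystream can drive a reversible pixel-swapping permutation; a 1-D logistic map is used as a simple stand-in for the paper's 4D hyperchaotic system, so this is illustrative only.

```python
# Toy sketch: chaotic keystream -> permutation -> reversible pixel swapping.
# A logistic map stands in for the 4D hyperchaotic system of the paper.
import numpy as np

def chaotic_permutation(n, x0=0.3141, r=3.99, burn_in=100):
    x, stream = x0, []
    for i in range(n + burn_in):
        x = r * x * (1.0 - x)             # logistic map iteration
        if i >= burn_in:
            stream.append(x)
    return np.argsort(stream)             # ranking the keystream gives a permutation

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
perm = chaotic_permutation(img.size)

scrambled = img.flatten()[perm].reshape(img.shape)     # encryption-side pixel swap
restored = np.empty_like(scrambled.flatten())
restored[perm] = scrambled.flatten()                   # inverse permutation
assert np.array_equal(restored.reshape(img.shape), img)
print(scrambled)
```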


17.

The Internet of Things (IoT) is the backbone of smart applications and attracts a wide range of research on state-of-the-art network applications. Extensive research on sensor networks has already left many sensing-capable devices deployed in day-to-day life, so implementing new sensor networks for every smart application is unnecessary, and many researchers reuse existing networks for their requirements. In this case, techniques for identifying and registering existing sensing devices are in demand. This paper proposes a hybrid framework for sensor identification and registration (HSIR) for new IoT applications. HSIR is aimed at user-friendliness in the IoT and addresses the scalability requirements of IoT applications. The model uses content- and context-based multicast communication instead of broadcast to reduce the energy and time consumed in sensor identification, and it uses a public key to register the new network for application requirements. The behaviour of the proposed model has been evaluated realistically with simulations and validated by comparison with other models.


18.

Manual analysis of indirect-immunofluorescence (IIF) human epithelial type-2 (HEp-2) cell images for the diagnosis of auto-immune diseases is a subjective and time-consuming process that is also prone to human error. The present work proposes an automatic capsule neural network (CapsNet) based framework for HEp-2 cell image classification that compensates for deficiencies of the prominent convolutional neural network (CNN) based frameworks. In CNNs, the spatial relationships between the features of the anti-nuclear antibody (ANA) patterns found in IIF HEp-2 cell images (ANA-IIF images) are lost, which increases the chance of false positives. In the proposed CapsNet-based model, the max-pooling layer is replaced with a dynamic routing algorithm and scalar outputs are replaced with vector outputs, enabling a richer representation of each feature without losing its spatial relationship to the other features. The proposed framework recognizes ANA-IIF images with an average accuracy of 95.00% under 10-fold cross-validation. Experimental results also show that the proposed model performs better than other CNN-based classification models on the human epithelial cell image classification task.
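A compact numpy sketch of the routing-by-agreement (dynamic routing) step that replaces max-pooling in capsule networks is shown below, following the commonly published algorithm of Sabour et al.; the dimensions are arbitrary and this is not the paper's exact implementation.

```python
# Minimal dynamic routing-by-agreement between one capsule layer and the next.
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Non-linearity that keeps vector orientation and squashes its length into [0, 1)."""
    n2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iter=3):
    """u_hat: prediction vectors of shape (num_in, num_out, dim_out)."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                              # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)     # coupling coefficients
        s = np.einsum("io,iod->od", c, u_hat)                    # weighted sum per output capsule
        v = squash(s)                                            # output capsule vectors
        b += np.einsum("iod,od->io", u_hat, v)                   # agreement updates the logits
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(32, 10, 16))        # 32 input capsules -> 10 output capsules
v = dynamic_routing(u_hat)
print("output capsule lengths:", np.linalg.norm(v, axis=-1).round(3))
```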


19.

A hybrid model for interval time series based on the autoregressive integrated moving average (ARIMA) model and an artificial neural network (ANN) is proposed, and the hybrid model is applied separately to the series of interval midpoints and the series of interval radii. Simulated interval series are generated with a Monte Carlo method, modelling and forecasting experiments are carried out with the ARIMA, ANN and hybrid models, and the model errors are tested statistically. Finally, the three methods are applied to model and forecast the interval series of traction energy consumption of a metro line in city H. The experimental results show that both the fitting accuracy and the forecasting performance of the hybrid model are superior to those of the single models.
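A rough sketch of the midpoint/radius decomposition with an ARIMA-plus-ANN hybrid (ARIMA on each series, an MLP on the ARIMA residuals) is given below, using statsmodels and scikit-learn on synthetic data; the orders, lags and network sizes are placeholders, not the paper's settings.

```python
# Toy sketch: hybrid ARIMA + ANN forecast of an interval series, modelling the
# midpoint and radius series separately (the ANN learns the ARIMA residuals).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(series, order=(1, 0, 1), lags=3):
    arima = ARIMA(series, order=order).fit()
    resid = arima.resid
    # Train an MLP to predict the residual from its own recent lags.
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    y = resid[lags:]
    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(X, y)
    linear_part = arima.forecast(1)[0]
    nonlinear_part = mlp.predict(resid[-lags:].reshape(1, -1))[0]
    return linear_part + nonlinear_part

rng = np.random.default_rng(0)
t = np.arange(200)
mid = 10 + 0.02 * t + rng.normal(0, 0.5, 200)                  # interval midpoint series
rad = 1.0 + 0.3 * np.sin(t / 10) + rng.normal(0, 0.1, 200)     # interval radius series

m, r = hybrid_forecast(mid), hybrid_forecast(rad)
print(f"forecast interval: [{m - r:.2f}, {m + r:.2f}]")
```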


20.
To address the fact that non-line-of-sight (NLOS) propagation between sensor nodes in indoor environments degrades localization accuracy, an NLOS node localization method based on wireless sensor networks is studied. An objective function is constructed from the measurement models of the beacon nodes and the line-of-sight propagation probabilities under different environments, and a particle swarm optimization algorithm is used to estimate the position of the unknown node, with the position computed by the least-squares method used as the initial position of the particles. Simulation results show that, compared with the least-squares, residual-weighting and RANSAC algorithms, the proposed algorithm better mitigates NLOS errors and achieves higher localization accuracy.
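A simplified sketch of range-based localization with particle swarm optimization, seeded at a linearized least-squares estimate, is shown below; the objective omits the paper's LOS-probability weighting and all parameter values are assumptions.

```python
# Toy sketch: PSO refinement of a node position from noisy (possibly NLOS-biased)
# range measurements, initialized at a linear least-squares estimate.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
true_pos = np.array([3.0, 6.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.2, 4)
ranges[1] += 1.5                                # NLOS bias on one measurement

def ls_estimate(anchors, ranges):
    """Linearized least-squares trilateration used to seed the particles."""
    a_ref, d_ref = anchors[-1], ranges[-1]
    A = 2 * (a_ref - anchors[:-1])
    b = (ranges[:-1] ** 2 - d_ref ** 2
         - np.sum(anchors[:-1] ** 2, axis=1) + np.sum(a_ref ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

def cost(p):
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

# Plain PSO with particles spread around the least-squares seed.
n, seed = 30, ls_estimate(anchors, ranges)
pos = seed + rng.normal(0, 1.0, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()
print("LS seed:", seed.round(2), " PSO estimate:", gbest.round(2))
```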
