Similar Documents
20 similar documents found (search time: 31 ms)
1.
For the novel coronavirus discovered in Wuhan, China in December 2019, RT-PCR testing suffers from a high false-negative rate and long turnaround times, and studies have shown that computed tomography (CT) has therefore become an important auxiliary means of diagnosing and treating COVID-19 pneumonia. Because few public COVID-19 CT datasets are currently available, this work proposes using a conditional generative adversarial network for data augmentation to obtain a larger CT dataset and thereby reduce the risk of overfitting. It further proposes an improved U-Net based on BIN residual blocks for image segmentation, combined with a multilayer perceptron for classification. Compared with network models such as AlexNet and GoogleNet, the proposed BUF-Net achieves the best performance, reaching 93% accuracy. Visualizing the system's output with Grad-CAM illustrates more intuitively the important role of CT imaging in diagnosing COVID-19. Applying deep learning to medical imaging can help radiologists reach more effective diagnoses.
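The conditional GAN used for augmentation conditions generation on a class label. A minimal sketch of that conditioning step (the dimensions and the one-hot scheme are illustrative assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_latent(noise_dim, num_classes, label):
    # Noise vector concatenated with a one-hot class code: the label
    # conditioning that lets a cGAN generate samples of a chosen class.
    z = rng.normal(size=noise_dim)
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return np.concatenate([z, one_hot])

# e.g. a latent for the "COVID-19 positive" class (label 1 of 2)
latent = conditional_latent(noise_dim=100, num_classes=2, label=1)
print(latent.shape)  # (102,)
```

The generator maps such conditioned latents to synthetic CT slices of the requested class; the discriminator sees the same label alongside each real or generated image.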

2.
The outbreak of the novel coronavirus has spread worldwide, and millions of people have been infected. Image classification was one of the first application areas of deep learning and has contributed significantly to medical image analysis: one or more images are used as input, and diagnostic variables (such as the presence of disease) are produced as output. Early-stage detection of critical COVID-19 cases is essential, and X-ray scans are used in clinical studies for the early diagnosis of COVID-19 and pneumonia. Deep convolutional neural networks (CNNs) are used to extract discriminative features from these modalities. This study proposes a siamese convolutional neural network model (COVID-3D-SCNN) for the automated detection of COVID-19 from X-ray scans. To extract useful features, the proposed approach uses three consecutive models working in parallel. We acquired 575 COVID-19, 1200 non-COVID, and 1400 pneumonia images, all publicly available, and used augmentation to enlarge the dataset. The findings suggest that the proposed method outperforms comparable studies on the three-way task (COVID-19 vs. non-COVID-19 vs. pneumonia), with 96.70% accuracy, 95.55% specificity, and 96.62% sensitivity.

3.
The COVID-19 virus has a fatal effect on lung function, and given how rapidly it spreads, early detection is essential. Radiographic images have already been used by researchers for the early diagnosis of COVID-19. Although several existing studies report very good performance with either X-ray or computed tomography (CT) images, to the best of our knowledge no work has reported the combined performance of both X-ray and CT images; increasing accuracy with higher scalability is thus the main concern of recent research. In this article, an integrated deep learning model is developed for early detection of COVID-19 using both chest X-ray and CT images. The lack of publicly available COVID-19 data motivated the authors to combine three benchmark datasets into a single large dataset. The proposed model applies various transfer learning techniques for feature extraction to find the best-suited backbone, and a capsule network then categorizes each sub-dataset into COVID-positive and normal patients. The experimental results show that the best performance is achieved by ResNet50 paired with a capsule network as the extractor-classifier pair on the combined dataset, which comprises 575 X-ray images and 930 CT images. The proposed model achieves an accuracy of 98.2% with X-ray images and 97.8% with CT images, an average of 98%.

4.
Coronavirus disease (COVID-19) is a pandemic that has caused thousands of casualties and widespread impact across the world. Most countries face a shortage of COVID-19 test kits in hospitals due to the daily increase in cases. Early detection of COVID-19 can protect people from severe infection; unfortunately, COVID-19 can be misdiagnosed as pneumonia or another illness, which can lead to patient death. Therefore, in order to limit the spread of COVID-19 in the population, it is necessary to implement an automated early diagnostic system as a rapid alternative. Several researchers have done very well in detecting COVID-19; however, most of their models suffer from lower accuracy and overfitting, which makes early screening difficult. Transfer learning is the most successful technique for solving this problem with higher accuracy, and it is well suited to medical imaging given the limited availability of data. In this paper, we studied the feasibility of applying transfer learning with our own classifier added to automatically classify COVID-19. We propose a CNN model based on deep transfer learning using six different pre-trained architectures: VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0. A total of 3886 chest X-rays (1200 COVID-19 cases, 1341 healthy, and 1345 viral pneumonia cases) were used to study the effectiveness of the proposed CNN model, and a comparative analysis on the three-class chest X-ray dataset was carried out to find the most suitable model. Experimental results show that the proposed CNN model based on VGG16 accurately diagnosed COVID-19 patients with 97.84% accuracy, 97.90% precision, 97.89% sensitivity, and a 97.89% F1-score.
Evaluation on the test data shows that the proposed model produces the highest accuracy among the CNNs and appears to be the most suitable choice for COVID-19 classification. We believe that in this pandemic situation, this model will support healthcare professionals in improving patient screening.
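The transfer-learning setup described above (a frozen pretrained backbone plus a custom trainable classifier head) can be sketched in plain NumPy. The fixed random projection below merely stands in for a real pretrained CNN such as VGG16, and the three-class data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen pretrained backbone (a real system would use
# e.g. VGG16 minus its head); this projection is fixed and never updated.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)  # ReLU on the projected "pixels"

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic 3-class "images": 90 samples of 64 pixels with class-dependent means
X = rng.normal(size=(90, 64)) + np.repeat(np.eye(3), 30, axis=0) @ rng.normal(size=(3, 64))
y = np.repeat(np.arange(3), 30)

W_head = np.zeros((16, 3))            # the only trainable part
F = extract_features(X)
for _ in range(300):
    p = softmax(F @ W_head)
    p[np.arange(len(y)), y] -= 1.0    # gradient of softmax cross-entropy
    W_head -= 0.01 * F.T @ p / len(y)

acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Only `W_head` receives gradient updates; the backbone's weights stay untouched, which is what makes transfer learning viable with limited medical data.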

5.
Huang  Zhenxing  Liu  Xinfeng  Wang  Rongpin  Zhang  Mudan  Zeng  Xianchun  Liu  Jun  Yang  Yongfeng  Liu  Xin  Zheng  Hairong  Liang  Dong  Hu  Zhanli 《Applied Intelligence》2021,51(5):2838-2849

The novel coronavirus (COVID-19) pneumonia has become a serious health challenge in countries worldwide. Many radiological findings have shown that X-ray and CT imaging scans are an effective solution for assessing disease severity during the early stage of COVID-19. Many artificial intelligence (AI)-assisted diagnosis works have been rapidly proposed to solve this classification problem and determine whether a patient is infected with COVID-19. Most of these works design networks that classify a single CT image; however, this approach ignores prior information such as the patient's clinical symptoms. Moreover, making a more specific diagnosis of clinical severity, such as slight or severe, is worthy of attention and helps determine better follow-up treatments. In this paper, we propose a deep learning (DL) based dual-task network, named FaNet, that performs both rapid diagnosis and severity assessment for COVID-19 based on the combination of 3D CT imaging and clinical symptoms. 3D CT image sequences provide more spatial information than single CT images, and the clinical symptoms, typically quick and easy for radiologists to obtain, serve as prior information that improves assessment accuracy. We therefore designed a network that considers both CT image information and existing clinical symptom information, and conducted experiments on data from 416 patients, including 207 normal chest CT cases and 209 confirmed COVID-19 cases. The experimental results demonstrate the effectiveness of the additional symptom prior as well as the network architecture design. The proposed FaNet achieved an accuracy of 98.28% on diagnosis and 94.83% on severity assessment on the test datasets. In the future, we will collect more COVID-19 CT patient data and seek further improvement.

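FaNet combines 3D CT image features with clinical-symptom priors; the abstract does not specify the fusion mechanism, but a common scheme is simple concatenation before the classification head, as in this hypothetical sketch (all dimensions and symptom flags are illustrative):

```python
import numpy as np

def fuse(ct_features, symptom_vector):
    """Late fusion by concatenation: CT-derived features joined with an
    encoded clinical-symptom vector before the classification head."""
    return np.concatenate([ct_features, symptom_vector])

ct_feat = np.random.default_rng(0).normal(size=128)  # from the 3D CT branch
symptoms = np.array([1., 0., 1., 0., 0.])            # e.g. binary symptom flags
joint = fuse(ct_feat, symptoms)
print(joint.shape)  # (133,)
```

The downstream classifier then sees both modalities at once, which is how the symptom prior can sharpen the image-only prediction.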

6.
Objective: The COVID-19 pandemic has swept the globe. To diagnose pneumonia patients quickly and localize lung infection regions, many detection networks have been proposed, but most handle only one task, either diagnosis or segmentation. This paper proposes a joint diagnosis and segmentation network with multi-head attention that simultaneously performs pneumonia classification on chest X-rays and segmentation of COVID-19 infected regions. Method: The network consists of three parts. A dual-path embedding layer extracts shallow intuitive features and deep abstract features of the chest X-ray through two different image embedding schemes; a Transformer module jointly considers the extracted shallow and deep features; and a segmentation decoder upsamples the feature maps to output the segmented region. For joint training, a hybrid loss function dynamically balances classification and segmentation: the classification loss is the sum of a contrastive classification loss and a cross-entropy loss, and the segmentation loss is a binary cross-entropy loss. Result: Experiments on data merged from six public datasets show that the proposed network achieves 95.37% precision, 96.28% recall, a 95.95% F1 score, and a 93.88% kappa coefficient, surpassing mainstream networks such as ResNet50, VGG16 (Visual Geometry Group), and Inception_v3 in diagnostic classification. For COVID-19 lesion segmentation, compared with the popular U-Net and its variants, it attains the highest precision (95.96%), excellent sensitivity (78.89%), the best Dice coefficient (76.68%), and the best AUC (area under the ROC curve, 98.55%). In terms of efficiency, it outputs one diagnosis-and-segmentation result every 0.56 s. Conclusion: The joint network adopts a Transformer architecture, attending to global features via self-attention and jointly considering deep abstract and shallow features via cross-attention, and exhibits excellent classification and segmentation performance.
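The hybrid loss described (a classification loss plus a per-pixel binary cross-entropy segmentation loss, balanced dynamically) can be sketched as follows. For brevity the classification term here is plain cross-entropy without the contrastive component, and `alpha` is an assumed balancing weight, not the paper's exact schedule:

```python
import numpy as np

def cross_entropy(probs, label):
    """Classification loss for one sample: negative log-probability
    of the true class."""
    return -np.log(probs[label])

def binary_cross_entropy(pred, target):
    """Per-pixel segmentation loss, averaged over the mask."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

def joint_loss(cls_probs, cls_label, seg_pred, seg_mask, alpha):
    """Weighted sum of the two task losses; alpha balances them."""
    return alpha * cross_entropy(cls_probs, cls_label) \
        + (1 - alpha) * binary_cross_entropy(seg_pred, seg_mask)

probs = np.array([0.1, 0.8, 0.1])          # predicted class distribution
mask = np.array([[1.0, 0.0], [0.0, 1.0]])  # ground-truth infection mask
pred = np.array([[0.9, 0.2], [0.1, 0.8]])  # predicted mask
loss = joint_loss(probs, 1, pred, mask, alpha=0.5)
print(round(loss, 4))  # 0.1937
```

Making `alpha` depend on training progress (or on the relative magnitudes of the two losses) gives the dynamic balancing the abstract refers to.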

7.
Deep convolutional neural networks (CNNs) used as feature extractors (CNN-FE) have been widely applied in many fields with remarkable success. Evaluations show that a CNN-FE contains a large number of parameters, which greatly limits its use on memory-constrained devices such as smartphones. Taking the AlexNet feature extractor as the research object and targeting image classification, this paper reduces the number of CNN-FE model parameters while keeping classification performance almost unchanged. A detailed analysis of the parameter distribution across AlexNet's layers shows that the fully connected layers contain about 99% of the model parameters, and that when the number of image classes is small, the features extracted by AlexNet are redundant. The CNN-FE compression problem is therefore cast as a deep feature selection problem, and, jointly considering classification accuracy and compression rate, a new mutual-information-based feature selection method is proposed to compress the CNN-FE model. Image classification experiments on a public scene classification database and a self-built wireless capsule endoscopy (WCE) bubble image database show that the proposed method removes about 83% of the AlexNet model parameters while classification accuracy remains almost unchanged.
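The mutual-information-based selection step can be illustrated with a toy NumPy computation on binarized features and a synthetic label; the paper's actual criterion and data are of course richer, so treat this purely as a sketch of ranking features by I(F; Y):

```python
import numpy as np

def mutual_information(feature, labels):
    """Empirical mutual information I(F; Y) in nats between one
    discrete feature column and the class labels."""
    mi = 0.0
    for fv in np.unique(feature):
        for yv in np.unique(labels):
            p_xy = np.mean((feature == fv) & (labels == yv))
            p_x = np.mean(feature == fv)
            p_y = np.mean(labels == yv)
            if p_xy > 0:
                mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
features = rng.integers(0, 2, size=(100, 3))        # three binary features
features[:, 0] = labels ^ (rng.random(100) < 0.05)  # feature 0 tracks the label

scores = [mutual_information(features[:, j], labels) for j in range(3)]
selected = int(np.argmax(scores))
print(selected)  # feature 0 carries the most label information
```

Keeping only the top-scoring deep features shrinks the fully connected layers that follow, which is where the 99% of AlexNet's parameters live.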

8.
In order to realize fertility detection and classification of hatching eggs, a method based on deep learning is proposed in this paper. The 5-day hatching eggs are divided into fertile eggs, dead eggs, and infertile eggs. Firstly, we combine a transfer learning strategy with a convolutional neural network (CNN), using a network with two branches. In the first branch, the dataset is processed by a model pre-trained with the AlexNet network on the large-scale ImageNet dataset. In the second branch, the dataset is trained directly on a multi-layer network containing six convolutional layers and four pooling layers. The features of these two branches are combined as input to the following fully connected layer. Finally, a new model is trained on a small-scale dataset with this network, and the final accuracy of our method is 99.5%. The experimental results show that the proposed method successfully solves the multi-classification problem on a small-scale dataset of hatching eggs and obtains high accuracy. Our model also has better generalization ability and can be adapted to diverse eggs.

9.
Segmenting regions of lung infection from computed tomography (CT) images shows excellent potential for rapidly and accurately quantifying Coronavirus disease 2019 (COVID-19) infection and determining disease development and treatment approaches. However, a number of challenges remain, including the complexity of imaging features and their variability with disease progression, as well as the high similarity to other lung diseases, which makes feature extraction difficult. To answer these challenges, we propose a new sequence-encoder, lightweight-decoder network for medical image segmentation (SELDNet). (i) Sequence encoders and lightweight decoders are constructed based on Transformers and depthwise separable convolutions, respectively, to achieve feature extraction at different granularities. (ii) A semantic association module based on a cross-attention mechanism is designed between encoder and decoder to enhance the fusion of different levels of semantics. The experimental results showed that the network can effectively segment COVID-19 infected regions: the Dice score of the segmentation result was 79.1%, the sensitivity 76.3%, and the specificity 96.7%. Compared with several state-of-the-art image segmentation models, the proposed SELDNet achieves better results on the COVID-19 infected-region segmentation task.
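The semantic association module is described as cross-attention between encoder and decoder features. A minimal scaled dot-product version (projection weights omitted for brevity; shapes are illustrative, not SELDNet's):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: decoder positions (queries)
    attend over encoder positions (keys/values)."""
    d = queries.shape[-1]
    weights = softmax(queries @ keys.T / np.sqrt(d))  # (n_dec, n_enc)
    return weights @ values                           # (n_dec, d)

rng = np.random.default_rng(0)
decoder_feats = rng.normal(size=(4, 8))    # 4 decoder positions, dim 8
encoder_feats = rng.normal(size=(16, 8))   # 16 encoder positions, dim 8
out = cross_attention(decoder_feats, encoder_feats, encoder_feats)
print(out.shape)  # (4, 8)
```

Each decoder position receives a weighted mix of encoder semantics, which is what fuses the two feature hierarchies.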

10.
COVID-19 has spread rapidly worldwide. To diagnose it quickly and accurately and thereby break the chain of transmission, a deep learning classification network, DLDA-A-DenseNet, is proposed. First, a deep-layer dense aggregation structure is combined with DenseNet-201 to aggregate feature information from different stages, strengthening lesion recognition and localization; second, efficient multi-scale long-range attention is proposed to refine the aggregated features; in addition, to address class imbalance in CT image datasets, a balanced sampling training strategy is used to eliminate bias. Tested on the dataset provided by the China Consortium of Chest CT Image Investigation, the proposed method improves on the original DenseNet-201 by 2.24% in accuracy, 3.09% in recall, 2.09% in precision, 2.60% in F1 score, and 3.48% in kappa coefficient; on the COVID-CISet image dataset, it achieves an optimal accuracy of 99.50%. The results show that, compared with other methods, the proposed COVID-19 CT image classification method fully extracts the lesion features of CT slices and offers higher accuracy and good generalization.
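The balanced sampling strategy can be sketched as drawing each training batch with equal per-class counts regardless of how imbalanced the dataset is; the batch size and toy labels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def balanced_batch(labels, batch_size):
    """Draw a batch index set with equal counts per class, sampling
    with replacement from minority classes as needed."""
    classes = np.unique(labels)
    per_class = batch_size // len(classes)
    idx = np.concatenate([
        rng.choice(np.where(labels == c)[0], size=per_class, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return idx

labels = np.array([0] * 90 + [1] * 10)  # imbalanced: 90 vs 10
batch = balanced_batch(labels, batch_size=32)
counts = np.bincount(labels[batch])
print(counts)  # [16 16]
```

The model then sees both classes equally often per step, removing the bias toward the majority class.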

11.
Publishing and spreading rumors about COVID-19 on social media has had a serious impact on people's livelihoods, the economy, and society, so detecting COVID-19 rumors with machine learning and artificial intelligence techniques has significant research value and social relevance. Existing rumor detection research generally assumes that ample labeled data are available for the events being modeled and predicted, but for emergent events such as COVID-19, few trainable samples exist, so such models are limited. This paper focuses on the few-shot rumor detection problem, aiming to…

12.
Chinese calligraphy draws much attention for its beauty and elegance, and the various styles of calligraphic characters make it even more charming. But it is not always easy to recognize a calligraphic style correctly, especially for beginners. In this paper, an automatic character-style representation and recognition method is proposed. Three kinds of features are extracted to represent the calligraphic characters. Two of them are typical hand-designed features: the global feature GIST and the local feature SIFT (scale-invariant feature transform). The third is a deep feature extracted by a deep convolutional neural network (CNN). The state-of-the-art modified quadratic discriminant function (MQDF) classifier was employed for recognition. We evaluated our method on two calligraphic character datasets: the unconstrained real-world calligraphic character dataset (CCD) and SCL (the standard calligraphic character library). We also compared MQDF with two other classifiers, a support vector machine and a neural network. In our experiments, all three kinds of features were evaluated with all three classifiers, and the deep feature proved best for calligraphic style recognition. We also fine-tuned the deep CNN (AlexNet) of Krizhevsky et al. (Advances in Neural Information Processing Systems, pp. 1097–1105, 2012) to perform calligraphic style recognition; our method achieves about equal accuracy to the fine-tuned AlexNet but with much less training time. Furthermore, an algorithmic style-discrimination evaluation is developed to quantify style discriminability.

13.
Traditional convolutional-neural-network-based vehicle model recognition algorithms have low accuracy on similar vehicle models and can only use grayscale images during training, losing the color information of the image. To address this, a method for extracting image features based on a deep convolutional neural network (DCNN) is proposed, training the network on vehicle models with complex backgrounds in order to recognize them. Using the advanced deep learning framework Caffe, a DCNN model based on the AlexNet structure is proposed, trained on vehicle images, and compared with the traditional CNN algorithm. Experimental results show that the DCNN model reaches 96.9% accuracy, higher than the other algorithms.

14.
Aim: COVID-19 is a disease caused by a new strain of coronavirus; up to 18 October 2020, there had been 39.6 million confirmed cases worldwide, resulting in more than 1.1 million deaths. To improve diagnosis, we aimed to design and develop a novel advanced AI system for COVID-19 classification based on chest CT (CCT) images. Methods: Our dataset from local hospitals consisted of 284 COVID-19 images, 281 community-acquired pneumonia images, 293 secondary pulmonary tuberculosis images, and 306 healthy control images. We first used pretrained models (PTMs) to learn features and proposed a novel (L, 2) transfer feature learning algorithm to extract them, with the number of layers to be removed (NLR, symbolized as L) as a hyperparameter. Second, we proposed a selection algorithm for choosing the best two pretrained networks for fusion, each characterized by its PTM and NLR. Third, deep CCT fusion by discriminant correlation analysis was proposed to fuse the two feature sets from the two models. The micro-averaged (MA) F1 score was used as the measuring indicator, and the final model was named CCSHNet. Results: On the test set, CCSHNet achieved sensitivities for the four classes of 95.61%, 96.25%, 98.30%, and 97.86%, respectively; precision values of 97.32%, 96.42%, 96.99%, and 97.38%; and F1 scores of 96.46%, 96.33%, 97.64%, and 97.62%. The MA F1 score was 97.04%. In addition, CCSHNet outperformed 12 state-of-the-art COVID-19 detection methods. Conclusions: CCSHNet is effective in detecting COVID-19 and other lung infectious diseases from first-line clinical imaging and can therefore assist radiologists in making accurate diagnoses based on CCTs.
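The (L, 2) truncation step, removing the last L layers of a pretrained model and keeping the remainder as a feature extractor, reduces in sketch form to list truncation; the layer names below are illustrative only, not those of any specific PTM:

```python
# The pretrained model is viewed as an ordered list of layers; (L, 2)
# transfer removes the last L layers (L = NLR) and keeps the rest as the
# transferred feature extractor. Layer names are illustrative only.
pretrained = ["conv1", "conv2", "conv3", "conv4", "conv5",
              "pool5", "fc6", "fc7", "fc8_softmax"]

def truncate(layers, num_layers_removed):
    """Drop the last `num_layers_removed` layers; features are then taken
    from the new final layer."""
    assert 0 < num_layers_removed < len(layers)
    return layers[:-num_layers_removed]

extractor = truncate(pretrained, num_layers_removed=2)
print(extractor[-1])  # fc6
```

Sweeping L (the NLR hyperparameter) and scoring each truncation on validation data is how the best two PTM/NLR pairs are chosen for fusion.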

15.
To address the weak ability of machine learning models to recognize music genre features, a music genre recognition model based on a deep convolutional neural network (DCNN-MGR) is proposed. The model first extracts audio information via the fast Fourier transform, generating a spectrogram that can be fed to the DCNN and cutting it into spectrogram slices. AlexNet is then enhanced by incorporating the Leaky ReLU activation, the hyperbolic tangent (Tanh) function, and a Softplus classifier. The generated spectrogram slices are fed into the enhanced AlexNet for multi-batch training and validation to extract and learn music features, yielding a network model that can effectively distinguish them. Finally, the trained model is used for music genre recognition tests. Experimental results show that the enhanced AlexNet clearly outperforms AlexNet and other common DCNNs in music feature recognition accuracy and network convergence, and that the DCNN-MGR model improves music genre recognition accuracy by 4%–20% over other machine learning models.
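The front end described (FFT-based spectrogram extraction followed by slicing) might look like this minimal NumPy sketch; the frame length, hop, and slice width are assumptions, and the audio is random noise standing in for real music:

```python
import numpy as np

def spectrogram_slices(signal, frame_len, hop, slice_frames):
    """Short-time FFT magnitude spectrogram, cut into fixed-width
    time slices (the inputs fed to the DCNN)."""
    frames = np.array([signal[i:i + frame_len]
                       for i in range(0, len(signal) - frame_len + 1, hop)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    n_slices = spec.shape[0] // slice_frames
    return [spec[k * slice_frames:(k + 1) * slice_frames]
            for k in range(n_slices)]

rng = np.random.default_rng(0)
audio = rng.normal(size=16000)   # 1 s of noise standing in for real music
slices = spectrogram_slices(audio, frame_len=512, hop=256, slice_frames=20)
print(len(slices), slices[0].shape)  # 3 (20, 257)
```

Each fixed-size slice becomes one training image for the network, which is how a variable-length track turns into many CNN inputs.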

16.
Objective: Diabetic retinopathy (DR) is currently a serious blinding eye disease, so automatic classification of diabetic retinopathy images has important clinical value. Manual classification of retinal images suffers from difficulty in extracting discriminative features, poor classification performance, and heavy time cost, and it is hard to obtain objective, consistent diagnoses. We therefore propose an automatic retinal image classification system based on a convolutional neural network and a classifier. Method: First, given the characteristics of existing retinal images, preprocessing operations such as denoising, data augmentation, and normalization are applied. Second, on the basis of AlexNet, a batch normalization layer is inserted before every convolutional and fully connected layer, producing a deeper network, BNnet. BNnet serves as the feature extraction network for retinal images; it is trained with a transfer learning strategy, pre-trained on the ILSVRC2012 dataset and then fine-tuned on retinal images to extract deep features for retinal classification. Finally, the extracted features are fed into a deep classifier composed of fully connected layers that sorts retinal images into five classes: normal, mildly diseased, moderately diseased, and so on. Result: Experimental results show the method reaches a classification accuracy of 0.93, better than traditional direct training, with good robustness and generalization. Conclusion: The proposed retinal image classification framework effectively avoids the limitations of manual feature extraction and image classification, and also alleviates the overfitting caused by insufficient sample data.
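The batch normalization operation that BNnet inserts before each convolutional and fully connected layer normalizes each feature over the batch; a minimal NumPy version, with the learnable scale and shift fixed to the identity:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch axis to zero mean and unit
    variance, then apply the learnable scale/shift (identity here)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
activations = rng.normal(loc=5.0, scale=3.0, size=(128, 10))  # batch of 128
normed = batch_norm(activations)
print(normed.mean(), normed.std())  # ~0, ~1
```

Stabilizing the activation statistics this way is what lets the deeper BNnet train reliably on the transferred weights.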

17.
Objective: Lung computed tomography (CT) images of COVID-19 patients show distinct lesion features, and segmenting the lesions from lung CT quickly and accurately is important for the rapid diagnosis and monitoring of COVID-19 patients. COVID-19 lesion regions are complex and variable; existing methods have limited segmentation accuracy and pay insufficient attention to false negatives, so their results often have high specificity but low sensitivity. Method: This paper proposes a deep-learning-based multiscale encode-decode network (MED-Net) that uses the resource-efficient and fast HarDNet68 (harmonic densely connected network) as the backbone, composed mainly of five harmonic dense blocks (HDBs). First, five atrous spatial pyramid pooling (ASPP) modules extract multiscale features from HarDNet68's first convolutional layer and its 1st, 3rd, 4th, and 5th HDBs. Then, building on the parallel partial decoder (PPD), a multiscale parallel partial decoder (MPPD) is designed: by decoding three branches with different receptive fields, it addresses the information loss in the encoder and the difficulty of segmenting small lesions. To improve CT segmentation accuracy and ease network training, a deep supervision mechanism is added; together with the multiscale decoder, it increases attention to false negatives and thus improves the model's sensitivity. Result: MED-Net was tested on the COVID-19 CT segmentation dataset. The results show that it copes effectively with the small sample size and the large variation in texture, size, and position of the segmentation targets. On a dataset with only 50 training and 50 test images, the segmentation achieves a Dice coefficient of 73.8%, sensitivity of 77.7%, and specificity of 94.3%, improvements of 8.21%, 12.28%, and 7.76% over Inf-Net (lung infection segmentation deep network); the Dice coefficient and sensitivity reach the state of the art for this dataset split. Conclusion: The proposed network improves the segmentation accuracy of COVID-19 pneumonia CT images, effectively addresses the small dataset size and the difficulty of segmenting small lesions, and can segment COVID-19 CT images fully automatically.
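At the core of the ASPP modules used above are atrous (dilated) convolutions, which enlarge the receptive field without adding parameters; ASPP runs several dilation rates in parallel. A 1-D NumPy sketch with an illustrative kernel:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1-D convolution whose taps are `dilation` samples apart,
    enlarging the receptive field without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field covered by the kernel
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10, dtype=float)
out = dilated_conv1d(x, kernel=[1.0, 1.0, 1.0], dilation=2)
print(out)  # [ 6.  9. 12. 15. 18. 21.]
```

With dilation 2, a 3-tap kernel spans 5 samples; concatenating outputs at several dilation rates is what gives ASPP its multiscale context.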

18.

The coronavirus COVID-19 pandemic is the major public health crisis of our time, the worst we have faced since the Second World War. The pandemic is spreading around the globe like a wave, and according to the World Health Organization's recent reports, the numbers of confirmed cases and deaths are rising rapidly. The pandemic has created severe social, economic, and political crises that will leave long-lasting scars. One countermeasure for controlling the outbreak is a specific, accurate, reliable, and rapid detection technique to identify infected patients. The availability and affordability of RT-PCR kits remains a major bottleneck in many countries trying to handle the outbreak effectively. Recent findings indicate that chest radiography anomalies can characterize patients with COVID-19 infection. In this study, Corona-Nidaan, a lightweight deep convolutional neural network (DCNN), is proposed to detect COVID-19, pneumonia, and normal cases from chest X-ray image analysis without any human intervention. We introduce a simple minority-class oversampling method for dealing with the imbalanced-dataset problem, and we also investigate the impact of transfer learning with pre-trained CNNs on chest X-ray based COVID-19 detection. Experimental analysis shows that the Corona-Nidaan model outperforms prior works and other pre-trained CNN based models, achieving 95% accuracy for three-class classification with 94% precision and recall for COVID-19 cases. While studying the performance of various pre-trained models, we also found that VGG19 outperforms the other pre-trained CNN models, achieving 93% accuracy with 87% recall and 93% precision for COVID-19 infection detection. The model is evaluated by screening a chest X-ray dataset of COVID-19 infected Indian patients with good accuracy.
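The minority-class oversampling idea, duplicating minority samples until class counts match, can be sketched as follows; the abstract describes the method only as "simple", so this is a generic random-oversampling illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def oversample_minority(X, y):
    """Random oversampling: duplicate minority-class samples (with
    replacement) until every class matches the majority-class count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        c_idx = np.where(y == c)[0]
        extra = rng.choice(c_idx, size=target - n, replace=True)
        idx.append(np.concatenate([c_idx, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

X = np.arange(20).reshape(10, 2)   # 10 samples, 2 features
y = np.array([0] * 7 + [1] * 3)    # imbalanced: 7 vs 3
Xb, yb = oversample_minority(X, y)
print(np.bincount(yb))  # [7 7]
```

Oversampling only the training split (never the test set) keeps the evaluation honest while removing the majority-class bias during learning.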


19.
In this paper, we propose a hybrid deep neural network model for recognizing human actions in videos, designed by fusing homogeneous convolutional neural network (CNN) classifiers. The ensemble of classifiers is built by diversifying the input features and varying the initialization of the weights of the neural networks. Each CNN classifier is trained to output a value of one for the predicted class and zero for all other classes, so the outputs of the trained classifiers can be treated as confidence values: the predicted class has a confidence of approximately 1 and the remaining classes approximately 0. The fusion function takes the maximum value of the outputs across all classifiers to pick the correct class label. The effectiveness of the proposed approach is demonstrated on the UCF50 dataset, achieving a high recognition accuracy of 99.68%.
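The fusion rule described (element-wise maximum of per-class confidences across classifiers, then argmax) is easy to state directly; the numbers below are illustrative:

```python
import numpy as np

def fuse_max(confidences):
    """Ensemble fusion by element-wise maximum of the per-class
    confidence vectors across classifiers, then argmax."""
    fused = np.max(confidences, axis=0)
    return fused, int(np.argmax(fused))

# Three classifiers, four action classes; each row is one classifier's output
outputs = np.array([
    [0.1, 0.9, 0.0, 0.0],
    [0.2, 0.7, 0.1, 0.0],
    [0.0, 0.8, 0.1, 0.1],
])
fused, label = fuse_max(outputs)
print(fused, label)  # [0.2 0.9 0.1 0.1] 1
```

Because each member is trained to push its predicted class toward 1, the max operator lets any confident member carry the final decision.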

20.
Convolutional networks are currently the most popular computer vision method for a wide variety of applications in multimedia research. Most recent methods focus on natural images and use a training database such as ImageNet or Open Images to learn object characteristics. In practical applications, however, training samples are difficult to acquire. In this study, we develop a powerful approach that can accurately learn marine organisms. The proposed filtering deep convolutional network (FDCNet) classifies deep-sea objects better than state-of-the-art classification methods such as AlexNet, GoogLeNet, ResNet50, and ResNet101: its classification accuracy is 1.8%, 2.9%, 2.0%, and 1.0% better, respectively. In addition, we have built the first marine organism database, Kyutech10K, with seven categories (shrimp, squid, crab, shark, sea urchin, manganese, and sand).


