A total of 1,078 results were retrieved (search time: 15 ms).
1.
Point counting represents a convenient and efficient technique for estimating the area of transects through multiple sclerosis (MS) lesions on magnetic resonance (MR) images obtained for sections through the brain. When sectioning has been performed according to the Cavalieri method, unbiased estimates of the total volume of MR-visible MS plaques can be obtained with a precision of 3–5% in 5–10 min.
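As a rough illustration of the Cavalieri point-counting estimator described above (a sketch with hypothetical numbers, not the authors' implementation): the lesion volume is estimated by summing the grid points counted on each systematically sampled section, then multiplying by the area represented by one grid point and by the section spacing.

```python
import numpy as np

def cavalieri_volume(point_counts, grid_spacing_mm, section_spacing_mm):
    """Cavalieri volume estimate from point counts on serial sections.

    point_counts        -- points falling inside the lesion on each section
    grid_spacing_mm     -- spacing of the counting grid; each point represents
                           grid_spacing_mm**2 of area
    section_spacing_mm  -- distance between consecutive MR sections
    """
    area_per_point = grid_spacing_mm ** 2             # mm^2 per counted point
    section_areas = np.asarray(point_counts) * area_per_point
    return section_areas.sum() * section_spacing_mm   # mm^3

# Hypothetical example: 6 sections, 5 mm apart, counted with a 2 mm grid.
print(cavalieri_volume([12, 18, 25, 22, 15, 7],
                       grid_spacing_mm=2.0, section_spacing_mm=5.0))
```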
2.
An MRI image segmentation method based on anisotropic filtering and spatial FCM
To address the difficulty of segmenting soft tissue in MRI images with complex targets and blurred boundaries when multiple regions of interest must be delineated, an MRI segmentation method based on anisotropic filtering and spatial fuzzy C-means clustering (SFCM) is proposed. A novel anisotropic filter is used to preprocess the image, smoothing noise without weakening image detail. A spatial function built from neighborhood information is used to modify the objective function of conventional FCM; exploiting this spatial information allows each target to be classified accurately and isolated regions to be assigned to the correct class, yielding complete segmented regions. The number of classes and the initial cluster centers are initialized from a fitted histogram curve, which speeds convergence to the optimal solution and reduces running time. Experiments confirm that combining anisotropic filtering with spatial FCM markedly improves the segmentation of MRI images with overlapping gray levels, discontinuous targets, and blurred target boundaries.
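A minimal sketch of the spatial-FCM idea described above (my own NumPy illustration, not the authors' code; a Gaussian filter stands in for the anisotropic-diffusion preprocessing): the standard FCM membership of each pixel is re-weighted by the average membership of its neighbors, so isolated mislabeled pixels are pulled toward the class of their neighborhood.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(img, n_classes=3, m=2.0, p=1.0, q=1.0, n_iter=30):
    """Spatial fuzzy C-means on a 2-D image (illustrative sketch)."""
    x = img.astype(float).ravel()
    # crude histogram-based initialization of the cluster centers
    centers = np.percentile(x, np.linspace(10, 90, n_classes))
    for _ in range(n_iter):
        # standard FCM memberships from distances to the centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
        # spatial function: mean membership in a 3x3 neighborhood
        h = np.stack([uniform_filter(u[:, k].reshape(img.shape), size=3).ravel()
                      for k in range(n_classes)], axis=1)
        # combine memberships with the spatial function and renormalize
        u = (u ** p) * (h ** q)
        u /= u.sum(axis=1, keepdims=True)
        # update the cluster centers
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return u.argmax(axis=1).reshape(img.shape), centers

# usage (hypothetical): denoise first, e.g. scipy.ndimage.gaussian_filter as a
# stand-in for anisotropic diffusion, then:
# labels, centers = spatial_fcm(denoised_slice, n_classes=4)
```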
3.
Modern MRI measurements deliver volumetric and time-varying blood-flow data of unprecedented quality. Visual analysis of these data potentially leads to a better diagnosis and risk assessment of various cardiovascular diseases. Recent advances have improved the speed and quality of the imaging data considerably. Nevertheless, the data remain compromised by noise and a lack of spatiotemporal resolution. Besides imaging data, numerical simulations are also employed. These are based on mathematical models of specific features of physical reality, but the models require realistic parameters and boundary conditions derived from measurements. We propose to use data assimilation to bring measured data and physically based simulation together and to harness their mutual benefits. The accuracy and noise robustness of the coupled approach are validated using an analytic flow field. Furthermore, we present a comparative visualization that conveys the differences between conventional interpolation and our coupled approach.
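As a hedged illustration of the data-assimilation idea (a generic Kalman-style update, not the specific scheme of this paper): a simulated flow value and a noisy measurement are blended according to their respective uncertainties, so whichever source is more trusted dominates the result.

```python
def assimilate(sim_value, sim_var, meas_value, meas_var):
    """One scalar Kalman-style update blending simulation and measurement.

    The gain weights the measurement by how much it is trusted relative to
    the simulation; the posterior variance shrinks accordingly.
    """
    gain = sim_var / (sim_var + meas_var)
    value = sim_value + gain * (meas_value - sim_value)
    var = (1.0 - gain) * sim_var
    return value, var

# Hypothetical example: simulated velocity 0.80 m/s (variance 0.04),
# measured 4D-flow MRI velocity 0.95 m/s (variance 0.02).
print(assimilate(0.80, 0.04, 0.95, 0.02))   # result lies closer to the measurement
```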
4.
Objective: Existing neural network models must model the left-ventricular endocardium and epicardium separately. This paper proposes Seg-CapNet, a capsule-based segmentation model for cardiac magnetic resonance imaging (MRI) that extracts the endocardium and epicardium simultaneously while preserving their spatial relationship. Method: The capsule network first converts each segmentation target into a vector encoding its relative position, color, size, and other attributes; fully connected layers then recombine the spatial relationships among these vectors; finally, deconvolution upsamples the feature maps to restore the segmentation map to the input image size. During upsampling, each feature map is concatenated with the corresponding convolutional feature map, which helps recover image detail, aids back-propagation, and speeds up training. The output vectors of Seg-CapNet carry not only low-level image features such as gray level and texture but also semantic features such as target position and size, which effectively improves segmentation accuracy. To further improve segmentation quality, a new loss function is proposed that constrains the segmentation result to preserve the relative positions of the multiple target regions. Results: Seg-CapNet was trained and validated on the public datasets of three cardiac MRI segmentation challenges, ACDC (automated cardiac diagnosis challenge) 2017, MICCAI (medical image computing and computer-assisted intervention) 2013, and MICCAI 2009, and compared with the segmentation networks U-Net and SegNet. Relative to U-Net and SegNet, Seg-CapNet improves the mean Dice coefficient for simultaneously segmented overlapping target regions by 3.5% and reduces the mean Hausdorff distance (HD) by 18%, with a parameter count of only 54% of U-Net's and 40% of SegNet's, so training time and complexity are reduced while accuracy improves. Conclusion: The proposed Seg-CapNet segments overlapping targets simultaneously with fewer parameters and faster training, while maintaining good accuracy for the left-ventricular endocardium and epicardium.
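For reference, the Dice coefficient quoted above is the standard overlap measure between a predicted mask and the ground-truth mask; a minimal NumPy sketch (my own illustration, not the paper's evaluation code) is:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Hypothetical example: two 3x3 masks of 3 pixels each, overlapping on 2.
a = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
b = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 1]])
print(dice_coefficient(a, b))   # 0.666...
```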
5.
6.
Segmentation of the left ventricle (LV) is a hot topic in cardiac magnetic resonance (MR) image analysis. In this paper, we present an automatic LV myocardial boundary segmentation method using the parametric active contour (snake) model. By convolving the gradient map of an image, a fast external force named gradient vector convolution (GVC) is obtained for the snake model. A circle-based energy is incorporated into the GVC snake model to extract the endocardium. With this prior constraint, the snake contour can overcome the unexpected local minima stemming from artifacts, papillary muscles, and similar structures. After the endocardium is detected, the original edge map around and within the endocardium is set to zero. This modified edge map is used to generate a new GVC force field, which automatically pushes the snake contour to the epicardium, using the endocardium result as initialization. Meanwhile, a novel shape-similarity-based energy is proposed to prevent the snake contour from being trapped by faulty edges and to preserve weak boundaries. Both qualitative and quantitative evaluations on our dataset and a publicly available database (e.g., MICCAI 2009) demonstrate the good performance of our algorithm.
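A rough sketch of the gradient-vector-convolution idea (illustrative only; a Gaussian kernel stands in here for the convolution kernel actually used in the paper): each component of the edge-map gradient is convolved with a smoothing kernel, which diffuses the edge forces into homogeneous regions so the snake is attracted even when initialized far from the boundary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gvc_force(image, sigma=5.0):
    """External snake force by convolving the edge-map gradient with a kernel.

    A Gaussian kernel of width sigma stands in for the paper's kernel,
    which may differ; this is a sketch, not the authors' implementation.
    """
    edge_map = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    fx = sobel(edge_map, axis=1)   # gradient of the edge map, x component
    fy = sobel(edge_map, axis=0)   # gradient of the edge map, y component
    # convolution spreads the force field away from the edges
    return gaussian_filter(fx, sigma), gaussian_filter(fy, sigma)

# usage (hypothetical): u, v = gvc_force(mri_slice); the snake points are then
# moved iteratively along (u, v) together with the internal smoothness forces.
```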
7.
Typically, brain MR images present significant intensity variation across patients and scanners. Consequently, training a classifier on one set of images and using it subsequently for brain segmentation may yield poor results. Adaptive iterative methods usually need to be employed to account for the variations of the particular scan; these methods are complicated, difficult to implement, and often involve significant computational costs. In this paper, a simple, non-iterative method is proposed for brain MR image segmentation. Two preprocessing techniques applied prior to segmentation, namely intensity-inhomogeneity correction and, more importantly, MR image intensity standardization, play a vital role in giving the MR image intensities a tissue-specific numeric meaning, which leads to a very simple brain tissue segmentation strategy. Vectorial scale-based fuzzy connectedness and certain morphological operations are utilized first to generate the brain intracranial mask. The fuzzy membership value of each voxel within the intracranial mask for each brain tissue is then estimated. Finally, a maximum likelihood criterion with spatial constraints is utilized to classify all voxels in the intracranial mask into the different brain tissue groups. A set of inhomogeneity-corrected and intensity-standardized images is utilized as the training data set. We introduce two methods to estimate fuzzy membership values. In the first method, called SMG (simple membership based on a Gaussian model), the fuzzy membership value is estimated by fitting a multivariate Gaussian model to the intensity distribution of each brain tissue, whose mean intensity vector and covariance matrix are estimated and fixed from the training data sets. The second method, called SMH (simple membership based on a histogram), estimates the fuzzy membership value directly from the intensity distribution of each brain tissue obtained from the training data sets. We present several studies evaluating the performance of these two methods on 10 clinical MR images of normal subjects and 10 clinical MR images of multiple sclerosis (MS) patients. A quantitative comparison indicates that both methods have better overall accuracy than the k-nearest neighbors (kNN) method and much better efficiency than the finite mixture (FM) model-based expectation-maximization (EM) method. Accuracy is similar for our methods and the EM method on the normal-subject data sets, but much better for our methods on the patient data sets.
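A minimal sketch of the SMG-style membership estimate (my own illustration, with hypothetical tissue classes): the fuzzy membership of a voxel in a tissue class is taken from a multivariate Gaussian fitted to that tissue's training intensities, and the voxel is then assigned to the class with the highest membership.

```python
import numpy as np

def gaussian_membership(voxels, means, covs):
    """Per-tissue multivariate Gaussian membership for each voxel.

    voxels -- (n, d) intensity vectors inside the intracranial mask
    means  -- list of per-tissue mean vectors from the training data
    covs   -- list of per-tissue covariance matrices from the training data
    """
    memberships = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt(((2 * np.pi) ** len(mu)) * np.linalg.det(cov))
        diff = voxels - mu
        # Mahalanobis distance of every voxel to this tissue's mean
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
        memberships.append(norm * np.exp(-0.5 * mahal))
    m = np.stack(memberships, axis=1)
    return m / m.sum(axis=1, keepdims=True)   # normalize across tissues

# Hypothetical usage: 2-channel (T1, T2) intensities, three tissues (GM, WM, CSF):
# labels = gaussian_membership(voxels, means, covs).argmax(axis=1)
```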
8.
Taking the characteristics of the human visual system and of CT/MRI medical images into account, an image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The source images are first decomposed with the NSCT, providing multiscale and directional analysis. The coefficients of each decomposition level are then fused according to their characteristics: for the low-pass subband, near-optimal fusion weights are obtained iteratively with an immune clonal selection strategy that optimizes the chosen evaluation criterion; for the high-pass subbands, the coefficient with the larger absolute value is selected. Experiments show that, compared with fusion based on the wavelet transform, the nonsubsampled wavelet transform, and the contourlet transform, the proposed algorithm yields fused images with sharper edges and better overall contrast.
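A hedged sketch of the two fusion rules mentioned above (illustrative NumPy, independent of any particular NSCT implementation): the high-pass subbands take the coefficient with the larger absolute value, while the low-pass subband is a weighted average whose weight would, in the paper, be searched by immune clonal selection rather than fixed.

```python
import numpy as np

def fuse_highpass(c1, c2):
    """Absolute-maximum rule: keep the detail coefficient with larger |value|."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_lowpass(a1, a2, w=0.5):
    """Weighted average of the approximation subbands; in the paper the
    weight w is optimized by immune clonal selection, not fixed."""
    return w * a1 + (1.0 - w) * a2

# Hypothetical usage on subbands produced by some NSCT decomposition:
# fused_low  = fuse_lowpass(ct_low, mri_low, w=0.6)
# fused_high = [fuse_highpass(c, m) for c, m in zip(ct_high, mri_high)]
```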
9.
A genetic-algorithm-based bias field removal model for brain MR images
Magnetic resonance images (MRI) often contain a bias field that degrades subsequent segmentation. To segment such images, the bias field is fitted with Legendre polynomial basis functions so that its influence on segmentation can be removed; the estimated bias field is optimal when the entropy of the restored image is minimized. Estimating the bias field requires solving for the basis-function coefficients, and the traditional gradient descent method easily falls into local optima. A genetic algorithm is therefore introduced into the parameter estimation; however, the conventional genetic algorithm has high time complexity and can also become trapped in local optima, so it is improved to reach the global optimum more reliably at lower time cost. Experiments show that the improved algorithm recovers an accurate bias field and yields accurate segmentation results.
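A minimal sketch of the model behind this approach (my own NumPy illustration, not the paper's code): the bias field is a low-order Legendre polynomial surface, and a candidate coefficient vector is scored by the Shannon entropy of the corrected image; a genetic algorithm, or any other global optimizer, would then search for the coefficients that minimize this entropy.

```python
import numpy as np
from numpy.polynomial import legendre

def bias_field(coeffs, shape, order=2):
    """Bias field modeled as a 2-D Legendre polynomial surface over the image grid."""
    y = np.linspace(-1, 1, shape[0])
    x = np.linspace(-1, 1, shape[1])
    c = np.asarray(coeffs).reshape(order + 1, order + 1)
    return legendre.leggrid2d(y, x, c)            # smooth (H, W) surface

def corrected_entropy(coeffs, image, order=2, bins=64):
    """Shannon entropy of the image after dividing out the bias field;
    lower entropy indicates a better (flatter) correction."""
    field = np.exp(bias_field(coeffs, image.shape, order))   # keep the field positive
    restored = image / field
    hist, _ = np.histogram(restored, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

# A genetic algorithm (or e.g. scipy.optimize.differential_evolution) would
# minimize corrected_entropy over the (order + 1)**2 Legendre coefficients.
```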
10.
Fast SE imaging provides considerable measurement-time reduction, high signal-to-noise ratios, and contrast behavior similar to conventional SE sequences. Besides TR and TEeff, the echo train length (ETL), interecho time, and k-space trajectory determine image contrast and image quality in fast SE sequences. True proton-density contrast (CSF hypointense) and not too strong T2 contrast are essential requirements in routine brain MRI. A Turbo SE sequence with a very short echo train length (ETL = 3), short TEeff, short interecho time (17 ms), and TR = 2000 ms was selected for proton-density contrast; a Turbo SE sequence with ETL = 7, TEeff = 90 ms, an interecho time of 22 ms, and TR = 3250 ms was selected for T2-weighted images. Using both single-echo Turbo SE sequences yielded a 50% measurement-time reduction compared with the conventional SE technique. Conventional SE and optimized Turbo SE sequences were compared in 150 patients and showed very similar signal and contrast behavior. Furthermore, reduced flow artifacts in proton-density-weighted and especially in T2-weighted Turbo SE images, and better contrast of high-intensity lesions in proton-density-weighted Turbo SE images, were found. Slightly reduced edge sharpness, mainly in T2-weighted Turbo SE images, did not reduce diagnostic reliability. Differences between conventional and Turbo SE images concerning image contrast and quality are explained with regard to special features of the fast SE technique.