Objective: Neural-network-based image super-resolution mainly reconstructs an image by using a single network's nonlinear mapping to learn the relationship between low- and high-resolution feature information. In this process, the image features extracted by a shallow network are easily lost, while deepening the network increases both training time and training difficulty. To address the long training time and the blurred detail in the reconstructed results, a multi-channel recursive residual learning mechanism is proposed to improve training efficiency and reconstruction quality. Method: A multi-channel recursive residual network model is designed. First, residual blocks are reused recursively to form a 32-layer recursive network, which reduces the number of parameters while increasing network depth, thereby accelerating convergence and capturing richer feature information. Then, feature information extracted with different convolution kernels is fed into the recursive residual network of the corresponding channel, and the channel outputs are jointly fed into a shared reconstruction network, improving the ability to reconstruct fine detail. Finally, a cross-learning mechanism is introduced that pairwise cross-connects channels 1, 2, and 3, further accelerating the fusion of feature information across channels, promoting parameter propagation, and improving reconstruction performance. Results: The model is trained on the DIV2K (DIVerse 2K) dataset, tested on the Set5, Set14, BSD100, and Urban100 datasets, and compared with Bicubic, SRCNN (super-resolution convolutional neural network), VDSR (super-resolution using very deep convolutional network), LapSRN (deep Laplacian pyramid networks for fast and accurate super-resolution), and EDSR_baseline (enhanced deep residual networks for single image super-resolution, baseline). The results show that the proposed model captures detail features better and produces clearer, richer detail. On objective metrics the algorithm shows a clear improvement; in particular, on Urban100, the dataset richest in fine detail, the average PSNR (peak signal-to-noise ratio) gains over these five methods are 3.87 dB, 1.93 dB, 1.00 dB, 1.12 dB, and 0.48 dB, respectively, and training efficiency improves by 30% over a non-recursive residual network. Conclusion: The proposed model achieves better visual quality and objective evaluation scores, trains faster than a non-recursive residual network, and is applicable to super-resolution reconstruction of images of complex scenes.
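The core idea of the recursive residual mechanism above — reusing one residual block's weights at every recursion step so that effective depth grows while the parameter count stays fixed — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation; the feature size, recursion depth, and the dense matrix standing in for a convolution are all illustrative assumptions.

```python
import numpy as np

def residual_block(x, w):
    """One residual unit: a linear map (stand-in for a convolution) + ReLU + skip connection."""
    return x + np.maximum(w @ x, 0.0)

def recursive_residual(x, w, depth=8):
    """Apply the SAME block `depth` times. Depth increases, but the only
    learnable parameters are the single weight matrix `w` — the idea behind
    the 32-layer recursive network described in the abstract."""
    for _ in range(depth):
        x = residual_block(x, w)
    return x

rng = np.random.default_rng(0)
w = 0.01 * rng.standard_normal((16, 16))  # toy weights, not trained
x = rng.standard_normal(16)
y = recursive_residual(x, w, depth=8)
print(y.shape)  # output keeps the input's feature dimension: (16,)
```

In the multi-channel setting, one such recursive stack would run per kernel size, with the channel outputs concatenated before a shared reconstruction stage.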
This article analyzes the bias dependence of the gate‐drain capacitance (Cgd) and gate‐source capacitance (Cgs) in AlGaN/GaN high electron mobility transistors under a high drain‐to‐source voltage (Vds) from the perspective of channel shape variation, and further simplifies Cgd and Cgs to depend only on the gate‐to‐source voltage (Vgs) at high Vds. This simplification significantly reduces the number of parameters to be fitted in Cgd and Cgs and therefore lowers the difficulty of model development. The Angelov capacitance models are chosen to verify the effectiveness of the simplification. Good agreement between simulated and measured small‐signal S‐parameters, large‐signal power sweeps, and power contours comprehensively demonstrates the accuracy of this simplification method.
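The intuition behind dropping the Vds dependence can be sketched with an Angelov-style capacitance expression, in which Cgs is a product of a tanh term in Vgs and a tanh term in Vds: at high Vds the second tanh saturates toward +1, leaving a Vgs-only function. The coefficient values below are made-up placeholders for illustration, not fitted parameters from the article.

```python
import numpy as np

def cgs_angelov(vgs, vds, cgspi=1.0, cgs0=2.0,
                p10=-1.0, p11=2.0, p20=1.0, p21=0.5):
    """Angelov-style Cgs: separable tanh factors in Vgs and Vds.
    All coefficients here are illustrative, not measured values."""
    return cgspi + cgs0 * (1 + np.tanh(p10 + p11 * vgs)) \
                 * (1 + np.tanh(p20 + p21 * vds))

def cgs_simplified(vgs, cgspi=1.0, cgs0=2.0, p10=-1.0, p11=2.0):
    """At high Vds, tanh(p20 + p21*Vds) -> 1, so the Vds factor
    becomes the constant 2 and Cgs depends on Vgs only."""
    return cgspi + 2.0 * cgs0 * (1 + np.tanh(p10 + p11 * vgs))

vgs = np.linspace(-3.0, 1.0, 5)
full = cgs_angelov(vgs, vds=30.0)   # high drain bias
simp = cgs_simplified(vgs)          # Vgs-only model
print(np.max(np.abs(full - simp)))  # negligible once the Vds tanh saturates
```

Fitting only the Vgs-dependent coefficients is what cuts the parameter count and eases model extraction in the regime the article targets.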
Improving the spatial resolution of hyperspectral sensors is expected to benefit computer vision tasks. However, due to the physical limitations of imaging sensors, hyperspectral images are often of low spatial resolution. In this paper, we propose a new hyperspectral image super-resolution method that fuses a low-resolution (LR) hyperspectral image with a high-resolution (HR) multispectral image of the same scene. The reconstruction of the HR hyperspectral image is formulated as a joint estimation of a hyperspectral dictionary and sparse codes, based on the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary is learned from the LR hyperspectral image, and the sparse codes with respect to the learned dictionary are estimated from the LR hyperspectral image and the corresponding HR multispectral image. To improve accuracy, both the spectral dictionary learning and the sparse coefficient estimation exploit the spatial correlation of the HR hyperspectral image. Experiments show that the proposed method outperforms several state-of-the-art hyperspectral image super-resolution methods in objective quality metrics and visual performance.
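The fusion step described above — recovering sparse codes for a learned spectral dictionary from the HR multispectral observation, then reconstructing the HR hyperspectral image as dictionary times codes — can be sketched with a generic ISTA solver. This is a toy NumPy sketch under stated assumptions: the dictionary `D` is random rather than learned from the LR hyperspectral image, the spectral response matrix `R`, the band counts, and the ISTA solver itself are illustrative stand-ins, and the spatial-correlation priors of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, atoms, pixels = 31, 10, 100

# Toy stand-ins: a "learned" spectral dictionary D (per the paper, estimated
# from the LR hyperspectral image; here it is random) and a spectral response
# matrix R mapping 31 hyperspectral bands to 3 multispectral bands.
D = np.abs(rng.standard_normal((bands, atoms)))
R = np.abs(rng.standard_normal((3, bands)))
A_true = np.maximum(rng.standard_normal((atoms, pixels)), 0.0)  # sparse-ish codes
Y = R @ D @ A_true  # simulated HR multispectral observation (pixels flattened)

def ista(Y, Phi, lam=0.01, iters=200):
    """Minimal ISTA for min_A 0.5*||Y - Phi @ A||_F^2 + lam*||A||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant
    A = np.zeros((Phi.shape[1], Y.shape[1]))
    for _ in range(iters):
        A = A - step * Phi.T @ (Phi @ A - Y)                       # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)   # soft threshold
    return A

A_hat = ista(Y, R @ D)   # sparse codes estimated from the multispectral image
X_hr = D @ A_hat         # reconstructed HR hyperspectral image (bands x pixels)
```

The design point is that `D` carries the fine spectral structure (from the LR hyperspectral input) while `A_hat` carries the fine spatial structure (from the HR multispectral input), so their product inherits both.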