Similar Documents
Found 15 similar documents (search took 93 ms)
1.
To address the limited number of attacked rounds and the high complexity of existing attacks on the Blow-CAST-Fish algorithm, a key-recovery attack on Blow-CAST-Fish based on a differential table was proposed. First, the collision properties of the S-boxes were analyzed, and 6-round and 12-round differential characteristics were constructed based on collisions in two S-boxes and in a single S-box, respectively. Then, the differential table of the round function f3 was computed, and the specific differential characteristic was extended by 3 rounds to establish the relation between the ciphertext difference and the input and output differences of f3. Finally, plaintexts satisfying the required conditions were selected and encrypted; the input and output differences of f3 were computed from the ciphertext differences, and the differential table was searched for the matching input/output pairs, from which the subkey was recovered. In the two-S-box collision case, the proposed attack achieves a differential attack on 9-round Blow-CAST-Fish, one round more than the comparison attack, with the time complexity reduced from 2^107.9 to 2^74. In the single-S-box collision case, it achieves a differential attack on 15-round Blow-CAST-Fish; although this is one round fewer than the comparison attack, the weak-key ratio is raised from 2^-52.4 to 2^-42 and the data complexity is reduced from 2^54 to 2^47. The test results show that, on the basis of the same differential characteristics, the differential-table-based attack is more efficient.
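As a hedged illustration of the core data structure in this kind of attack, the sketch below builds a difference distribution table (DDT) for a toy 4-bit S-box and looks up the input pairs matching an observed input/output difference. The S-box values and table layout are illustrative assumptions, not the actual Blow-CAST-Fish f3 tables.

```python
# Build a difference distribution table (DDT) for a toy 4-bit S-box.
# The S-box below is illustrative only, not from Blow-CAST-Fish.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def build_ddt(sbox):
    n = len(sbox)
    # ddt[din][dout] = number of inputs x with S(x) ^ S(x ^ din) == dout
    ddt = [[0] * n for _ in range(n)]
    for x in range(n):
        for din in range(n):
            dout = sbox[x] ^ sbox[x ^ din]
            ddt[din][dout] += 1
    return ddt

def matching_pairs(sbox, din, dout):
    # Recover candidate input pairs from observed input/output differences,
    # the lookup step used to filter subkey candidates.
    return [(x, x ^ din) for x in range(len(sbox))
            if sbox[x] ^ sbox[x ^ din] == dout]

ddt = build_ddt(SBOX)
print(ddt[0x3][0x5], matching_pairs(SBOX, 0x3, 0x5))
```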

2.
王梅, 许传海, 刘勇. Journal of Computer Applications, 2021, 41(12): 3462-3467
Multiple kernel learning (MKL) is an important family of kernel methods, but most MKL methods suffer from two problems: the base kernels are usually traditional shallow kernel functions, whose representation ability is weak on large-scale data with uneven distributions, and the generalization error of most existing MKL methods converges at a rate of O(1/sqrt(n)), which is slow. To address this, an MKL method based on the Neural Tangent Kernel (NTK) was proposed. First, the NTK, which has a deep structure, was used as the base kernel to strengthen the representation ability of MKL. Then, a generalization error bound with a convergence rate of O(1/n) was proved using the principal eigenvalue proportion measure; on this basis, a new MKL algorithm was designed by combining a kernel alignment measure. Finally, experiments on several datasets show that the proposed algorithm achieves higher accuracy and better representation ability than classification algorithms such as AdaBoost and K-Nearest Neighbors (KNN), verifying the feasibility and effectiveness of the proposed method.
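A minimal numpy sketch of the kernel-alignment idea mentioned above, assuming Gaussian base kernels and the standard uncentered Frobenius alignment with the ideal target kernel y y^T; this is not the authors' NTK construction, only the align-and-normalize combination step.

```python
import numpy as np

def gaussian_kernel(X, gamma):
    # Gram matrix of an RBF kernel, one of several assumed base kernels.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def alignment(K1, K2):
    # Frobenius alignment <K1, K2>_F / (||K1||_F * ||K2||_F).
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = np.sign(rng.normal(size=50))
Ky = np.outer(y, y)                      # ideal target kernel y y^T

kernels = [gaussian_kernel(X, g) for g in (0.1, 1.0, 10.0)]
weights = np.array([max(alignment(K, Ky), 0.0) for K in kernels])
weights /= weights.sum()                 # align-and-normalize combination
K_combined = sum(w * K for w, K in zip(weights, kernels))
print(weights)
```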

3.
朱槐雨, 李博. Journal of Computer Applications, 2021, 41(11): 3234-3241
Unmanned aerial vehicle (UAV) aerial images have wide fields of view, and the targets in them are small with blurred edges, so the existing Single Shot multibox Detector (SSD) model has difficulty detecting small targets in aerial images accurately. To effectively address the missed detections of the original model, an SSD model based on continuous upsampling was proposed, drawing on the Feature Pyramid Network (FPN). The improved SSD model resizes the input image to 320x320 and adds a Conv3_3 feature layer; high-level features are upsampled, and a feature pyramid structure is used to fuse the first five feature layers of the VGG16 network, thereby enhancing the semantic representation of each feature layer; the sizes of the prior boxes are also redesigned. The model was trained and validated on the public aerial dataset UCAS-AOD. Experimental results show that the improved SSD model achieves a mean Average Precision (mAP) of 94.78%, an improvement of 17.62% over the existing SSD model, with gains of 4.66% on the airplane category and 34.78% on the car category.
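A hedged numpy sketch of the FPN-style top-down fusion described above: a deeper, coarser feature map is upsampled and added to a laterally projected shallower map. The shapes, nearest-neighbor upsampling, and random 1x1 projection weights are illustrative assumptions, not the exact layer configuration of the improved SSD.

```python
import numpy as np

def upsample2x(feat):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def lateral_1x1(feat, w):
    # 1x1 convolution as channel mixing: (C_out, C_in) x (C_in, H, W).
    c, h, wd = feat.shape
    return (w @ feat.reshape(c, -1)).reshape(w.shape[0], h, wd)

rng = np.random.default_rng(0)
high = rng.normal(size=(256, 10, 10))     # deeper, coarser feature map
low = rng.normal(size=(512, 20, 20))      # shallower, finer feature map
w = rng.normal(size=(256, 512)) * 0.01    # lateral projection weights

# Top-down pathway: upsample the deep map, fuse with the lateral map.
fused = upsample2x(high) + lateral_1x1(low, w)
print(fused.shape)   # (256, 20, 20)
```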

4.
蒋楚钰, 方李西, 章宁, 朱建明. Journal of Computer Applications, 2022, 42(11): 3438-3443
To address the concentration of notary-node responsibilities and the low efficiency of cross-chain transactions in the notary mechanism, a cross-chain interaction security model based on notary groups was proposed. First, notary nodes are divided into three roles: transaction verifiers, connectors, and supervisors. The members of the transaction verification group package multiple consensus-confirmed transactions into one large transaction and sign it with a threshold signature scheme. Second, confirmed transactions are placed in a cross-chain pending-transfer pool, from which a connector randomly selects multiple transactions and verifies their authenticity using techniques such as secure multi-party computation and homomorphic encryption. Finally, if the hash of the package of all qualified transactions is authentic and has been verified by the transaction verification group, the connector can proceed with the batch processing of multiple cross-chain transactions and exchange information with the blockchain. Security analysis shows that this cross-chain mechanism helps protect the confidentiality of information and the integrity of data, enables collaborative computation without the data leaving its repository, and safeguards the stability of the cross-chain blockchain system. Compared with traditional cross-chain interaction security models, the proposed model reduces the complexity of the number of signatures and of the number of notary groups to be assigned from O(n) to O(1).
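A hedged Python sketch of the batching step: multiple verified transactions are packaged, a single digest is computed over the batch, and a later check recomputes the digest before the batch is relayed. The hashing scheme and transaction format are illustrative assumptions; the model's threshold signatures and secure multi-party computation are not reproduced here.

```python
import hashlib
import json

def batch_digest(txs):
    # Deterministic digest over a batch of transactions: hash each
    # canonically serialized transaction, then hash the concatenation.
    leaves = [hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).digest()
              for tx in txs]
    return hashlib.sha256(b"".join(leaves)).hexdigest()

def connector_check(txs, claimed_digest):
    # The connector recomputes the digest and relays the batch only if
    # it matches what the verification group signed off on.
    return batch_digest(txs) == claimed_digest

txs = [{"from": "A", "to": "B", "amount": 3},
       {"from": "C", "to": "D", "amount": 7}]
digest = batch_digest(txs)            # produced by the verification group
print(connector_check(txs, digest))   # True: safe to batch-process
```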

5.
高闯, 唐冕, 赵亮. Journal of Computer Applications, 2021, 41(12): 3702-3706
To address the poor ability of existing epitope prediction methods to predict overlapping epitopes on an antigen, a model that applies an overlapping subgraph discovery algorithm based on a Local Metric (L-Metric) to epitope prediction was proposed. First, the surface atoms of the antigen are used to build an atom graph, which is then upgraded to an amino-acid residue graph. Then, an information-flow-based graph partitioning algorithm divides the residue graph into non-overlapping seed subgraphs, which are expanded into overlapping subgraphs by the L-Metric-based overlapping subgraph discovery algorithm. Finally, a classification model built from a Graph Convolutional Network (GCN) and a Fully Connected Network (FCN) classifies the expanded subgraphs into epitopes and non-epitopes. Experimental results show that, on the same dataset, the proposed model improves the F1 score by 267.3%, 57.0%, 65.4% and 3.5% over the existing epitope prediction models DiscoTope 2, ElliPro, EpiPred and Glep, respectively. Ablation results further show that the proposed overlapping subgraph discovery algorithm effectively improves prediction: the model using it achieves a 19.2% higher F1 score than the model without it.
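A hedged sketch of the expansion step: starting from disjoint seed subgraphs, each seed greedily absorbs neighboring nodes whose local connectivity to the seed exceeds a threshold, and a node may join several seeds, which is what makes the resulting subgraphs overlap. The specific local metric here (the fraction of a node's neighbors inside the seed) is an illustrative stand-in for the paper's L-Metric.

```python
# Toy residue graph as an adjacency dict; nodes are residue ids.
graph = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4, 6},
    4: {3, 5, 6}, 5: {4, 6}, 6: {3, 4, 5},
}

def expand_seed(graph, seed, threshold=0.5):
    # Greedily add boundary nodes whose neighbor-fraction inside the
    # current subgraph is at least `threshold` (stand-in local metric).
    sub = set(seed)
    changed = True
    while changed:
        changed = False
        boundary = {n for v in sub for n in graph[v]} - sub
        for n in boundary:
            score = len(graph[n] & sub) / len(graph[n])
            if score >= threshold:
                sub.add(n)
                changed = True
    return sub

seeds = [{0, 1, 2}, {4, 5, 6}]
subgraphs = [expand_seed(graph, s) for s in seeds]
print(subgraphs)  # node 3 appears in both expanded subgraphs
```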

6.
To evaluate the terrain-shadow removal effect of the Shadow-Eliminated Vegetation Index (SEVI) on commonly used decameter-resolution remote sensing images, multispectral images at four spatial resolutions acquired on 24-25 January 2019 (Sentinel-2B at 10 m, GF-1 at 16 m, Landsat 8 OLI at 30 m, and GF-4 at 50 m) were used to compute the NDVI based on surface reflectance, the SEVI, and the NDVI based on reflectance corrected with the SCS+C model. The evaluation methods included numerical analysis of the vegetation indices, relative-error analysis for self-shadowed and cast-shadowed areas, coefficient-of-variation analysis, and scatter plots of the vegetation indices against the cosine of the solar incidence angle (cos i). The results show that the relative errors of SEVI in self shadow at the four resolutions are 2.172%, 1.422%, 1.351% and 1.060%; the corresponding relative errors in cast shadow are 2.598%, 2.801%, 3.795% and 2.711%; the coefficients of determination between SEVI and cos i are 0.0173, 0.0107, 0.0011 and 0.0001; and the corresponding coefficients of variation are 10.036%, 9.070%, 8.051% and 1.631%. The results indicate that SEVI corrects terrain shadow well on remote sensing images at 10-50 m resolutions, outperforming the NDVI computed from SCS+C-corrected surface reflectance, and that the terrain-shadow effect of remote sensing images weakens as spatial resolution decreases.
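As a hedged illustration of the evaluation metrics above, the numpy sketch below computes NDVI from red/NIR reflectance and the relative error between shadowed and sunlit pixels. The synthetic band arrays and the relative-error definition (|shadow mean - sunlit mean| / sunlit mean) are assumptions, and the SEVI formula itself is not reproduced.

```python
import numpy as np

def ndvi(nir, red):
    # Standard normalized difference vegetation index.
    return (nir - red) / (nir + red + 1e-12)

def relative_error(index, shadow_mask, sunlit_mask):
    # Relative error of shadowed pixels against sunlit pixels (%),
    # the assumed form of the shadow-evaluation metric above.
    sunlit_mean = index[sunlit_mask].mean()
    shadow_mean = index[shadow_mask].mean()
    return abs(shadow_mean - sunlit_mean) / sunlit_mean * 100.0

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.10, size=(100, 100))
nir = rng.uniform(0.30, 0.50, size=(100, 100))
shadow = np.zeros((100, 100), dtype=bool)
shadow[:, :30] = True                 # pretend the left strip is shadowed
nir[shadow] *= 0.5                    # shadow depresses NIR more
red[shadow] *= 0.8                    # than the red band

vi = ndvi(nir, red)
print(relative_error(vi, shadow, ~shadow))
print(np.std(vi) / np.mean(vi) * 100)  # coefficient of variation (%)
```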

7.
Traditional sparse adaptive filtering has poor steady-state performance under impulsive noise and may even fail to converge. To improve the accuracy of sparse parameter identification without adding excessive computational cost, a sparse adaptive filtering algorithm based on the Generalized Maximum Versoria Criterion (GMVC) was proposed: GMVC with a CIM constraint (CIMGMVC). First, the generalized Versoria function, which contains the reciprocal form of the p-th moment of the error, is used as the learning criterion; when an impulsive disturbance makes the error very large, the GMVC approaches 0, thereby suppressing the impulsive noise. Second, the Correntropy-Induced Metric (CIM) is combined with the GMVC as a sparsity penalty to construct a new cost function; the CIM is built on a Gaussian probability density function and, with a suitable kernel width, can approximate the l0-norm arbitrarily well. Finally, the CIMGMVC algorithm is derived by the gradient method, and its mean-square convergence is analyzed. Simulations on the Matlab platform, with impulsive noise generated by an alpha-stable distribution model, show that the proposed CIMGMVC algorithm effectively suppresses non-Gaussian impulsive interference, is more robust than traditional sparse adaptive filtering, and achieves a lower steady-state error than the GMVC algorithm.
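A hedged sketch of one update step of a GMVC-type robust filter with a CIM-style zero-attracting term. The functional forms follow the textbook Versoria and correntropy expressions, not necessarily the paper's exact derivation, and all step sizes and kernel widths are illustrative assumptions.

```python
import numpy as np

def cimgmvc_step(w, x, d, mu=0.02, tau=1.0, p=2, lam=1e-3, sigma=0.05):
    # One assumed update of a GMVC-type robust adaptive filter with a
    # CIM-style sparsity penalty (sketch, not the paper's algorithm).
    e = d - w @ x
    # Gradient factor of the Versoria-type criterion 1 / (1 + tau*|e|^p):
    # a large |e| (impulse) drives this factor toward zero.
    g = tau * p * np.abs(e) ** (p - 1) * np.sign(e) \
        / (1 + tau * np.abs(e) ** p) ** 2
    # CIM-style zero attractor: pulls small taps toward zero but leaves
    # large (active) taps nearly untouched.
    zero_attract = lam * w * np.exp(-w**2 / (2 * sigma**2))
    return w + mu * g * x - zero_attract

rng = np.random.default_rng(0)
w_true = np.zeros(16); w_true[[2, 9]] = [1.0, -0.5]   # sparse system
w = np.zeros(16)
for _ in range(5000):
    x = rng.normal(size=16)
    noise = rng.standard_t(df=1.5) * 0.05             # heavy-tailed noise
    d = w_true @ x + noise
    w = cimgmvc_step(w, x, d)
print(np.round(w, 2))
```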

8.
Based on Sentinel-1 synthetic aperture radar (SAR) data and two contemporaneous Normalized Difference Vegetation Index (NDVI) products from MODIS and Landsat 8, a change detection model was built to estimate high-resolution soil moisture in the middle reaches of the Heihe River, and the influence of specific parameter settings on estimation accuracy was examined. The results show that: (1) when linearly modeling the backscatter-coefficient time-series difference (delta-sigma) against the vegetation index (VI), the models built from MODIS NDVI and Landsat 8 NDVI achieve their best accuracy when the proportions of sampling points selected in the delta-sigma/VI space are 2% and 4%, respectively; (2) for soil moisture retrieval, the change detection model built with Landsat 8 NDVI slightly outperforms the one built with MODIS NDVI, with root-mean-square errors (RMSE) of 0.040 m3/m3 and 0.044 m3/m3 and correlation coefficients R of 0.86 and 0.83, respectively; (3) for the key parameters of the change detection method, if the low-resolution SMAP/Sentinel-1 L2_SM_SP soil moisture product is used in place of station observations for the initial soil moisture and for the scaling factor (the maximum soil moisture change between two consecutive dates, delta-Ms,max), the soil moisture RMSE increases by 0.01 m3/m3 and 0.04 m3/m3, respectively. That is, errors in the scaling factor affect the retrieval more than errors in the initial soil moisture, so a high-accuracy scaling factor should be used in change detection estimation. These conclusions provide practical guidance for accurately retrieving high-resolution soil moisture from the emerging Sentinel-1 SAR data with change detection algorithms.
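A hedged sketch of the change detection idea above: soil moisture at the next date is the previous value plus a backscatter-difference term scaled by the maximum inter-date change delta-Ms,max. The exact functional form and vegetation correction used in the paper are not reproduced, so the linear update below is an illustrative assumption.

```python
import numpy as np

def change_detection_update(sm_prev, dsigma_db, dsigma_max_db, dm_s_max):
    # Assumed linear change detection: the soil moisture increment is the
    # backscatter change (dB) normalized by its dynamic range, scaled by
    # the maximum plausible inter-date change dm_s_max (m^3/m^3).
    increment = dm_s_max * dsigma_db / dsigma_max_db
    return np.clip(sm_prev + increment, 0.0, 0.5)

sm = 0.20                              # station-observed initial moisture
dsigma = np.array([1.2, -0.8, 2.5])    # Sentinel-1 backscatter changes (dB)
for ds in dsigma:
    sm = change_detection_update(sm, ds, dsigma_max_db=3.0, dm_s_max=0.04)
    print(round(float(sm), 3))
```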

9.
Function queries are an important operation in big data applications, and the query answering problem has long been a core problem in database theory. To analyze the complexity of answering function queries on big data, first, the function query language is reduced to a known decidable language via a mapping reduction, proving that the function query answering problem is computable. Second, function queries are described in a first-order language, and the complexity of this first-order language is analyzed. On this basis, the class of function queries is reduced to a known ПTQ-complete class using the NC-factor reduction, proving that, after PTIME (polynomial-time) preprocessing, the function query answering problem can be solved in NC (parallel polylogarithmic) time. From the above it follows that the function query answering problem is tractable on big data.

10.
Exploiting a structural property of the JAMBU mode, namely that associated data and plaintext can be transformed into each other, a "nonce reuse" analysis method was proposed based on basic ideas such as forgery attacks. The results show that the analysis requires a data complexity of 2^(n/2) and a time complexity of 4 x 2^(n/2). Compared with existing analysis results, this analysis has a lower data complexity.
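The 2^(n/2) data complexity is a birthday-bound phenomenon. As a hedged illustration unrelated to JAMBU's internals, the sketch below finds a collision on an n-bit truncated hash after roughly 2^(n/2) random trials; the truncated SHA-256 stands in for an n-bit internal state.

```python
import hashlib
import os

def truncated_digest(msg, n_bits=32):
    # n-bit truncation of SHA-256, standing in for an n-bit state.
    h = hashlib.sha256(msg).digest()
    return int.from_bytes(h, "big") >> (256 - n_bits)

def birthday_collision(n_bits=32):
    # Expected number of trials until a collision is about 2^(n_bits/2).
    seen = {}
    trials = 0
    while True:
        m = os.urandom(16)
        trials += 1
        t = truncated_digest(m, n_bits)
        if t in seen and seen[t] != m:
            return trials, seen[t], m
        seen[t] = m

trials, m1, m2 = birthday_collision(32)
print(trials)   # typically on the order of 2^16 = 65536
```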

11.
In this paper, we consider the k-prize-collecting minimum vertex cover problem with submodular penalties, which generalizes the well-known minimum vertex cover problem, the minimum partial vertex cover problem, and the minimum vertex cover problem with submodular penalties. We are given a cost graph G = (V, E; c) and an integer k. The problem asks for a vertex set S ⊆ V that covers at least k edges. The objective is to minimize the total cost of the vertices in S plus the penalty of the uncovered edge set, where the penalty is determined by a submodular function. We design a two-phase combinatorial algorithm based on the guessing technique and the primal-dual framework to address the problem. When the submodular penalty cost function is normalized and nondecreasing, the proposed algorithm has an approximation factor of 3. When the penalty cost function is linear, the approximation factor is reduced to 2, which is the best possible if the unique games conjecture holds.
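As a hedged illustration of the primal-dual framework the algorithm builds on, the sketch below implements the classic primal-dual 2-approximation for plain weighted vertex cover: dual edge variables are raised until a vertex constraint becomes tight, and tight vertices enter the cover. The guessing phase and the submodular penalty handling of the paper's 3-approximation are omitted.

```python
def primal_dual_vertex_cover(vertices, edges, cost):
    # Classic primal-dual 2-approximation for weighted vertex cover.
    # residual[v] tracks how much of v's cost is not yet "paid for"
    # by the dual variables of its incident edges.
    residual = dict(cost)
    cover = set()
    for (u, v) in edges:
        if u in cover or v in cover:
            continue                      # edge already covered
        # Raise this edge's dual until one endpoint becomes tight.
        delta = min(residual[u], residual[v])
        residual[u] -= delta
        residual[v] -= delta
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]
c = {"a": 2, "b": 1, "c": 3, "d": 1}
print(primal_dual_vertex_cover(V, E, c))  # e.g. {'a', 'b', 'd'}
```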

12.
With the increasing amount of data, there is an urgent need for efficient sorting algorithms to process large data sets. Hardware sorting algorithms have attracted much attention because they can exploit the parallelism of different hardware. However, traditional hardware sort accelerators suffer from the "memory wall" problem because of their multiple rounds of data transmission between memory and the processor. In this paper, we utilize the in-situ processing ability of the ReRAM crossbar to design a new ReCAM array that can perform matrix-vector multiplication and vector-scalar comparison in the same array simultaneously. Using this ReCAM array, we present ReCSA, the first dedicated ReCAM-based sort accelerator. Besides the hardware design, we also develop algorithms that maximize memory utilization and minimize memory exchanges to improve sorting performance. The sorting algorithm in ReCSA can process various data types, such as integer, float, double, and string. We also present experiments evaluating performance and energy efficiency against state-of-the-art sort accelerators. The experimental results show that ReCSA achieves speedups of 90.92x, 46.13x, 27.38x, 84.57x, and 3.36x over CPU-, GPU-, FPGA-, NDP-, and PIM-based platforms when processing numeric data sets, and performance improvements of 24.82x, 32.94x, and 18.22x over CPU-, GPU-, and FPGA-based platforms when processing string data sets.
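A hedged numpy sketch of the comparison-heavy kernel such accelerators parallelize: a rank sort, where each element's final position is the count of smaller elements, with ties broken by index. In ReCSA these comparisons would happen inside the ReCAM array; here numpy broadcasting merely mimics the all-at-once vector-scalar comparisons.

```python
import numpy as np

def rank_sort(a):
    # Each element's rank = number of strictly smaller elements, plus the
    # number of equal elements appearing earlier (stable tie-breaking).
    # The (n, n) comparison matrix mirrors the accelerator's ability to
    # compare one scalar against a whole vector in a single step.
    a = np.asarray(a)
    n = len(a)
    idx = np.arange(n)
    smaller = (a[None, :] < a[:, None]).sum(axis=1)
    earlier_equal = ((a[None, :] == a[:, None])
                     & (idx[None, :] < idx[:, None])).sum(axis=1)
    ranks = smaller + earlier_equal
    out = np.empty_like(a)
    out[ranks] = a
    return out

print(rank_sort([5, 3, 8, 3, 1]))   # [1 3 3 5 8]
```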

13.
On-line transaction processing (OLTP) systems rely on transaction logging and quorum-based consensus protocols to guarantee durability, high availability, and strong consistency. This makes the log manager a key component of distributed database management systems (DDBMSs). The leader of a DDBMS commonly adopts a centralized logging method to write log entries into a stable storage device and uses a constant log replication strategy to periodically synchronize its state to followers. With the advent of new hardware and highly parallel transaction processing, the traditional centralized design of logging limits scalability, and a constant replication trigger condition cannot always maintain optimal performance under dynamic workloads. In this paper, we propose a new log manager named Salmo with scalable logging and adaptive replication for distributed database systems. The scalable logging eliminates centralized contention by utilizing a highly concurrent data structure and speedy log-hole tracking. The kernel of adaptive replication is an adaptive log shipping method, which dynamically adjusts the number of log entries transmitted between leader and followers based on the real-time workload. We implemented and evaluated Salmo in the open-source transaction processing systems Cedar and DBx1000. Experimental results show that Salmo scales well with the number of working threads, improves peak throughput by 1.56x and reduces latency by more than 4x compared with the log replication of Raft, and maintains efficient and stable performance under dynamic workloads.
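A hedged sketch of the adaptive log shipping idea: the leader adjusts the replication batch size from observed latency feedback, shrinking batches when latency spikes and growing them when there is headroom. The control rule, thresholds, and bounds below are illustrative assumptions, not Salmo's actual policy.

```python
def adjust_batch_size(batch, latency_ms, target_ms=5.0, lo=16, hi=4096):
    # Simple multiplicative-increase / multiplicative-decrease control:
    # larger batches amortize network cost, smaller batches cut latency.
    if latency_ms > target_ms * 1.2:
        batch = max(lo, batch // 2)     # latency too high: ship sooner
    elif latency_ms < target_ms * 0.8:
        batch = min(hi, batch * 2)      # headroom left: amortize more
    return batch

batch = 256
for lat in [3.0, 2.0, 9.0, 6.5, 4.8]:   # observed replication latencies
    batch = adjust_batch_size(batch, lat)
    print(batch)                         # 512, 1024, 512, 256, 256
```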

14.
A k-CNF (conjunctive normal form) formula is a regular (k, s)-CNF formula if every variable occurs exactly s times in the formula, where k ≥ 2 and s > 0 are integers. Regular (3, s)-CNF formulas have some good structural properties, so carrying out a probability analysis of the structure of random formulas of this type is easier than conducting such an analysis for random 3-CNF formulas. Some subclasses of regular (3, s)-CNF formulas also have characteristics of intractability that differ from those of random 3-CNF formulas. For this purpose, we propose the strictly d-regular (k, 2s)-CNF formula, a regular (k, 2s)-CNF formula in which d ≥ 0 is an even number and each literal occurs s - d/2 or s + d/2 times (the literals of a variable x are x and ¬x, where x is positive and ¬x is negative). In this paper, we present a new model to generate strictly d-regular random (k, 2s)-CNF formulas, and focus on strictly d-regular random (3, 2s)-CNF formulas. Let F be a strictly d-regular random (3, 2s)-CNF formula with 2s > d. We show that there exists a real number s0 such that F is unsatisfiable with high probability when s > s0, and we present a numerical solution for s0. The result is supported by simulated experiments and is consistent with the existing conclusion for the case d = 0. Furthermore, we conjecture that, for a given d, the strictly d-regular random (3, 2s)-SAT problem has a SAT-UNSAT (satisfiable-unsatisfiable) phase transition, and our experiments support this conjecture. Finally, our experiments also show that the parameter d is correlated with the intractability of the 3-SAT problem. Therefore, our research may be helpful for generating random hard instances of the 3-CNF formula.
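A hedged sketch of one way to generate such formulas: for each variable, one polarity occurs s - d/2 times and the other s + d/2 times (so the variable occurs exactly 2s times); the literal pool is shuffled and cut into 3-literal clauses, reshuffling whenever a clause repeats a variable. The paper's generation model may differ; this is only the obvious configuration-model construction.

```python
import random

def strictly_d_regular_3cnf(n_vars, s, d, seed=0):
    # For each variable, one polarity occurs s - d/2 times and the other
    # s + d/2 times, so every variable occurs exactly 2*s times in total.
    assert d % 2 == 0 and 2 * s > d >= 0
    rng = random.Random(seed)
    pool = []
    for v in range(1, n_vars + 1):
        hi_lit = v if rng.random() < 0.5 else -v  # polarity occurring more
        pool.extend([hi_lit] * (s + d // 2))
        pool.extend([-hi_lit] * (s - d // 2))
    assert len(pool) % 3 == 0, "2*n_vars*s must be divisible by 3"
    for _ in range(10000):                        # rejection sampling
        rng.shuffle(pool)
        clauses = [pool[i:i + 3] for i in range(0, len(pool), 3)]
        if all(len({abs(l) for l in c}) == 3 for c in clauses):
            return clauses
    raise RuntimeError("no valid shuffle found; try other parameters")

print(strictly_d_regular_3cnf(n_vars=6, s=3, d=2)[:4])
```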

15.
At ToSC 2019, Ankele et al. proposed a novel idea for constructing zero-correlation linear distinguishers in the related-tweakey model. This paper further clarifies this principle and gives a search model for zero-correlation distinguishers. As a result, the authors construct, for the first time, 15-round and 17-round zero-correlation linear distinguishers for SKINNY-n-2n and SKINNY-n-3n, respectively, both of which are two rounds longer than Ankele et al.'s. Based on these distinguishers, the paper presents related-tweakey zero-correlation linear attacks on 22-round SKINNY-n-2n and 26-round SKINNY-n-3n.
