Similar Documents
 Found 20 similar documents (search time: 500 ms)
1.
Information-theoretically secure (ITS) authentication is needed in quantum key distribution (QKD). In this paper, we study the security of an ITS authentication scheme proposed by Wegman & Carter in the case of a partially known authentication key. This scheme uses a new authentication key in each authentication attempt, to select a hash function from an Almost Strongly Universal\(_2\) hash function family. The partial knowledge of the attacker is measured as the trace distance between the authentication key distribution and the uniform distribution; this is the usual measure in QKD. We provide direct proofs of security of the scheme when using a partially known key, first in the information-theoretic setting and then in terms of witness indistinguishability as used in the universal composability (UC) framework. We find that if the authentication procedure has a failure probability \(\varepsilon \) and the authentication key has an \(\varepsilon ^{\prime }\) trace distance to the uniform, then under ITS, the adversary's success probability conditioned on an authentic message-tag pair is bounded only by \(\varepsilon +|\mathcal T |\varepsilon ^{\prime }\), where \(|\mathcal T |\) is the size of the set of tags. Furthermore, the trace distance between the authentication key distribution and the uniform increases to \(|\mathcal T |\varepsilon ^{\prime }\) after an authentic message-tag pair has been seen. Despite this, we are able to prove directly that the authenticated channel is indistinguishable from an (ideal) authentic channel (the desired functionality), except with probability less than \(\varepsilon +\varepsilon ^{\prime }\). This proves that the scheme is (\(\varepsilon +\varepsilon ^{\prime }\))-UC-secure, without using the composability theorem.
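As a toy illustration of the Wegman–Carter idea this abstract analyzes (a fresh authentication key per message selecting a hash from an almost universal family), here is a minimal sketch using a polynomial-evaluation hash over GF(p). The construction, constants, and names are illustrative assumptions, not the specific family analyzed in the paper:

```python
# Illustrative Wegman-Carter-style tagging (NOT the paper's construction):
# a polynomial-evaluation hash over GF(p), keyed freshly per message.
P = 2**61 - 1  # a Mersenne prime defining the field GF(p)

def poly_hash(msg_blocks, k):
    """Evaluate the message polynomial at point k over GF(p)."""
    acc = 0
    for b in msg_blocks:
        acc = (acc * k + b) % P
    return acc

def tag(msg_blocks, auth_key):
    """Tag = hash under k1, masked by k2; (k1, k2) is used once."""
    k1, k2 = auth_key
    return (poly_hash(msg_blocks, k1) + k2) % P

msg = [314, 159, 265]
key = (123456789, 987654321)   # fresh per authentication attempt
t = tag(msg, key)
assert tag(msg, key) == t                 # the legitimate tag verifies
assert tag([314, 159, 266], key) != t     # this modified message fails
```

The bound quoted in the abstract concerns exactly this kind of scheme: a forger who has seen one valid message-tag pair still succeeds with probability at most \(\varepsilon + |\mathcal T|\varepsilon^{\prime}\) when the key is \(\varepsilon^{\prime}\)-close to uniform.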

2.
Partial information leakage of the generated key undoubtedly influences the security of a practical Quantum Key Distribution (QKD) system. In this paper, based on finite-key analysis and a deep investigation of privacy amplification, we present a method for characterizing the information leakage gained by an adversary in each authentication round, thereby extending the theory derived by Cederlöf and Larsson (IEEE Trans Inf Theory 54:1735–1741, 2008) to the practical case. Since, except in the first round, the authentication key is fed from one round of generated keys to the next, by considering its security weakness due to information leakage and the finite-size effect, we further propose a universal formula for calculating the lifetime of the initial authentication key used in QKD with finite resources. Numerical simulations indicate that our bound for estimating information leakage strictly characterizes the stability of practical QKD against information-leakage-based attacks, and that our lifetime formula can precisely evaluate the usage time of the initial authentication key. Our work provides a practical solution for evaluating the authentication security of QKD.

3.
Identity theft is the most recurrent twenty-first-century cybercrime. Authentication is thus of utmost significance, as the number of hackers who seek to intrude into legitimate users' accounts to obtain sensitive information is increasing. Identity-based authentication serves to corroborate the identity of the user so that only the legitimate user gets access to the service. This paper proposes a quantum identity-based authentication and key agreement scheme for a cloud server architecture. Quantum cryptography, based on the laws of quantum physics, is a vital technology for securing privacy and confidentiality in the field of network security. A formal security analysis has been performed using the AVISPA tool, which confirms the security of the proposed scheme. The security analysis of the proposed protocol shows that it is robust against all security attacks. To confirm the applicability of quantum key distribution in cloud computing, a practical long-distance entanglement-based QKD experiment has been proposed. This experiment confirms the successful generation of sifted keys over a distance of 100 km of optical fiber with a key rate of 4.11 bit/s and an error rate of 9.21 %.

4.
We propose an optical scheme for quantum key distribution in which bits are encoded in the relative phases of four bipartite weak coherent states ${|\alpha, \alpha\rangle, |-\alpha, -\alpha\rangle, |-\alpha, \alpha\rangle}$ and ${|\alpha, -\alpha \rangle}$ with respect to a strong reference pulse. We discuss the security of the scheme against eavesdropping strategies such as photon number splitting, photon beam splitting and intercept-resend attacks. We find that the present scheme is more sensitive to these eavesdropping strategies than the two-dimensional non-orthogonal-state-based protocol and the BB84 protocol. Our scheme is very simple and requires only passive optical elements such as beam splitters, phase shifters and photon detectors, and hence is within the reach of presently available technology.

5.
Laser defects in single-photon emission, photon signal attenuation, and error propagation have long been serious obstacles in practical long-distance quantum key distribution (QKD) experiments. In this paper, we study the uncertainty principle in metrology and use this tool to analyze the statistical fluctuations of the number of received single photons, the yield of single photons, and the quantum bit error rate (QBER). We then calculate the error between the measured and real value of every parameter, and account for the propagation of error among all the measured values. We restate the Gottesman–Lo–Lütkenhaus–Preskill (GLLP) formula in consideration of those parameters and generate QKD simulation results. In this study, the safe distribution distance grows as the coding photon length increases. When the coding photon length is \(N = 10^{11}\), the safe distribution distance reaches almost 118 km. This is a lower bound on the safe transmission distance, compared with the 127 km obtained without the uncertainty principle. Our study is thus in line with established theory, but makes it more realistic.

6.
Existing classical post-processing (CPP) schemes for quantum key distribution (QKD)-based quantum private queries (QPQs), including the \(kN\rightarrow N\), \(N\rightarrow N\), and \(rM\rightarrow N\) ones, have been found to be imperfect in terms of communication efficiency and security. In this paper, we propose a novel CPP scheme for QKD-based QPQs. The proposed CPP scheme greatly reduces the communication complexity and improves the security of QKD-based QPQ protocols. Furthermore, it can efficiently support multi-bit queries.

7.
We calculate the fidelity with which an arbitrary state can be encoded into a [7, 1, 3] Calderbank-Shor-Steane quantum error correction code in a non-equiprobable Pauli operator error environment with the goal of determining whether this encoding can be used for practical implementations of quantum computation. The determination of usability is accomplished by applying ideal error correction to the encoded state which demonstrates the correctability of errors that occurred during the encoding process. We also apply single-qubit Clifford gates to the encoded state and determine the accuracy with which these gates can be implemented. Finally, fault tolerant noisy error correction is applied to the encoded states allowing us to compare noisy (realistic) and perfect error correction implementations. We find the encoding to be usable for the states ${|0\rangle, |1\rangle}$, and ${|\pm\rangle = |0\rangle\pm|1\rangle}$. These results have implications for when non-fault tolerant procedures may be used in practical quantum computation and whether quantum error correction must be applied at every step in a quantum protocol.

8.
The non-conforming immersed finite element method (IFEM) developed in Li et al. (Numer Math 96:61–98, 2003) for interface problems is studied extensively in this paper. The non-conforming IFEM is very much like the standard finite element method but with modified basis functions that enforce the natural jump conditions on interface elements. While the non-conforming IFEM is simple and reasonably accurate, it is not fully second-order accurate due to the discontinuities of the modified basis functions. The conforming IFEM, also developed in Li et al. (Numer Math 96:61–98, 2003), is fully second-order accurate, but its implementation is more complicated. A new symmetric and consistent IFEM is developed in this paper. The new method maintains the advantages of the non-conforming IFEM by using the same basis functions, but it is symmetric, consistent and, more importantly, second-order accurate. The idea is to add correction terms to the weak form to take into account the discontinuities in the basis functions. Optimal error estimates are derived for the new symmetric and consistent IFE method in the \(L^2\) and \(H^1\) norms. Numerical examples presented in this paper confirm the theoretical analysis and show that the newly developed IFE method also achieves \(O(h^2)\) convergence in the \(L^\infty \) norm.

9.
Using \(\chi \)-type entangled states, a novel scheme for multi-party quantum state sharing (MQSTS) of an arbitrary multi-qubit state is investigated. It is shown that the MQSTS scheme can be faithfully realized by performing appropriate Bell state measurements, Z-basis measurements and local unitary operations, rather than multi-qubit entanglement or multi-particle joint measurements. Thus, our MQSTS scheme is more convenient in practical applications than some previous schemes. Furthermore, its intrinsic efficiency for qubits approaches 100 %, and the total efficiency approaches the maximal value, which is higher than those of previous MQSTS schemes. Finally, we analyze its security in detail against both participant attacks and outside attacks.

10.
In quantum key distribution (QKD), information-theoretically secure authentication is necessary to guarantee the integrity and authenticity of the information exchanged over the classical channel. To reduce key consumption, the authentication scheme with key recycling (KR), in which a secret but fixed hash function is used for multiple messages while each tag is encrypted with a one-time pad (OTP), is preferred in QKD. Based on the assumption that the OTP key is perfect, the security of this authentication scheme has been proved. However, the OTP key for authentication in a practical QKD system is not perfect. How an imperfect OTP affects the security of the authentication scheme with KR is analyzed thoroughly in this paper. In practical QKD, the information of the OTP key resulting from QKD is partially leaked to the adversary. Although the information leakage is usually small enough to be neglected, it leads to increasingly degraded security of the authentication scheme as the system runs continuously. Both our theoretical analysis and simulation results demonstrate that the security level of the authentication scheme with KR, mainly indicated by its substitution probability, degrades exponentially in the number of rounds and gradually diminishes to zero.

11.
We present a data structure that stores a sequence s[1..n] over alphabet [1..σ] in $n\mathcal{H}_{0}(s) + o(n)(\mathcal {H}_{0}(s){+}1)$ bits, where $\mathcal{H}_{0}(s)$ is the zero-order entropy of s. This structure supports the queries access, rank and select, which are fundamental building blocks for many other compressed data structures, in worst-case time ${\mathcal{O} ( {\lg\lg\sigma} )}$ and average time ${\mathcal{O} ( {\lg\mathcal{H}_{0}(s)} )}$. The worst-case complexity matches the best previous results, yet those had been achieved with data structures using $n\mathcal{H}_{0}(s)+o(n\lg \sigma)$ bits. On highly compressible sequences the $o(n\lg\sigma)$ bits of redundancy may be significant compared to the $n\mathcal{H}_{0}(s)$ bits that encode the data. Our representation, instead, compresses the redundancy as well. Moreover, our average-case complexity is unprecedented. Our technique is based on partitioning the alphabet into characters of similar frequency. The subsequence corresponding to each group can then be encoded using fast uncompressed representations without harming the overall compression ratios, even in the redundancy. The result also improves upon the best current compressed representations of several other data structures. For example, we achieve (i) compressed redundancy, retaining the best time complexities, for the smallest existing full-text self-indexes; (ii) compressed permutations $\pi$ with times for $\pi()$ and $\pi^{-1}()$ improved to log-logarithmic; and (iii) the first compressed representation of dynamic collections of disjoint sets. We also point out various applications to inverted indexes, suffix arrays, binary relations, and data compressors. Our structure is practical on large alphabets. Our experiments show that, as predicted by theory, it dominates the space/time tradeoff map of all the sequence representations, both in synthetic and application scenarios.
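The space bound above is stated in terms of the zero-order empirical entropy $\mathcal{H}_{0}(s)$, i.e. the entropy of the symbol frequencies. A short, self-contained sketch of computing it (illustrative only; the function name is our own):

```python
from collections import Counter
from math import log2

def h0(s):
    """Zero-order empirical entropy of sequence s, in bits per symbol:
    H0(s) = sum over symbols c of (n_c/n) * log2(n/n_c)."""
    n = len(s)
    return sum(c / n * log2(n / c) for c in Counter(s).values())

s = "abracadabra"
print(round(h0(s), 3))           # entropy in bits per symbol
print(round(len(s) * h0(s), 1))  # ~ n*H0(s), the leading space term
```

On a run of a single repeated symbol, $\mathcal{H}_{0}=0$, which is exactly the regime where an $o(n\lg\sigma)$ redundancy term would dominate the $n\mathcal{H}_{0}(s)$ payload.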

12.
The notion of plaintext awareness ( ${\mathsf{PA}}$ ) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen-ciphertext attacks ( ${\mathsf{IND}\hbox {-}{\mathsf{CCA}}}$ ), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving ${\mathsf{PA}}$ in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving ${\mathsf{PA}}$ of schemes in one model cannot be adapted to the other model. Existing research addresses ${\mathsf{PA}}$ in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus lies mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of ${\mathsf{PA}}$ in proof strategies of ${\mathsf{IND}\hbox {-}{\mathsf{CCA}}}$ security, and explores relationships between ${\mathsf{PA}}$ and other security properties. On the practical side, our work offers the first, highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and ${\mathsf{IND}\hbox {-}{\mathsf{CCA}}}$-secure. Our construction is inspired by the Fujisaki-Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of $\gamma $-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs), this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is ${\mathsf{PA}}$ and ${\mathsf{IND}\hbox {-}{\mathsf{CCA}}}$-secure by applying our construction to a popular IB-KEM and optimizing it for better performance.

13.
One-dimensional (linear) bar codes have been used for distribution purposes such as product information and distribution channel identification. Linear bar codes support only a one-directional code layout and provide only limited code error detection capability. Two-dimensional bar codes (e.g., the QR code), which extend one-dimensional bar codes, were developed in database-based and index-based types. Database-type barcodes embed the full information bits and show a weak recognition rate under geometric distortion. Index-based barcodes embed only the index information and require additional network servers to interpret it, which leads to limited information storage capacity. Instead of using visible bar codes, we propose CDPC (circular dot pattern code), a dot-based code that is less visible than bar codes. We design CDPC to be more robust to geometric distortion and noise than previous coding schemes by using circular template matching. To maximize information capacity and robustness, we use circular dot patterns, which are more robust to affine transformations. The code can easily be extended according to the number of data circles: with \(n\) data circles, we can embed \(\left(5\sum_{k=2}^{n+1} k\right) - n\) data bits. In our experiments, we set the number of circles to three, for a resulting information capacity of 42 bits per code. To extract information from a CDPC code, we perform (1) image capture, (2) identification of dots, (3) graph-based topological analysis of dot patterns, (4) template matching between topological graphs using position symbols, and (5) information bit extraction with error correction capability. To evaluate the information capacity under various geometric distortions, we test our CDPC with StirMark Benchmark's affine transformations (simulating geometric and noise attacks) and with real cell phone image captures. Our experimental results also show that our CDPC scheme achieves more robust recognition performance than schemes proposed in previous research, including the QR code.
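The stated capacity formula can be checked numerically; a small sketch (function name is ours, not the paper's):

```python
def cdpc_capacity(n):
    """Data bits for n data circles, per the abstract's formula:
    (5 * sum_{k=2}^{n+1} k) - n."""
    return 5 * sum(range(2, n + 2)) - n

# With three data circles, as in the paper's experiments:
print(cdpc_capacity(3))  # 5*(2+3+4) - 3 = 42 bits per code
```

This reproduces the 42 bits per code reported for the three-circle configuration.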

14.
Forms are our gates to the Web. They enable us to access the deep content of Web sites. Automatic form understanding provides applications, ranging from crawlers over meta-search engines to service integrators, with a key to this content. Yet, it has received little attention other than as a component of specific applications such as crawlers or meta-search engines. No comprehensive approach to form understanding exists, let alone one that produces rich models for semantic services or integration with linked open data. In this paper, we present opal, the first comprehensive approach to form understanding and integration. We identify form labeling and form interpretation as the two main tasks involved in form understanding. On both problems, opal advances the state of the art: for form labeling, it combines features from the text, structure, and visual rendering of a Web page. In extensive experiments on the ICQ and TEL-8 benchmarks and a set of 200 modern Web forms, opal outperforms previous approaches for form labeling by a significant margin. For form interpretation, opal uses a schema (or ontology) of forms in a given domain. Thanks to this domain schema, it is able to produce nearly perfect ($>$ 97 % accuracy in the evaluation domains) form interpretations. Yet, the effort to produce a domain schema is very low, as we provide a datalog-based template language that eases the specification of such schemata and a methodology for deriving a domain schema largely automatically from an existing domain ontology. We demonstrate the value of opal's form interpretations through a light-weight form integration system that successfully translates and distributes master queries to hundreds of forms with no error, yet is implemented with only a handful of translation rules.

15.
Compressed representations have become effective for storing and accessing large Web and social graphs, in order to support various graph querying and mining tasks. Existing representations exploit various typical patterns in those networks and provide basic navigation support. In this paper, we obtain unprecedented results by finding "dense subgraph" patterns and combining them with techniques such as node orderings and compact data structures. On those representations, we support out-neighbor and out/in-neighbor queries, as well as mining queries based on the dense subgraphs. First, we propose a compression scheme for Web graphs that reduces edges by representing dense subgraphs with "virtual nodes"; over this scheme, we apply node orderings and other compression techniques. With this approach, we match the best current compression ratios that support out-neighbor queries (i.e., nodes pointed to from a given node), using 1.0–1.8 bits per edge (bpe) on large Web graphs, and retrieving each neighbor of a node in 0.6–1.0 microseconds (\(\upmu \)s). When supporting both out- and in-neighbor queries, our technique generally offers the best time when using little space. If the reduced graph is instead represented with a compact data structure that supports bidirectional navigation, we obtain the most compact Web graph representations (0.9–1.5 bpe) that support out/in-neighbor navigation; the time per neighbor extracted, however, rises to around 5–20 \(\upmu \)s. We also propose a compact data structure that represents dense subgraphs without using virtual nodes. It allows us to recover out/in-neighbors and answer other, more complex queries on the identified dense subgraphs. This structure is not competitive on Web graphs, but on social networks it achieves 4–13 bpe and 8–12 \(\upmu \)s per out/in-neighbor retrieved, which improves upon all existing representations.
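To get a feel for the bits-per-edge (bpe) figures quoted above, a back-of-the-envelope size estimate (illustrative; the function and the one-billion-edge graph are our own assumptions, not from the paper):

```python
def graph_size_mib(num_edges, bits_per_edge):
    """Approximate storage, in MiB, of a compressed adjacency
    representation that spends `bits_per_edge` bits per edge."""
    return num_edges * bits_per_edge / 8 / 2**20

# A hypothetical Web graph with 10^9 edges at the reported 0.9-1.5 bpe:
for bpe in (0.9, 1.5):
    print(f"{bpe} bpe -> {graph_size_mib(1_000_000_000, bpe):.0f} MiB")
```

At around 1 bpe, a billion-edge graph fits in roughly a hundred MiB, versus several GiB for a plain 32-bit adjacency list, which is the point of these representations.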

16.
The behavior of total quantum correlations (discord) in dimers consisting of dipolar-coupled spins 1/2 is studied. We find that the discord $Q=0$ at absolute zero temperature. As the temperature $T$ increases, the quantum correlations in the system first increase from zero to a maximum and then decrease to zero according to the asymptotic law $T^{-2}$. It is also shown that in the absence of an external magnetic field $B$, the classical correlations $C$ at $T\rightarrow 0$ are, vice versa, maximal. Our calculations predict that in crystalline gypsum $\hbox {CaSO}_{4}\cdot \hbox {2H}_{2}{\hbox {O}}$ the value of the natural $(B=0)$ quantum discord between nuclear spins of hydrogen atoms is maximal at a temperature of 0.644 $\upmu $K, and for 1,2-dichloroethane $\hbox {H}_{2}$ClC–$\hbox {CH}_{2}{\hbox {Cl}}$ the discord achieves its largest value at $T=0.517~\upmu $K. In both cases, the discord equals $Q\approx 0.083$ bit/dimer, which is $8.3\,\%$ of its upper limit in two-qubit systems. We also estimate that for gypsum at room temperature $Q\sim 10^{-18}$ bit/dimer, and for 1,2-dichloroethane at $T=90$ K the discord is $Q\sim 10^{-17}$ bit per dimer.

17.
Multi-dimensional color image processing faces two difficulties. One is that a large number of bits are needed to store multi-dimensional color images; for example, a three-dimensional color image of $1024 \times 1024 \times 1024$ needs $1024 \times 1024 \times 1024 \times 24$ bits. The other is that the efficiency or accuracy of image segmentation is not high enough for some images to be used in content-based image search. To solve these problems, this paper proposes a new representation for multi-dimensional color images, called the $(n\,+\,1)$-qubit normal arbitrary quantum superposition state (NAQSS), where $n$ qubits represent the colors and coordinates of ${2^n}$ pixels (e.g., a three-dimensional color image of $1024 \times 1024 \times 1024$ is represented using only 30 qubits), and the remaining qubit represents image segmentation information to improve the accuracy of image segmentation. We then design a general quantum circuit to create the NAQSS state in order to store a multi-dimensional color image in a quantum system, and propose a quantum circuit simplification algorithm to reduce the number of quantum gates in the general circuit. Finally, different strategies to retrieve a whole image or a target sub-image from a quantum system are studied, including Monte Carlo sampling and an improved Grover's algorithm which can find a coordinate of a target sub-image running in only $O(\sqrt{N/r} )$ time, where $N$ and $r$ are the numbers of pixels of the image and the target sub-image, respectively.
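The qubit count claimed for NAQSS ($n$ qubits for $2^n$ pixels, plus one segmentation qubit) can be checked with a quick sketch (function name hypothetical, not from the paper):

```python
from math import ceil, log2

def naqss_qubits(*dims):
    """Qubits for an NAQSS-style representation: n qubits index the
    2^n pixels of the image, plus 1 qubit of segmentation information."""
    pixels = 1
    for d in dims:
        pixels *= d
    n = ceil(log2(pixels))  # qubits for colors and coordinates
    return n + 1            # plus the segmentation qubit

# A 1024 x 1024 x 1024 color image: log2(2^30) = 30 pixel qubits,
# 31 qubits in total, versus 1024**3 * 24 classical bits.
print(naqss_qubits(1024, 1024, 1024))
```

This is the exponential compression the abstract refers to: 31 qubits in place of roughly $2.6 \times 10^{10}$ classical bits.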

18.
Via the moment closure of the Boltzmann transport equation, extended hydrodynamic models for electron transport were derived in Cai et al. (J Math Phys 53:103503, 2012). With the numerical scheme developed recently in Li et al. (2012), it has been demonstrated that the derived extended hydrodynamic models can capture the major features of the solution of the kinetic equations. As an application of these models and the numerical scheme proposed therein, in this paper we carry out numerical simulations to investigate carrier transport in $n^{+}$-$n$-$n^{+}$ structures by solving the moment system derived from the Boltzmann–Poisson equations. Without any empirical parameters beyond those used in directly solving the kinetic equations, the extended hydrodynamic models yield numerical results in very satisfactory agreement with the solution of the kinetic equations, even when the channel length is as short as 50 nm.

19.
An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
  1. Depth \(O(1)\), \(n^{o(1)}\) bits of non-uniformity, and size \(n^{O(1)}\).
  2. Depth \(O(1)\), \(\mathrm{polylog}(n)\) bits of non-uniformity, and size \(s(n)\) such that for all constants \(c\) the \(c\)-fold composition of \(s\), \(s^{(c)}(n)\), is less than \(2^n\).
  3. Depth \(o(\log \log n)\), \(\mathrm{polylog}(n)\) bits of non-uniformity, and size \(n^{O(1)}\).
(1) strengthens a result of Jansen and Santhanam (Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999) and Koiran and Perifel (Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam because the notion of weak uniformity more fully and easily interpolates between full uniformity and non-uniformity of circuits.

20.
Christian Kreuzer, Calcolo, 2013, 50(2):79–110
We generalize the a posteriori techniques for the linear heat equation in Verfürth (Calcolo 40(3):195–212, 2003) to the nonlinear parabolic $p$-Laplace problem, thereby proving reliable and efficient a posteriori error estimates for a fully discrete implicit Euler Galerkin finite element scheme. The error is analyzed using the so-called quasi-norm and a related dual error expression. This leads to equivalence of the error and the residual, which is the key property for proving the error bounds.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
