Similar Documents
 20 similar documents found (search time: 421 ms)
1.
This paper presents analytical and numerical results for two new anisotropic modifications of the Rational and Clark-α LES models. The main difference from their standard form is that in this study horizontal (as opposed to isotropic) spatial filtering is used, which is appropriate for turbulent mixing in stratified flows. We present several mathematical results regarding the horizontal Rational and Clark-α LES models. We also present numerical experiments that support the analytical developments and show that both horizontal LES models perform better than their standard, isotropic counterparts in approximating mixing in a 3D lock-exchange problem at Reynolds number Re=10,000.

2.
According to a classical result of Grünbaum, the transversal number τ(ℱ) of any family ℱ of pairwise-intersecting translates or homothets of a convex body C in ℝ^d is bounded by a function of d. Denote by α(C) (resp. β(C)) the supremum of the ratio of the transversal number τ(ℱ) to the packing number ν(ℱ) over all finite families ℱ of translates (resp. homothets) of a convex body C in ℝ^d. Kim et al. recently showed that α(C) is bounded by a function of d for any convex body C in ℝ^d, and gave the first bounds on α(C) for convex bodies C in ℝ^d and on β(C) for convex bodies C in the plane.
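For intuition, consider the one-dimensional case, where translates of a convex body are intervals of equal length. The standard greedy sweep below (an illustration of the τ-versus-ν relationship, not code from the paper; the name pierce_and_pack is ours) produces a piercing set and a packing of the same size, so τ(ℱ) = ν(ℱ) for intervals:

```python
def pierce_and_pack(intervals):
    # intervals: list of (left, right) pairs, e.g. translates of a segment.
    # Greedy sweep: sort by right endpoint; piercing an unpierced interval
    # at its right endpoint covers every interval overlapping it. The
    # chosen intervals are pairwise disjoint, so the piercing set and the
    # packing have equal size: tau(F) = nu(F) in dimension one.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    points, packing = [], []
    last = float("-inf")
    for l, r in intervals:
        if l > last:          # not pierced by any point chosen so far
            last = r          # pierce at the right endpoint
            points.append(last)
            packing.append((l, r))
    return points, packing
```

In higher dimensions this equality fails, which is exactly why bounding the ratio α(C) is nontrivial.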

3.
We present a PDE observer that estimates the velocity, pressure, electric potential and current fields in a magnetohydrodynamic (MHD) channel flow, also known as Hartmann flow. This flow is characterized by an electrically conducting fluid moving between parallel plates in the presence of an externally imposed transverse magnetic field. The system is described by the inductionless MHD equations, a combination of the Navier-Stokes equations and a Poisson equation for the electric potential under the so-called inductionless MHD approximation in a low magnetic Reynolds number regime. We identify physical quantities (measurable on the wall of the channel) that are sufficient to generate convergent estimates of the velocity, pressure, and electric potential field away from the walls. Our observer consists of a copy of the linearized MHD equations, combined with linear injection of output estimation error, with observer gains designed using backstepping. Pressure, skin friction and current measurements from one of the walls are used for output injection. For zero magnetic field or nonconducting fluid, the design reduces to an observer for the Navier-Stokes Poiseuille flow, a benchmark for flow control and turbulence estimation. We show that for the linearized MHD model the estimation error converges to zero in the L^2 norm. Despite being a subject of practical interest, the problem of observer design for nondiscretized 3-D MHD or Navier-Stokes channel flow has so far been an open problem.

4.
Speed scaling is a power management technique that involves dynamically changing the speed of a processor. This gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and optimize some Quality of Service (QoS) measure of the resulting schedule. Yao, Demers, and Shenker (Proc. IEEE Symp. Foundations of Computer Science, pp. 374–382, 1995) considered the problem where the QoS constraint is deadline feasibility and the objective is to minimize the energy used. They proposed an online speed scaling algorithm Average Rate (AVR) that runs each job at a constant speed between its release and its deadline. They showed that the competitive ratio of AVR is at most (2α)^α/2 if a processor running at speed s uses power s^α. We show the competitive ratio of AVR is at least ((2−δ)α)^α/2, where δ is a function of α that approaches zero as α approaches infinity. This shows that the competitive analysis of AVR by Yao, Demers, and Shenker is essentially tight, at least for large α. We also give an alternative proof that the competitive ratio of AVR is at most (2α)^α/2 using a potential function argument. We believe that this analysis is significantly simpler and more elementary than the original analysis of AVR in Yao et al. (Proc. IEEE Symp. Foundations of Computer Science, pp. 374–382, 1995).
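The AVR rule itself is simple enough to state in a few lines: each job runs at its constant density (work divided by lifetime), and the processor speed at any instant is the sum of the densities of the jobs alive then. A minimal sketch (our own illustration; the crude numerical energy integration and its step size are assumptions, not part of the paper):

```python
def avr_speed(jobs, t):
    # jobs: list of (release, deadline, work). AVR runs each job at its
    # constant density work / (deadline - release) over its lifetime, so
    # the processor speed at time t is the sum of densities of all jobs
    # whose interval [release, deadline) contains t.
    return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)

def avr_energy(jobs, alpha, dt=1e-3):
    # Crude numerical integration of power = speed**alpha over the horizon;
    # the analysis bounds this against the optimal offline energy.
    horizon = max(d for (_, d, _) in jobs)
    return sum(avr_speed(jobs, k * dt) ** alpha * dt
               for k in range(int(horizon / dt)))
```

For a single unit job on [0, 1] the speed is 1 throughout and the energy is 1 for any α, matching the s^α power model.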

5.
Murray’s law, which governs the bifurcations of vascular blood vessels, states that the cube of a parent vessel’s diameter equals the sum of the cubes of the daughter vessels’ diameters: D_0^3 = D_1^3 + D_2^3, i.e., α = D_0^3/(D_1^3 + D_2^3) = 1, where D_0, D_1, and D_2 are the diameters of the parent and two daughter vessels, respectively, and α is the diameter ratio. The structural characteristics of the vessels are crucial in the development of the cardiovascular system as well as for the proper functioning of an organism. In order to understand the vascular circulation system, it is essential to understand the design rules or scaling laws of the system under a homeostatic condition. In this study, Murray’s law in the extraembryonic arterial bifurcations and its relationship with the bifurcation angle (θ) have been investigated in vivo using 3-day-old chicken embryos. Bifurcation is an important geometric factor in biological systems, having a significant influence on the circulation in the vascular system. Parameters such as diameter and bifurcation angle of all 140 vessels tested were measured using image analysis software. The experimental results for α (= 1.053 ± 0.188) showed good agreement with the ratio of 1 predicted by Murray’s law. Furthermore, the diameter ratio α approached the theoretical value of 1 as the diameter of the parent vessel D_0 decreased below 100 μm. The bifurcation angle θ decreased as D_0 increased and vice versa. For the arterial bifurcations of the chicken embryos tested in this study, the bifurcation pattern appears to be symmetric (D_1 = D_2). The bifurcation angle exhibited a nearly constant value of 77°, close to the theoretical value of 75° for a symmetric bifurcation.
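The diameter ratio α used throughout the abstract is a direct computation. A small sketch (our illustration; the function name is ours), which also shows the symmetric case D_1 = D_2 = D_0/2^(1/3) that makes α exactly 1:

```python
def murray_ratio(d0, d1, d2):
    # alpha = D0^3 / (D1^3 + D2^3); Murray's law predicts alpha == 1
    # at a homeostatic vascular bifurcation.
    return d0 ** 3 / (d1 ** 3 + d2 ** 3)

# Symmetric bifurcation obeying Murray's law: d1 = d2 = d0 / 2**(1/3),
# so d1**3 + d2**3 = 2 * (d0**3 / 2) = d0**3 and the ratio is 1.
```

Deviations of the measured α from 1 (here 1.053 ± 0.188) quantify how closely the embryonic vessels follow the cube law.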

6.
We introduce efficient margin-based algorithms for selective sampling and filtering in binary classification tasks. Experiments on real-world textual data reveal that our algorithms perform significantly better than popular and similarly efficient competitors. Using the so-called Mammen-Tsybakov low noise condition to parametrize the instance distribution, and assuming linear label noise, we show bounds on the convergence rate to the Bayes risk of a weaker adaptive variant of our selective sampler. Our analysis reveals that, excluding logarithmic factors, the average risk of this adaptive sampler converges to the Bayes risk at rate N^{−(1+α)(2+α)/(2(3+α))}, where N denotes the number of queried labels, and α > 0 is the exponent in the low noise condition. For all α > √3 − 1 ≈ 0.73 this convergence rate is asymptotically faster than the rate N^{−(1+α)/(2+α)} achieved by the fully supervised version of the base selective sampler, which queries all labels. Moreover, for α → ∞ (hard margin condition) the gap between the semi- and fully supervised rates becomes exponential.

7.
We give three results related to online nonclairvoyant speed scaling to minimize total flow time plus energy. We give a nonclairvoyant algorithm LAPS, and show that for every power function of the form P(s) = s^α, LAPS is O(1)-competitive; more precisely, the competitive ratio is 8 for α = 2, 13 for α = 3, and 2α²/ln α for α > 3. We then show that there is no constant c, and no deterministic nonclairvoyant algorithm A, such that A is c-competitive for every power function of the form P(s) = s^α. So necessarily the achievable competitive ratio increases as the steepness of the power function increases. Finally we show that there is a fixed, very steep, power function for which no nonclairvoyant algorithm can be O(1)-competitive.

8.
We show that subclasses of q-ary separable Goppa codes Γ(L, G) with L = {α ∈ GF(q^{2ℓ}): G(α) ≠ 0} and with special Goppa polynomials G(x) can be represented as a chain of equivalent and embedded codes. For all codes of the chain we obtain an improved bound for the dimension and find an exact value of the minimum distance. A chain of binary codes is considered as a particular case with specific properties.

9.
It was recently found that dark energy in the form of phantom generalized Chaplygin gas may lead to a new form of a cosmic doomsday, the Big Freeze singularity. Like the Big Rip singularity, the Big Freeze singularity would also take place at finite future cosmic time, but, unlike the Big Rip, it happens for a finite scale factor. Our goal is to test if a universe filled with phantom generalized Chaplygin gas can conform to the data of astronomical observations. We shall see that if the universe is only filled with generalized phantom Chaplygin gas with the equation of state p = −c²s²/ρ^α with α < −1, then such a model cannot be matched to the observational data; generally speaking, such a universe has an infinite age. To construct more realistic models, one actually needs to add dark matter. This procedure results in cosmological scenarios which do not contradict the values of universe age and expansion rate and allow one to estimate how far we now are from the future Big Freeze doomsday.

10.
We consider random graphs, and their extensions to random structures, with edge probabilities of the form βn^{−α}, where n is the number of vertices, α, β are fixed and α > 1 (α > arity − 1 for structures of higher arity). We consider conjunctive properties over these random graphs, and investigate the problem of computing their asymptotic conditional probabilities. This provides us a novel approach to dealing with uncertainty in databases, with applications to data privacy and other database problems. This work was partially supported by the grants NSF SEIII 0513877, NSF 61-2252, and NSF IIS-0428168.
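Sampling from this sparse regime is straightforward; a minimal sketch (our illustration, assuming the reconstructed edge probability p = βn^{−α}; for α > 1 the expected number of edges, roughly βn^{2−α}/2, grows sublinearly in n):

```python
import itertools
import random

def sparse_random_graph(n, alpha, beta, seed=0):
    # G(n, p) with p = beta * n**(-alpha). Each of the C(n, 2) potential
    # edges is included independently with probability p; the fixed seed
    # makes the sketch reproducible.
    rng = random.Random(seed)
    p = beta * n ** (-alpha)
    return [e for e in itertools.combinations(range(n), 2)
            if rng.random() < p]
```

Conditioning a conjunctive property on such a sample is where the paper's asymptotic analysis takes over; the sketch only shows the underlying distribution.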

11.
To construct the asymptotically optimum plan of the p-index axial assignment problem of order n, p algorithms α_0, α_1, ..., α_{p−1} with complexities equal to O(n^{p+1}), O(n^p), ..., O(n²) operations, respectively, are proposed and substantiated under some additional conditions imposed on the coefficients of the objective function. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 176–181, November–December 2005.

12.
A modified fast approximation algorithm for the 0-1 knapsack problem with improved complexity is presented, based on the schemes of Ibarra, Kim and Babat. By using a new partition of items, the algorithm solves the n-item 0-1 knapsack problem to any relative error tolerance ε > 0 in the scaled profit space P*/K = O(1/ε^{1+δ}) with time O(n log(1/ε) + 1/ε^{2+2δ}) and space O(n + 1/ε^{2+δ}), where P* and b are the maximal profit and the weight capacity of the knapsack problem, respectively, K is a problem-dependent scaling factor, δ = α/(1+α), and α = O(log b). This algorithm reduces the exponent of the complexity bound in [5].
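The profit-scaling idea underlying such schemes can be sketched in a few lines. The following is a textbook Ibarra-Kim-style sketch, not the paper's modified algorithm (whose item partition and scaling factor K are more refined); the choice K = ε·pmax/n here is an assumption for illustration:

```python
def scaled_knapsack(profits, weights, capacity, eps):
    # Round profits down by K = eps * pmax / n, then run an exact DP
    # over scaled total profit. Rounding loses at most eps * pmax in
    # total, giving a (1 - eps)-approximation in time polynomial in
    # n and 1/eps instead of pseudo-polynomial in pmax.
    n = len(profits)
    K = eps * max(profits) / n
    scaled = [int(p // K) for p in profits]
    total = sum(scaled)
    INF = float("inf")
    # dp[q] = minimum weight achieving scaled profit exactly q
    dp = [0] + [INF] * total
    for p, w in zip(scaled, weights):
        for q in range(total, p - 1, -1):
            if dp[q - p] + w < dp[q]:
                dp[q] = dp[q - p] + w
    best = max(q for q in range(total + 1) if dp[q] <= capacity)
    return best * K  # lower bound on the true optimum
```

The paper's contribution is shrinking the exponent of the 1/ε terms in the time and space bounds beyond what this plain scaling achieves.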

13.
Artificial neural networks have been extensively applied in various fields of science and engineering. This is mainly because feedforward neural networks (FNNs) have the universal approximation capability [1-9]. A typical example of…

14.
The longest common subsequence problem (LCS) and the closest substring problem (CSP) are two models for finding common patterns in strings, and have been studied extensively. Though both LCS and CSP are NP-hard, they exhibit very different behavior with respect to polynomial-time approximation algorithms. While LCS is hard to approximate within n^δ for some δ > 0, CSP admits a polynomial-time approximation scheme. In this paper, we study the longest common rigid subsequence problem (LCRS). This problem shares similarity with both LCS and CSP and has an important application in motif finding in biological sequences. We show that it is NP-hard to approximate LCRS within ratio n^δ, for some constant δ > 0, where n is the maximum string length. We also show that it is NP-hard to approximate LCRS within ratio Ω(m), where m is the number of strings.
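For contrast with the hardness results, the two-string LCS itself is solvable exactly by the classic quadratic dynamic program (standard textbook code, not from the paper):

```python
def lcs_length(a, b):
    # Classic O(|a| * |b|) dynamic program: dp[i][j] is the LCS length
    # of a[:i] and b[:j]. LCRS constrains the relative positions of the
    # matched characters across many strings, which is what makes it
    # hard to approximate, unlike this two-string case.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]
```

The hardness of LCS quoted above concerns many input strings; for two strings the problem is easy, which is the behavioral gap the paper explores for LCRS.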

15.
We study how to attack through different techniques a perfect fluid Bianchi I model with variable G and Λ. These tactics are: Lie group method (LM), imposing a particular symmetry, self-similarity (SS), matter collineations (MC), and kinematic self-similarity (KSS). We compare the tactics since they are quite similar (symmetry principles). We arrive at the conclusion that the LM is too restrictive and yields only a flat FRW solution with G = const and Λ = 0. The SS, MC, and KSS approaches bring us to a solution where G is a decreasing time function and Λ ∼ t^{−2}, its sign depending on the equation of state, while the exponents of the scale factors must satisfy the conditions Σ_{i=1}^{3} α_i = 1 and Σ_{i=1}^{3} α_i² < 1, ∀ω, i.e., the solution is valid for all equations of state, relaxing in this way the Kasner conditions.

16.
A. Ossicini  F. Rosati 《Calcolo》1979,16(4):371-381
In this note we give general theorems that allow the comparison of the zeros of s-orthogonal polynomials relative to weight functions p₁(x), p₀(x) for which the ratio p₁(x)/p₀(x) is a strictly monotone function on a certain interval. Applied appropriately, these theorems yield inequalities between the zeros of s-orthogonal polynomials relative to Jacobi-type weight functions p(x) = (1−x)^α (1+x)^β, α, β > −1, x ∈ ]−1, 1[; inequalities previously known only for particular values of α, β, s, and of the degree m of the polynomial.

17.
M. Drmota 《Algorithmica》2001,31(3):304-317
It is shown that the number of leftist trees of size n in a simply generated family of trees is asymptotically given by ∼ α c^n n^{−3/2} with certain constants α > 0 and c > 1. Furthermore, it is proved that the number of leaves in leftist trees with n nodes satisfies a central limit theorem. Received June 6, 2000; revised July 14, 2000.

18.
We design an adiabatic quantum algorithm for the counting problem, i.e., approximating the proportion, α, of the marked items in a given database. As the quantum system undergoes a designed cyclic adiabatic evolution, it acquires a Berry phase 2πα. By estimating the Berry phase, we can approximate α and solve the problem. For an error bound ε, the algorithm can solve the problem with cost of order (1/ε)^{3/2}, which is not as good as the optimal algorithm in the quantum circuit model, but better than the classical random algorithm. Moreover, since the Berry phase is a purely geometric feature, the result may be robust to decoherence and resilient to certain noise.

19.
Singularities in the dark energy late universe are discussed under the assumption that the Lagrangian contains the Einstein term R plus a modified gravity term R^α, where α is a constant. The 4D fluid is taken to be viscous and composed of two components, one Einstein component where the bulk viscosity is proportional to the scalar expansion ϑ, and another modified component where the bulk viscosity is proportional to the power ϑ^{2α−1}. Under these conditions, it is known from earlier work that the bulk viscosity can drive the fluid from the quintessence region (w > −1) into the phantom region (w < −1), where w is the thermodynamical parameter [I. Brevik, Gen. Rel. Grav. 38, 1317 (2006)]. We combine this 4D theory with the 5D Randall-Sundrum II theory in which there is a single spatially flat brane situated at y = 0. We find that the Big Rip singularity, which occurs in 4D theory if α > 1/2, carries over to the 5D metric in the bulk, |y| > 0. The present investigation generalizes that of an earlier paper [I. Brevik, Eur. Phys. J. C 56, 579 (2008)] in which only a one-component modified fluid was present.

20.
We consider the setting of a multiprocessor where the speeds of the m processors can be individually scaled. Jobs arrive over time and have varying degrees of parallelizability. A nonclairvoyant scheduler must assign the processes to processors, and scale the speeds of the processors. We consider the objective of energy plus flow time. We assume that a processor running at speed s uses power s^α for some constant α > 1. For processes that may have side effects or that are not checkpointable, we show an Ω(m^{(α−1)/α²}) bound on the competitive ratio of any randomized algorithm. For checkpointable processes without side effects, we give an O(log m)-competitive algorithm. Thus for processes that may have side effects or that are not checkpointable, the achievable competitive ratio grows quickly with the number of processors, but for checkpointable processes without side effects, the achievable competitive ratio grows slowly with the number of processors. We then show a lower bound of Ω(log^{1/α} m) on the competitive ratio of any randomized algorithm for checkpointable processes without side effects.
