Similar Documents
1.
《Computers & Fluids》2006,35(8-9):849-854
In the special case of relaxation parameter ω = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the Maxwell–Boltzmann constraints for the equilibrium distribution, and the constraints for finite difference stencils as derived from Taylor series expansion. For convection–diffusion we analyse the equivalence between FB and the Lax–Wendroff FD scheme in detail. It follows that the Lax–Wendroff procedure is performed automatically in the finite Boltzmann schemes via the imposed Maxwell–Boltzmann constraints. Furthermore, we make some remarks on FB schemes for fluid flows, and show that an earlier related study can be extended to rectangular grids. Finally, our findings are briefly checked with simulations of natural convection in a differentially heated square cavity.
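A minimal sketch of the equivalence described above (illustrative, not the paper's code): a 1-D two-speed lattice Boltzmann scheme for pure diffusion. With ω = 1 the post-collision populations equal the equilibrium ρ/2, so one LBM step reduces exactly to the central finite-difference average ρ_new[x] = (ρ[x−1] + ρ[x+1])/2.

```python
N = 32
rho = [0.0] * N
rho[N // 2] = 1.0  # initial pulse

# populations moving right (fp) and left (fm), started at equilibrium rho/2
fp = [r / 2 for r in rho]
fm = [r / 2 for r in rho]

def lbm_step(fp, fm, omega=1.0):
    rho = [a + b for a, b in zip(fp, fm)]
    # BGK collision: relax toward the equilibrium distribution rho/2
    fp = [f + omega * (r / 2 - f) for f, r in zip(fp, rho)]
    fm = [f + omega * (r / 2 - f) for f, r in zip(fm, rho)]
    # streaming with periodic boundaries
    fp = [fp[(i - 1) % N] for i in range(N)]
    fm = [fm[(i + 1) % N] for i in range(N)]
    return fp, fm

def fd_step(rho):
    # the equivalent finite-difference scheme (diffusivity 1/2 in lattice units)
    return [(rho[(i - 1) % N] + rho[(i + 1) % N]) / 2 for i in range(N)]

rho_fd = rho[:]
for _ in range(10):
    fp, fm = lbm_step(fp, fm)
    rho_fd = fd_step(rho_fd)

rho_lbm = [a + b for a, b in zip(fp, fm)]
assert all(abs(a - b) < 1e-12 for a, b in zip(rho_lbm, rho_fd))
```

For ω ≠ 1 the two evolutions differ, which is why the equivalence is specific to this relaxation parameter.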

2.
3.
Dicumyl peroxide (DCPO), produced by the cumene hydroperoxide (CHP) process, is utilized as an initiator for polymerization, a prevailing source of free radicals, a hardener, and a cross-linking agent. DCPO has caused several thermal explosion and runaway reaction accidents in reaction and storage zones in Taiwan because of its unstable reactive nature. Differential scanning calorimetry (DSC) was used to determine thermokinetic parameters, including a heat of decomposition (ΔHd) of 700 J g⁻¹, an exothermic onset temperature (T0) of 110 °C, and an activation energy (Ea) of 130 kJ mol⁻¹, and to analyze the runaway behavior of DCPO in reaction and storage zones. To evaluate the thermal explosion of DCPO in storage equipment, the solid thermal explosion (STE) and liquid thermal explosion (LTE) modules of thermal safety software (TSS) were applied to simulate a storage tank under various environmental temperatures (Te). A Te exceeding the T0 of DCPO corresponds to a liquid thermal explosion scenario. DCPO should be stored at room temperature, shielded from sunlight, and must not exceed the self-accelerating decomposition temperature (SADT) of 67 °C determined for a tank (radius = 1 m, height = 2 m); the SADT of DCPO in a box (width, length and height of 1 m each) was determined to be 60 °C. The TSS was employed to simulate the fundamental thermal explosion behavior in a large tank or a drum. Results from curve fitting demonstrated that, even at an early stage of the reaction in the experiments, ambient temperature could elicit exothermic reactions of DCPO. To curtail the risk, relevant hazard information is essential and must be provided in the manufacturing process.
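An illustrative back-of-envelope sketch of why onset temperature matters here: a zero-order Arrhenius rate with the activation energy reported above. The pre-exponential factor A is a hypothetical placeholder, not a measured value for DCPO.

```python
import math

R = 8.314    # J mol-1 K-1, gas constant
Ea = 130e3   # J mol-1, activation energy from the DSC analysis above
A = 1e12     # 1/s, hypothetical pre-exponential factor (assumed)

def rate_constant(T_celsius):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    T = T_celsius + 273.15
    return A * math.exp(-Ea / (R * T))

# Decomposition accelerates by orders of magnitude between room
# temperature storage and the exothermic onset temperature T0:
k_room = rate_constant(25.0)
k_onset = rate_constant(110.0)
assert k_onset / k_room > 1e4
```

This steep temperature dependence is what makes exceeding the SADT in storage so hazardous.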

4.
In the present study, the motion of isothermal circular particles in a two-dimensional vertical channel, with hot and cold isothermal conditions at the left and right walls, in the presence of thermal convection was investigated. An isothermal circular particle with a particle-to-fluid density ratio ρr = ρp/ρf = 1.00232, where ρp and ρf denote the particle and fluid densities, respectively, was considered. Numerical simulations were carried out using the direct forcing/fictitious domain (DF/FD) method to investigate the solid motion in a fluid with a Prandtl number of 0.7 for Grashof numbers ranging from 0 to 50. Under the conditions of the present problem, the particle motion is mainly governed by the thermal convection between the side walls of the channel and the particle, and by the wall confinement. The results of the present study indicate that three regimes of particle behavior can be identified in the present range of Grashof numbers, regardless of the cold or hot thermal boundary condition of the particle. In the first regime, the particle exhibits steady settling behavior; in the second regime, it undergoes a transient overshoot before steady settling; in the third regime, the particle motion is dominated by thermal levitation.
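A small sketch of the dimensionless group that organizes these regimes. The fluid property values below are illustrative placeholders, not taken from the paper.

```python
g = 9.81        # m/s^2, gravitational acceleration
beta = 3.4e-3   # 1/K, thermal expansion coefficient (assumed, air-like)
nu = 1.5e-5     # m^2/s, kinematic viscosity (assumed)

def grashof(delta_T, L):
    """Grashof number Gr = g * beta * dT * L^3 / nu^2,
    the ratio of buoyancy to viscous forces."""
    return g * beta * delta_T * L**3 / nu**2

# e.g. a 10 K wall-to-wall difference over a 1 cm length scale:
Gr = grashof(delta_T=10.0, L=0.01)
assert 1400 < Gr < 1600
```

The study's range Gr = 0 to 50 corresponds to weak buoyancy, where convection competes with, rather than overwhelms, the particle's settling.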

5.
《Parallel Computing》2014,40(5-6):144-158
One of the main difficulties in multi-point statistical (MPS) simulation based on annealing techniques or genetic algorithms is the excessive amount of time and memory that must be spent in order to achieve convergence. In this work we propose code optimizations and parallelization schemes for a genetic-based MPS code with the aim of speeding up the execution time. The code optimizations involve reducing cache misses in array accesses, avoiding branching instructions, and increasing the locality of the accessed data. The hybrid parallelization scheme combines a fine-grain parallelization of loops using a shared-memory programming model (OpenMP) with a coarse-grain distribution of load among several computational nodes using a distributed-memory programming model (MPI). Convergence, execution time and speed-up results are presented using 2D training images of sizes 100 × 100 × 1 and 1000 × 1000 × 1 on a distributed-shared memory supercomputing facility.
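An illustrative sketch (not the authors' code) of the coarse-grain level of such a hybrid scheme: candidate evaluations in a genetic algorithm are independent, so they can be distributed across workers without changing the result. The fitness function here is a stand-in for the expensive MPS mismatch evaluation.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(candidate):
    # stand-in for the expensive training-image mismatch evaluation
    return sum((x - 0.5) ** 2 for x in candidate)

# a small hypothetical population of candidate realizations
population = [[i / 10.0, (i + 1) / 10.0] for i in range(8)]

serial = [fitness(c) for c in population]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(fitness, population))

# the coarse-grain decomposition must not change the result
assert serial == parallel
```

In the paper's setting the same decomposition is expressed with MPI across nodes, with OpenMP handling the fine-grain loops inside each evaluation.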

6.
The lattice Boltzmann method (LBM) and traditional finite difference methods have separate strengths when solving the incompressible Navier–Stokes equations. The LBM is an explicit method with a highly local computational nature that uses floating-point operations that involve only local data and thereby enables easy cache optimization and parallelization. However, because the LBM is an explicit method, smaller grid spacing requires smaller numerical time steps during both transient and steady state computations. Traditional implicit finite difference methods can take larger time steps as they are not limited by the CFL condition, but only by the need for time accuracy during transient computations. To take advantage of the strengths of both methods, a multiple solver, multiple grid block approach was implemented and validated for the 2-D Burgers’ equation in Part I of this work. Part II implements the multiple solver, multiple grid block approach for the 2-D backward step flow problem. The coupled LBM–VSM solver is found to be faster by a factor of 2.90 (2.87 and 2.93 for Re = 150 and Re = 500, respectively) on a single processor than the VSM for the 2-D backward step flow problem while maintaining similar accuracy.
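The time-step limitation mentioned above can be made concrete with a standard stability bound (illustrative numbers, not from the paper): an explicit scheme for a viscous term must satisfy a diffusive restriction of the form dt ≤ dx²/(2ν) per direction, so halving the grid spacing quarters the allowable time step.

```python
def max_explicit_dt(dx, nu):
    """Diffusive stability bound for an explicit scheme: dt <= dx^2 / (2*nu)."""
    return dx**2 / (2.0 * nu)

nu = 1e-3  # assumed kinematic viscosity, for illustration only
dt_coarse = max_explicit_dt(0.01, nu)
dt_fine = max_explicit_dt(0.005, nu)

# refining dx by 2x shrinks the explicit time step by 4x;
# an implicit solver faces no such restriction
assert abs(dt_coarse / dt_fine - 4.0) < 1e-9
```

This quadratic penalty is exactly what motivates pairing the explicit LBM blocks with implicit finite-difference blocks.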

7.
Tetrazino-tetrazine-tetraoxide (TTTO) is an attractive high-energy compound, but unfortunately it has not yet been synthesized experimentally. Isomerization of TTTO leads to five isomers. Bond-separation energies were employed to compare the global stability of the six compounds; isomer 1 has the highest bond-separation energy (1204.6 kJ/mol), compared with 1151.2 kJ/mol for TTTO. Thermodynamic properties of the six compounds were calculated theoretically, including standard formation enthalpies (solid and gaseous), standard fusion, vaporization and sublimation enthalpies, lattice energies, normal melting points and normal boiling points. Their detonation performances were also computed, including detonation heat (Q), detonation velocity (D, km/s), detonation pressure (P, GPa) and impact sensitivity (h50, cm). Compared with TTTO (Q = 1311.01 J/g, D = 9.228 km/s, P = 40.556 GPa, h50 = 12.7 cm), isomer 5 exhibits better detonation performance (Q = 1523.74 J/g, D = 9.389 km/s, P = 41.329 GPa, h50 = 28.4 cm).

8.
A blue organic light-emitting device, based on an iridium phosphorescent dopant in a polyvinylcarbazole host, has been modified by the addition of an external CaS:Eu inorganic phosphor layer. By incorporating a surfactant in the phosphor mixture, a uniform coating could be achieved by drop-casting. The resulting hybrid device exhibited white light emission, with Commission Internationale de l’Eclairage, CIE (x, y), coordinates of x = 0.32, y = 0.35. No significant change in these coordinates was observed for current densities in the range 25–510 A m⁻². The maximum power efficiency of the white device was 2.3 lm W⁻¹ at a brightness of 254 cd m⁻².

9.
A three-dimensional parallel unstructured non-nested multigrid solver for unsteady incompressible viscous flow is developed and validated. The finite-volume Navier–Stokes solver is based on the artificial compressibility approach with a high-resolution characteristics-based scheme for handling convection terms. The unsteady flow is calculated with a matrix-free implicit dual time stepping scheme. The parallelization of the multigrid solver is achieved by a multigrid domain decomposition approach (MG-DD), using the single program multiple data (SPMD) and multiple instruction multiple data (MIMD) programming paradigms. Two parallelization strategies are proposed in this work: the first is a one-level strategy using geometric domain decomposition alone; the second is a two-level strategy combining geometric domain decomposition with data decomposition. The Message Passing Interface (MPI) and the OpenMP standard are used to communicate data between processors and to decompose loop iterations over arrays, respectively. The parallel multigrid code is used to simulate both steady and unsteady incompressible viscous flows over a circular cylinder and a lid-driven cavity flow. A maximum speedup of 22.5 was achieved on 32 processors, for instance for the lid-driven cavity flow at Re = 1000. The results obtained agree well with numerical solutions obtained by other researchers as well as with experimental measurements. A detailed study of the time step size and the number of pseudo-sub-iterations per time step required for simulating unsteady flow is presented in this paper.
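For readers unfamiliar with the multigrid idea underlying the solver, here is a deliberately tiny educational sketch (1-D Poisson, two grid levels, weighted Jacobi smoothing), far simpler than the 3-D unstructured solver described above:

```python
def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h**2
    return r

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    # damped Jacobi smoother for -u'' = f with Dirichlet boundaries
    for _ in range(sweeps):
        un = u[:]
        for i in range(1, len(u) - 1):
            un[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h**2 * f[i])
        u = un
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                             # pre-smoothing
    r = residual(u, f, h)
    rc = [r[2 * i] for i in range((len(u) + 1) // 2)]  # restrict by injection
    ec = jacobi([0.0] * len(rc), rc, 2 * h, 50)        # coarse-grid solve
    for i in range(1, len(u) - 1):                     # prolongate and correct
        if i % 2 == 0:
            u[i] += ec[i // 2]
        else:
            u[i] += 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return jacobi(u, f, h, 3)                          # post-smoothing

n = 65
h = 1.0 / (n - 1)
f = [1.0] * n
u = [0.0] * n
r0 = max(abs(x) for x in residual(u, f, h))
u = two_grid(u, f, h)
r1 = max(abs(x) for x in residual(u, f, h))
assert r1 < r0  # one two-grid cycle reduces the residual
```

The paper's MG-DD approach parallelizes this kind of cycle by assigning subdomains of each grid level to different processors.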

10.
In manufacturing industries, it is well known that process variation is a major source of poor-quality products. As such, monitoring and diagnosis of variation are essential for continuous quality improvement. This becomes more challenging when two correlated variables (bivariate) are involved, whereby the selection of a statistical process control (SPC) scheme becomes more critical. Nevertheless, the existing traditional SPC schemes for bivariate quality control (BQC) were mainly designed for rapid detection of unnatural variation, with limited capability in avoiding false alarms, that is, imbalanced monitoring performance. Another issue is the difficulty in identifying the source of unnatural variation, that is, a lack of diagnosis, especially when dealing with small shifts. In this research, a scheme to address balanced monitoring and accurate diagnosis was investigated. The design involved extensive simulation experiments to select an input representation based on raw data and statistical features, an artificial neural network recognizer based on a synergistic model, and a monitoring–diagnosis approach based on a two-stage technique. The study focused on bivariate processes with cross-correlation ρ = 0.1–0.9 and mean shifts μ = ±0.75–3.00 standard deviations. The proposed two-stage intelligent monitoring scheme (2S-IMS) gave superior performance, namely average run lengths ARL1 = 3.18–16.75 (out-of-control process) and ARL0 = 335.01–543.93 (in-control process), and a recognition accuracy RA = 89.5–98.5%. The scheme was validated in the manufacturing of an audio-video device component. This research provides a new perspective on realizing balanced monitoring and accurate diagnosis in BQC.
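To put the reported ARL0 range in context, a back-of-envelope check of the in-control average run length scale (illustrative, for a classical univariate 3-sigma Shewhart chart, not the authors' 2S-IMS): ARL0 = 1/α, where α is the per-sample false-alarm probability.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

alpha = 2.0 * (1.0 - norm_cdf(3.0))  # two-sided 3-sigma control limits
arl0 = 1.0 / alpha
assert 370 < arl0 < 371  # the classical ~370 in-control benchmark
```

The paper's ARL0 of roughly 335 to 544 brackets this classical benchmark, which is what "balanced monitoring" refers to.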

11.
《Information and Computation》2007,205(7):1078-1095
Assume that G = (V, E) is an undirected graph, and C ⊆ V. For every v ∈ V, denote Ir(G; v) = {u ∈ C : d(u, v) ≤ r}, where d(u, v) denotes the number of edges on any shortest path from u to v in G. If all the sets Ir(G; v) for v ∈ V are pairwise different, and none of them is the empty set, the code C is called r-identifying. The motivation for identifying codes comes, for instance, from finding faulty processors in multiprocessor systems or from location detection in emergency sensor networks. The underlying architecture is modelled by a graph. We study various types of identifying codes that are robust against six natural changes in the graph: known or unknown edge deletions, additions or both. Our focus is on the radius r = 1. We show that in the infinite square grid the optimal density of a 1-identifying code that is robust against one unknown edge deletion is 1/2, and that the optimal density of a 1-identifying code that is robust against one unknown edge addition equals 3/4 in the infinite hexagonal mesh. Moreover, although it is shown that all six problems are in general different, we prove that in the binary hypercube there are cases where five of the six problems coincide.
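A small concrete sketch of the definition above: a checker for whether a code C is 1-identifying in a graph given as an adjacency list, tried on the 6-cycle (the example graph and codes are ours, not from the paper).

```python
def is_identifying(adj, C):
    """Return True if C is a 1-identifying code of the graph adj."""
    ids = []
    for v in adj:
        ball = {v} | set(adj[v])      # closed neighbourhood, radius 1
        I = frozenset(ball & C)
        if not I:
            return False              # every I(v) must be nonempty
        ids.append(I)
    return len(ids) == len(set(ids))  # and all I(v) pairwise distinct

# the 6-cycle: vertex i adjacent to i-1 and i+1 (mod 6)
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

assert is_identifying(cycle6, {0, 1, 2, 4})      # a valid 1-identifying code
assert not is_identifying(cycle6, {0, 1, 3})     # I(0) = I(1) = {0, 1}
```

Robustness against edge changes, as studied in the paper, additionally requires the property to survive every admissible modification of adj.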

12.
We present a validation strategy for the enhancement of an unstructured industrial finite-volume solver, designed for steady RANS problems, towards large-eddy-type simulation with near-wall modelling of incompressible high-Reynolds-number flow. Different parts of the projection-based discretisation are investigated to ensure the LES capability of the numerical method. Turbulence model parameters are calibrated by minimising least-squares functionals for first- and second-order statistics of the basic benchmark problems of decaying homogeneous turbulence and turbulent channel flow. The method is then applied to the flow over a backward-facing step at Reh = 37,500. Of special interest is the role of the spatial and temporal discretisation errors for low-order schemes. For wall-bounded flows, the present results confirm existing best-practice guidelines for mesh design. For free shear layers, a sensor to quantify the resolution quality of the LES, based on the resolved turbulent kinetic energy, is presented and applied to the same backward-facing step flow.
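A hedged sketch of the kind of resolution sensor described above: the fraction of turbulent kinetic energy that is resolved rather than modelled. The 80% threshold is the common rule of thumb attributed to Pope, used here for illustration; it is not necessarily the authors' exact criterion.

```python
def resolved_fraction(k_resolved, k_sgs):
    """Fraction of turbulent kinetic energy carried by resolved scales:
    M = k_res / (k_res + k_sgs)."""
    return k_resolved / (k_resolved + k_sgs)

# illustrative values for a well-resolved and an under-resolved region
assert resolved_fraction(0.9, 0.1) >= 0.8   # adequately resolved LES
assert resolved_fraction(0.5, 0.5) < 0.8    # flag for local refinement
```

Such a sensor gives a field-local, quantitative criterion for where the LES mesh is adequate, which is especially useful in free shear layers where wall-based guidelines do not apply.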

13.
We perform a stability and convergence analysis of sequential methods for coupled flow and geomechanics, in which the mechanics sub-problem is solved first. We consider slow deformations, so that inertia is negligible and the mechanical problem is governed by an elliptic equation. We use Biot’s self-consistent theory to obtain the classical parabolic-type flow problem. We use a generalized midpoint rule (parameter α between 0 and 1) time discretization, and consider two classical sequential methods: the drained and undrained splits.

The von Neumann method provides sharp stability estimates for the linear poroelasticity problem. The drained split with backward Euler time discretization (α = 1) is conditionally stable; its stability depends only on the coupling strength and is independent of time step size. The drained split with the midpoint rule (α = 0.5) is unconditionally unstable. The mixed time discretization, with α = 1.0 for mechanics and α = 0.5 for flow, has the same stability properties as the backward Euler scheme. The von Neumann method indicates that the undrained split is unconditionally stable when α ≥ 0.5.

We extend the stability analysis to the nonlinear regime (poro-elastoplasticity) via the energy method. It is well known that the drained split does not inherit the contractivity property of the continuum problem, thereby precluding unconditional stability. For the undrained split we show that it is B-stable (and therefore unconditionally stable at the algorithmic level) when α ≥ 0.5.

We also analyze the convergence of the drained and undrained splits, and derive a priori error estimates from matrix algebra and spectral analysis. We show that the drained split with a fixed number of iterations is not convergent even when it is stable. The undrained split with a fixed number of iterations is convergent for a compressible system (i.e., finite Biot modulus). For a nearly incompressible system (i.e., very large Biot modulus), the undrained split loses first-order accuracy and becomes non-convergent in time.

We also study the rate of convergence of both splits when they are used in a fully iterated sequential scheme. When the medium permeability is high or the time step size is large, which corresponds to a high diffusion of pressure, the error amplification of the drained split is lower and it therefore converges faster than the undrained split. The situation is reversed in the case of low permeability and small time step size.

We provide numerical experiments supporting all the stability and convergence estimates of the drained and undrained splits, in the linear and nonlinear regimes. We also show that our spatial discretization (finite volumes for flow and finite elements for mechanics) removes the well-documented spurious instability in consolidation problems at early times.
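A minimal sketch of a von Neumann-style stability check for the generalized midpoint (θ) rule on a single scalar decay mode u' = −λu, the building block of the parabolic flow problem above. The amplification factor is g(z) = (1 − (1 − α)z)/(1 + αz) with z = λΔt, and |g| ≤ 1 for all z > 0 exactly when α ≥ 0.5. (This illustrates single-field time integration only; the split-scheme instabilities analyzed in the paper arise from the coupling, not from this factor alone.)

```python
def amp(alpha, z):
    """Amplification factor of the generalized midpoint rule
    for u' = -lam*u, with z = lam*dt > 0."""
    return (1.0 - (1.0 - alpha) * z) / (1.0 + alpha * z)

zs = [0.1, 1.0, 10.0, 1000.0]
assert all(abs(amp(0.5, z)) <= 1.0 for z in zs)  # midpoint rule: A-stable
assert all(abs(amp(1.0, z)) <= 1.0 for z in zs)  # backward Euler: A-stable
assert abs(amp(0.0, 1000.0)) > 1.0               # forward Euler: only conditionally stable
```

The α ≥ 0.5 threshold appearing in this single-mode check is the same threshold the paper's analysis identifies for unconditional stability of the undrained split.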

14.
We study the primary decomposition of lattice basis ideals. These ideals are binomial ideals with generators given by the elements of a basis of a saturated integer lattice. We show that the minimal primes of such an ideal are completely determined by the sign pattern of the basis elements, while the embedded primes are not. As a special case we examine the ideal generated by the 2 × 2 adjacent minors of a generic m × n matrix. In particular, we determine all minimal primes in the 3 × n case. We also present faster ways of computing a generating set for the associated toric ideal from a lattice basis ideal.

15.
In this work, the suitability of the lattice Boltzmann method is evaluated for the simulation of subcritical turbulent flows around a sphere. Special measures are taken to reduce the computational cost without sacrificing the accuracy of the method. A large eddy simulation turbulence model is employed to allow efficient simulation of resolved flow structures on non-uniform computational meshes. In the vicinity of solid walls, where the flow is governed by the presence of a thin boundary layer, local grid-refinement is employed in order to capture the fine structures of the flow. In the test case considered, reference values for the drag force in the Reynolds number range from 2000 to 10 000 and for the surface pressure distribution and the angle of separation at a Reynolds number of 10 000 could be quantitatively reproduced. A parallel efficiency of 80% was obtained on an Opteron cluster.

16.
This paper presents a residual-based turbulence model for the incompressible Navier–Stokes equations. The method is derived employing the variational multiscale (VMS) framework. A multiscale decomposition of the continuous solution and an a priori unique decomposition of the admissible spaces of functions lead to two coupled nonlinear problems, termed the coarse-scale and fine-scale sub-problems. The fine-scale velocity field is assumed to be nonlinear and time-dependent and is modeled via the bubble-functions approach applied directly to the fine-scale sub-problem. A significant contribution of this paper is a systematic and consistent derivation of the fine-scale variational operator, commonly termed the stabilization tensor, that possesses the right order in the advective and diffusive limits and variationally projects the fine-scale solution onto the coarse-scale space. A direct treatment of the fine-scale problem via bubble functions offers several fine-scale approximation options with varying degrees of mathematical sophistication, which are investigated via benchmark problems. The numerical accuracy of the proposed method is shown on a forced isotropic turbulence problem, statistically stationary turbulent channel flow at Reτ = 395 and 590, and non-equilibrium turbulent flow around a cylinder at Re = 3,900.

17.
A hybrid computational system, composed of the finite element method (FEM) and a cascade neural network system (CNNs), is applied to the identification of three geometrical parameters of elastic arches, i.e. span l, height f and cross-sectional thickness h. FEM is used in the direct (forward) analysis, which corresponds to the mapping α = {l, f, h} → {ωj}, where α is the vector of control parameters and ωj are the arch eigenfrequencies. The reverse analysis is related to the identification procedure, in which the reverse mapping {ωj} → {αi} is performed. For identification purposes, a recurrent, three-level CNNs of structure (Dk-Hk-1)s was formulated, where k is the recurrence step and s = I, II, III are the levels of the cascade system. A semi-Bayesian approach is introduced for the design of the CNNs, applying the MML (Maximum Marginal Likelihood) criterion. The computation of hyperparameters is performed by means of the Bayesian evidence procedure. The numerical analysis demonstrates the high numerical efficiency of the proposed hybrid approach for both perfect (noiseless) values of the eigenfrequencies and noisy ones simulated by added artificial noise.

18.
The development of a thermal switch based on arrays of liquid–metal micro-droplets is presented. Prototype thermal switches are assembled from a silicon substrate on which is deposited an array of 1600 30-μm liquid–metal micro-droplets. The liquid–metal micro-droplet array makes and breaks contact with a second bare silicon substrate. A gap between the two silicon substrates is filled with either air at 760 Torr, air at 0.5 Torr, or xenon at 760 Torr. Heat transfer and thermal resistance across the thermal switches are measured for “on” (make contact) and “off” (break contact) conditions using guard-heated calorimetry. The figure of merit for a thermal switch, the ratio of “off”-state thermal resistance to “on”-state thermal resistance, Roff/Ron, is 129 ± 43 for a xenon-filled thermal switch that opens 100 μm and 60 ± 17 for a 0.5 Torr air-filled thermal switch that opens 25 μm. These thermal resistance ratios are shown to be markedly higher than values of Roff/Ron for a thermal switch based on contact between polished silicon surfaces. Transient temperature measurements for the liquid–metal micro-droplet switches indicate thermal switching times of less than 100 ms. Switch lifetimes are found to exceed one million cycles.
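A rough sketch of why the fill gas matters for the "off" state (assumed handbook property values, not measurements from the paper): in the continuum regime the gap's conduction resistance scales inversely with the gas thermal conductivity, so low-conductivity xenon raises Roff.

```python
k_air = 0.026      # W m-1 K-1 near 300 K (assumed handbook value)
k_xenon = 0.0055   # W m-1 K-1 near 300 K (assumed handbook value)

def gap_resistance(gap_m, k, area_m2=1e-4):
    """1-D conduction resistance R = L / (k * A), continuum regime,
    neglecting radiation and rarefaction effects."""
    return gap_m / (k * area_m2)

# for the same 100 um open gap, xenon gives a several-fold higher
# off-state resistance than atmospheric air
assert gap_resistance(100e-6, k_xenon) > 4 * gap_resistance(100e-6, k_air)
```

The low-pressure air switch exploits a different mechanism, rarefaction of the gas, which this continuum formula deliberately does not capture.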

19.
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent fluid flow in rough pipes. In this paper, a state-of-the-art review of the currently available explicit alternatives to the Colebrook–White equation is presented. An extensive comparison test was established on a 20 × 500 grid for a wide range of relative roughness (ε/D) and Reynolds number (R) values (1 × 10⁻⁶ ≤ ε/D ≤ 5 × 10⁻²; 4 × 10³ ≤ R ≤ 10⁸), covering a large portion of the turbulent flow zone in Moody’s diagram. Based on a comprehensive error analysis, the (ε/D, R) pairs at which the maximum absolute and the maximum relative errors occur are identified. Most of these approximations provide friction factor estimates characterized by a mean absolute error of 5 × 10⁻⁴, a maximum absolute error of 4 × 10⁻³, a mean relative error of 1.3% and a maximum relative error of 5.8% over the entire range of ε/D and R values. For practical purposes, the complete results for the maximum and mean relative errors versus the 20 sets of ε/D values are also presented in two comparative figures. The examination of the error properties of these approximations makes it possible to identify the most accurate formula among all the previous explicit models, demonstrating its great flexibility for estimating the turbulent flow friction factor. Comparative analysis of the mean relative error profile revealed that the classification of the six best-fitted equations examined was in good agreement with the best-model selection criterion reported in the recent literature, for all performed simulations.
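A sketch of the comparison methodology at a single grid point: solve the implicit Colebrook–White equation by fixed-point iteration and compare one well-known explicit approximation (Swamee–Jain, used here as a representative; the paper reviews many more) against it.

```python
import math

def colebrook(rel_rough, Re, tol=1e-12):
    """Solve 1/sqrt(f) = -2*log10(eps/D/3.7 + 2.51/(Re*sqrt(f)))
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 0.02 ** -0.5  # initial guess from f ~ 0.02
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new**2

def swamee_jain(rel_rough, Re):
    """Explicit Swamee-Jain approximation to the Colebrook friction factor."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / Re**0.9) ** 2

# sample point inside the review's (eps/D, R) range
f_exact = colebrook(1e-4, 1e6)
f_approx = swamee_jain(1e-4, 1e6)
assert abs(f_approx - f_exact) / f_exact < 0.03
```

Sweeping this comparison over the full (ε/D, R) grid yields exactly the mean and maximum error statistics the review reports.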

20.
It is shown that the photonic crystal slab (PCS) with hexagonal air holes has band gaps in the guided-mode spectrum comparable to those of the PCS with circular air holes; it is thus also a good candidate for PC devices. A PC with hexagonal air holes, a = 0.5 μm and r = 0.15 μm, was fabricated successfully by selective area metal organic vapor phase epitaxy (SA-MOVPE). Vertical and smooth sidewalls are formed and the uniformity is very good. The same process was also used to successfully fabricate a hexagonal air hole array with a width of 0.1 μm. An air-bridge PCS with hexagonal air holes, a = 0.3 μm and r = 0.09 μm, was also fabricated successfully by SA-MOVPE. Further optimization of the growth conditions for the sacrificial layer and of the selective etching of the GaAs cap layer is still needed. Our experimental results indicate that SA-MOVPE is a promising method for fabricating PC devices and photonic nanostructures.
