1.
The strain response of a polarised PZT was characterised using Digital Image Correlation (DIC). The DIC algorithm is based on a global approach and regularises the displacement field using the balance equations of solid mechanics. The measurement error is reduced by correcting displacements that are mechanically inadmissible. A ferroelectric test showed that the standard deviations of the strain fields remain mostly under 1.2 × 10⁻⁴ for an element size of 64 px. The standard deviation of the average value is 3 × 10⁻⁶. The strain field is homogeneous and its average value is consistent with the strain obtained from a CCD laser measurement device. The longitudinal strain, transverse strain and polarisation response of the PZT were measured for bipolar and unipolar loadings ranging from 50 to 5000 V/mm. Material properties were extracted from these measurements. This work shows the advantages of a novel 2D-DIC algorithm for piezoelectric strain characterisation.
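As a small illustration of the last step, one material property that can be read off such measurements is the small-signal piezoelectric coefficient d33, obtained as the slope of longitudinal strain against applied field under unipolar loading. The Python sketch below uses hypothetical strain/field values, not the paper's data; only the linear converse piezoelectric relation S3 = d33·E3 is assumed.

import numpy as np

# Hypothetical unipolar loading data: applied field (V/mm) and the averaged
# longitudinal strain over the DIC field of view (values are illustrative only).
field_V_per_mm = np.array([500.0, 1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
long_strain = np.array([2.1e-4, 4.3e-4, 8.8e-4, 1.30e-3, 1.76e-3, 2.20e-3])

# Converse piezoelectric effect in the linear regime: S3 = d33 * E3, so d33 is
# the slope of strain versus field (field converted to V/m so d33 comes out in m/V).
field_V_per_m = field_V_per_mm * 1e3
d33, intercept = np.polyfit(field_V_per_m, long_strain, 1)
print(f"estimated d33 ~ {d33 * 1e12:.0f} pm/V")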
2.
The problem of bivariate density estimation is studied with the aim of finding the density function with the smallest number of local extreme values that is adequate for the given data. Adequacy is defined via Kuiper metrics. The concept of the taut-string algorithm, which provides adequate approximations with a small number of local extrema, is generalised to the analysis of two- and higher-dimensional data, using Delaunay triangulation and diffusion filtering. The results are based on equivalence relations, in one dimension, between the taut-string algorithm and the method of solving the discrete total variation flow equation. The generalisation and some modifications are developed, and their performance for density estimation is shown.
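One of the ingredients mentioned above, diffusion filtering of binned data, can be illustrated in a few lines of Python. This is only a sketch of linear diffusion applied to a 2D histogram of made-up sample data; it does not reproduce the taut-string generalisation or the Delaunay triangulation step.

import numpy as np

def diffuse(h, n_steps=50, dt=0.2):
    # Linear (isotropic) diffusion by explicit finite differences;
    # dt <= 0.25 keeps the scheme stable on a unit grid.
    h = h.astype(float).copy()
    for _ in range(n_steps):
        p = np.pad(h, 1, mode="edge")                 # replicate-edge (zero-flux) boundary
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * h)  # 5-point Laplacian
        h += dt * lap
    return h

rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
hist, xedges, yedges = np.histogram2d(data[:, 0], data[:, 1], bins=64, density=True)
smoothed = diffuse(hist)      # a crude smoothed bivariate density estimate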
3.
ISEC (Insulin SECretion) is a computer program which calculates pre-hepatic insulin secretion from plasma C-peptide measurements. The program uses a regression (population) model to derive the parameters of C-peptide kinetics from the subject's gender, type (normal, obese, non-insulin-dependent diabetes mellitus), age, weight, and height. Insulin secretion is calculated as a piecewise constant (step) function with a flexible step length, allowing a fine resolution of the secretion profile between measurements. A constrained regularisation method of deconvolution is employed to carry out the calculations. The calculated profile satisfies three properties: (i) it fits the measurements within the given level of measurement error, (ii) it is non-negative, and (iii) it minimises a regularisation criterion (the norm of second differences) which quantifies the deviation of the secretion profile from a straight line. Both theoretical aspects and specific features of ISEC are considered. To exemplify its use, pre-hepatic insulin secretion is calculated during a meal tolerance test, a frequently sampled intravenous glucose tolerance test, a hyperinsulinaemic euglycaemic glucose clamp, and under basal conditions with frequent sampling.
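The three requirements listed above can be written as one small regularised least-squares problem. The Python sketch below is an illustration only: the C-peptide impulse response, sampling grid and secretion profile are hypothetical, and requirement (i) is handled with a fixed penalty weight rather than ISEC's measurement-error-based choice.

import numpy as np
from scipy.optimize import lsq_linear

t = np.arange(0.0, 180.0, 5.0)                       # sampling times (min)
# Hypothetical two-exponential C-peptide impulse response (the real kinetic
# parameters would come from the population model).
kernel = 0.7 * np.exp(-t / 5.0) + 0.3 * np.exp(-t / 35.0)

n = len(t)
# Discrete convolution matrix: plasma(t_i) = sum_{j<=i} kernel(t_i - t_j) * secretion(t_j)
C = np.array([[kernel[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])

rng = np.random.default_rng(1)
true_secretion = np.exp(-0.5 * ((t - 60.0) / 20.0) ** 2)     # a secretion "bump"
y = C @ true_secretion + rng.normal(0.0, 0.02, n)            # noisy measurements

D2 = np.diff(np.eye(n), 2, axis=0)   # second differences: deviation from a straight line
lam = 1.0                            # regularisation weight (stand-in for criterion (i))
A = np.vstack([C, np.sqrt(lam) * D2])
b = np.concatenate([y, np.zeros(n - 2)])

# Non-negative regularised least squares: data fit + smoothness + non-negativity.
secretion_hat = lsq_linear(A, b, bounds=(0.0, np.inf)).x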
4.
In this paper, an inverse geometric problem for the modified Helmholtz equation arising in heat conduction in a fin is solved numerically using the method of fundamental solutions (MFS); the problem consists of determining an unknown inner boundary (rigid inclusion or cavity) of an annular domain from a single pair of boundary Cauchy data. The nonlinear minimisation of the objective function is regularised when noise is added to the input boundary data. The stability of the numerical results is investigated for several test examples.
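For reference, a standard 2D MFS formulation for the modified Helmholtz equation (sign conventions vary between authors) places source points ξ_j outside the solution domain and expands the solution in fundamental solutions:

\begin{align}
  \nabla^2 u - k^2 u &= 0 \quad \text{in } \Omega, \\
  G(\mathbf{x}, \boldsymbol{\xi}) &= \frac{1}{2\pi}\, K_0\!\bigl(k\,\lVert \mathbf{x} - \boldsymbol{\xi} \rVert\bigr), \\
  u(\mathbf{x}) &\approx \sum_{j=1}^{N} c_j\, G(\mathbf{x}, \boldsymbol{\xi}_j), \qquad \boldsymbol{\xi}_j \notin \overline{\Omega},
\end{align}

where K_0 is the modified Bessel function of the second kind. In the inverse problem, the coefficients c_j together with the parametrisation of the unknown inner boundary are found by the regularised nonlinear minimisation mentioned above.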
5.
The Pseudo Fisher Linear Discriminant (PFLD), based on a pseudo-inverse technique, shows a peaking behaviour of the generalisation error for training sample sizes close to the feature size: as the training sample size increases, the generalisation error first decreases to a minimum, then increases, reaching a maximum at the point where the training sample size equals the data dimensionality, and afterwards decreases again. A number of ways exist to solve this problem. In this paper, it is shown that noise injection by adding redundant features to the data is similar to other regularisation techniques, and helps to improve the generalisation error of this classifier for critical training sample sizes. Received: 10 November 1998 / Revised: 7 January 1999 / Accepted: 7 January 1999
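A minimal numerical illustration of the pseudo-inverse discriminant and of noise injection by redundant features is sketched below in Python, assuming synthetic Gaussian class data; the dimensions, noise variance and number of added features are arbitrary choices, not the paper's experimental settings.

import numpy as np

def pfld(X, y):
    # Pseudo Fisher Linear Discriminant: w = S_w^+ (m1 - m0), with the
    # pseudo-inverse replacing the (possibly singular) within-class scatter.
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.pinv(Sw) @ (m1 - m0)
    return w, w @ (m0 + m1) / 2.0

def error_rate(X, y, w, b):
    return np.mean((X @ w > b).astype(int) != y)

rng = np.random.default_rng(2)
d, n_per_class = 30, 15                         # critical regime: sample size close to d
shift = np.r_[1.5, np.zeros(d - 1)]
Xtr = np.vstack([rng.normal(0, 1, (n_per_class, d)),
                 rng.normal(0, 1, (n_per_class, d)) + shift])
ytr = np.r_[np.zeros(n_per_class), np.ones(n_per_class)].astype(int)
Xte = np.vstack([rng.normal(0, 1, (500, d)), rng.normal(0, 1, (500, d)) + shift])
yte = np.r_[np.zeros(500), np.ones(500)].astype(int)

w, b = pfld(Xtr, ytr)
print("PFLD error:", error_rate(Xte, yte, w, b))

# Noise injection: append redundant pure-noise features to training and test data.
q = 30
Xtr_aug = np.hstack([Xtr, rng.normal(0, 0.5, (len(Xtr), q))])
Xte_aug = np.hstack([Xte, rng.normal(0, 0.5, (len(Xte), q))])
w2, b2 = pfld(Xtr_aug, ytr)
print("PFLD with redundant features:", error_rate(Xte_aug, yte, w2, b2))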
6.
The Fuzzy C-Means (FCM) algorithm is a widely used and flexible approach to automated image segmentation, especially in the field of brain tissue segmentation from 3D MRI, where it addresses the problem of partial volume effects. In order to improve its robustness to classical image deterioration, namely noise and bias field artifacts, which arise in the MRI acquisition process, we propose to integrate into the FCM segmentation methodology concepts inspired by the non-local (NL) framework, initially defined and considered in the context of image restoration. The key algorithmic contributions of this article are the definition of an NL data term and an NL regularisation term to efficiently handle intensity inhomogeneities and noise in the data. The resulting new energy formulation is then built into an NL-FCM brain tissue segmentation algorithm. Experiments performed on both synthetic and real MRI data, leading to the classification of brain tissues into grey matter, white matter and cerebrospinal fluid, indicate a significant improvement in performance in the case of higher noise levels, when compared to a range of standard algorithms.
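For orientation, the baseline the article builds on is the standard FCM objective with its alternating membership/centroid updates. The Python sketch below shows only that intensity-only baseline on made-up 1D data and deliberately omits the NL data and regularisation terms that are the article's contribution.

import numpy as np

def fcm(x, n_clusters=3, m=2.0, n_iter=50, eps=1e-9, seed=0):
    # Standard fuzzy C-means on scalar intensities.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).ravel()
    v = rng.choice(x, n_clusters, replace=False)         # initial centroids
    for _ in range(n_iter):
        d2 = (x[:, None] - v[None, :]) ** 2 + eps         # squared distances to centroids
        u = d2 ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships, rows sum to 1
        v = (u ** m).T @ x / (u ** m).sum(axis=0)         # centroid update
    return u, v

# Hypothetical intensity data with three tissue-like modes (CSF / grey / white matter).
rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(30, 5, 2000),
                              rng.normal(80, 6, 3000),
                              rng.normal(120, 5, 3000)])
memberships, centroids = fcm(intensities)
labels = memberships.argmax(axis=1)      # hard labels from the fuzzy memberships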
7.
Traditional sparse coding methods break down on large-scale data because of their high computational complexity. To address the resulting failure to extract features effectively, a regularised double linear sparse coding method, DLRSC (Double Linear Regularization Sparse Coding), is proposed. A generalised multi-feature subspace framework is used to learn the structural characteristics of noise and outlier pixels; using L1-ball theory, a unique approximate solution is computed, and a filtering technique avoids the heavy computation associated with large-scale data, reducing both time and space complexity. Finally, experiments on the widely used ORL and Yale face databases verify the effectiveness of the proposed DLRSC method; the results show that, compared with several state-of-the-art sparse coding methods, it achieves better recognition performance.
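The L1-ball step referred to above is commonly implemented with the sort-and-threshold projection of Duchi et al. (2008); the Python sketch below shows that generic projection on a toy coefficient vector and is not the authors' DLRSC implementation.

import numpy as np

def project_l1_ball(v, radius=1.0):
    # Euclidean projection of v onto the L1 ball { x : sum |x_i| <= radius }.
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # magnitudes, sorted descending
    cssv = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * k > cssv - radius)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)   # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

codes = np.array([0.8, -0.5, 0.3, -0.1, 0.05])
sparse_codes = project_l1_ball(codes, radius=1.0)   # absolute values now sum to 1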
8.
Identification of prestress force from measured structural responses
A method for identifying the prestress force of a prestressed concrete bridge deck from measured structural dynamic responses is presented. A Euler–Bernoulli beam finite element model is used to represent the bridge deck, and the prestress is modelled as an axial force in each beam element. The state-space approach is used to calculate the dynamic responses of the structure and the sensitivities of those responses with respect to the structural parameters, such as the prestress force, flexural rigidity, etc. The prestress force in each beam element is taken as a system parameter and appears explicitly in the system equation for the forward analysis. In the inverse analysis, the prestress force in each element is identified using a sensitivity-based finite element model updating method. Data from one or more accelerometers or strain gauges are used in the identification. Both sinusoidal and impulsive excitations are shown to give very good results. Two numerical simulations illustrate the effectiveness and robustness of the proposed method, and laboratory work on an axially prestressed concrete beam is included as a practical application.
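The inverse step can be pictured as a generic sensitivity-based (Gauss-Newton) updating loop. The Python sketch below uses finite-difference sensitivities and a deliberately artificial two-parameter forward model as a stand-in for the state-space beam response, so only the updating scheme itself, not the beam mechanics, is illustrated.

import numpy as np

def update_parameters(model, theta0, r_meas, n_iter=20, fd_step=1e-6, damping=1e-3):
    # Generic sensitivity-based model updating: linearise the forward model,
    # solve a damped normal-equation system for the parameter increment, repeat.
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_iter):
        r = model(theta)
        # Finite-difference sensitivity matrix S_ij = d r_i / d theta_j.
        S = np.column_stack([
            (model(theta + fd_step * np.eye(len(theta))[j]) - r) / fd_step
            for j in range(len(theta))])
        d_theta = np.linalg.solve(S.T @ S + damping * np.eye(len(theta)),
                                  S.T @ (r_meas - r))
        theta += d_theta
        if np.linalg.norm(d_theta) < 1e-10:
            break
    return theta

# Toy forward model standing in for the beam response: two parameters map
# nonlinearly to a "response" vector (purely illustrative, not beam physics).
def toy_model(theta):
    t = np.linspace(0.0, 1.0, 50)
    return theta[0] * np.sin(2 * np.pi * t) * np.exp(-theta[1] * t)

true_theta = np.array([2.0, 1.5])
measured = toy_model(true_theta) + np.random.default_rng(4).normal(0, 0.01, 50)
identified = update_parameters(toy_model, np.array([1.0, 1.0]), measured)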
9.
Space–time coding is an effective approach to increase the data rate and capacity of a wireless communication system that employs multiple transmit and multiple receive antennas. It involves coding techniques that are designed for multiple transmit antennas. We apply the analytical constant modulus algorithm (ACMA) for blind channel estimation (no use of training sequences) of space–time coded systems and explore the constant modulus (CM) constraints of the transmitted space–time codes (STCs). A regularised scheme that gives good performance in Gaussian and non-Gaussian noise environments is proposed. Computer simulation results in both single-user and multiuser cases show the improvement in error performance.
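To make the constant-modulus idea concrete, the Python sketch below runs the classic adaptive (stochastic-gradient) CMA on a hypothetical QPSK signal passed through a made-up 3-tap channel. Note that the paper's ACMA solves the CM conditions algebraically and handles space–time coded, multi-antenna signals; none of that is reproduced here.

import numpy as np

rng = np.random.default_rng(5)

# Hypothetical single-antenna setup: QPSK symbols (constant modulus) through an
# unknown complex channel plus noise, equalised blindly with the adaptive CMA.
symbols = (rng.integers(0, 2, 5000) * 2 - 1 + 1j * (rng.integers(0, 2, 5000) * 2 - 1)) / np.sqrt(2)
channel = np.array([1.0 + 0.0j, 0.4 - 0.3j, 0.2 + 0.1j])
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.05 * (rng.normal(size=len(symbols)) + 1j * rng.normal(size=len(symbols)))

n_taps, mu, R = 7, 1e-3, 1.0          # equaliser length, step size, CM radius
w = np.zeros(n_taps, dtype=complex)
w[n_taps // 2] = 1.0                  # centre-spike initialisation

for k in range(n_taps, len(received)):
    x = received[k - n_taps:k][::-1]       # regressor (most recent sample first)
    y = np.vdot(w, x)                      # equaliser output, y = w^H x
    e = y * (np.abs(y) ** 2 - R)           # constant-modulus error
    w = w - mu * np.conj(e) * x            # stochastic-gradient update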
10.
Measurement-based modification of NURBS surfaces
A frequent requirement in computer aided design and manufacture is to update or refine an existing CAD model using measured data. Least squares surface fitting is known to suffer from stability problems caused by an insufficient measurement density in some regions. This is particularly evident in situations involving local surface updating and when knot insertion is applied for local surface refinement. This paper presents a new method to update a CAD model consisting of NURBS surfaces, trimmed or untrimmed, based on a set of unorganised measured points in three-dimensional space. The proposed method overcomes the fundamental problem of singular or ill-conditioned matrices resulting from incomplete data sets. This is achieved by introducing additional fitting criteria into the minimisation functional, which constrain the fitted surface in regions with an insufficient number of data points. Two main benefits follow from this approach. First, local surface updating can be performed by treating the surface as a whole, without the need to specially identify the regions with insufficient data or to re-measure those regions. Second, the quality of the unmeasured regions may be controlled to suit specific needs. The results were found to be highly encouraging, and the method is especially useful in situations involving knot insertion and large surface deformations.
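The core numerical idea, augmenting the least-squares functional with a criterion that constrains the fit where data are missing, can be shown on a 1D B-spline analogue. The Python sketch below uses hypothetical one-sided measurements and a second-difference penalty on the control points; it is not the paper's trimmed-NURBS surface formulation.

import numpy as np
from scipy.interpolate import BSpline

k = 3                                                        # cubic B-splines
knots = np.concatenate([[0] * k, np.linspace(0, 1, 12), [1] * k])
n_ctrl = len(knots) - k - 1

# Hypothetical measurements covering only part of the parameter range.
x_meas = np.sort(np.random.default_rng(6).uniform(0.0, 0.55, 40))
y_meas = np.sin(2 * np.pi * x_meas) + np.random.default_rng(7).normal(0, 0.02, 40)

# Collocation (basis) matrix, one column per control point.
B = np.column_stack([BSpline(knots, np.eye(n_ctrl)[j], k)(x_meas)
                     for j in range(n_ctrl)])

# Plain least squares would give a singular/ill-conditioned B^T B, because no
# data constrain the control points beyond x ~ 0.55.  Adding a smoothness
# criterion (second differences of the control points) makes the system well posed.
D2 = np.diff(np.eye(n_ctrl), 2, axis=0)
lam = 1e-3
ctrl = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y_meas)
fitted = BSpline(knots, ctrl, k)          # updated curve, defined over the whole range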