Similar Documents
20 similar documents found (search time: 31 ms)
1.
The fuzzy c-partition entropy approach to threshold selection is an effective technique for image segmentation. The approach models the image with a fuzzy c-partition obtained from parameterized membership functions, and the ideal threshold is found by searching for the parameter combination that maximizes the entropy of the fuzzy c-partition. The computation grows rapidly as the number of parameters needed to determine the membership functions increases. In this paper, a recursive algorithm is proposed for the fuzzy 2-partition entropy method, where the membership functions are chosen as the S-function and Z-function with three parameters. The recursion eliminates many repeated computations, thereby reducing the computational complexity significantly. The proposed method is tested on several real images, and its processing time is compared with those of the basic exhaustive algorithm, genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO) and simulated annealing (SA). Experimental results show that the proposed method outperforms the basic exhaustive search, GA, PSO, ACO and SA.
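As a concrete reference point, here is a minimal Python sketch (all function and variable names are illustrative) of the brute-force fuzzy 2-partition entropy search that the paper's recursion accelerates. The S-function below is the standard Zadeh form, the Z-function is its complement, and `step` merely coarsens the exhaustive grid:

```python
import numpy as np

def s_function(x, a, b, c):
    """Zadeh S-function; the Z-function is its complement (1 - S)."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    left = (x > a) & (x <= b)
    right = (x > b) & (x <= c)
    y[left] = 2 * ((x[left] - a) / (c - a)) ** 2
    y[right] = 1 - 2 * ((x[right] - c) / (c - a)) ** 2
    y[x > c] = 1.0
    return y

def fuzzy_2partition_entropy(hist, a, b, c):
    """Entropy of the fuzzy 2-partition induced by parameters (a, b, c)."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_bright = s_function(levels, a, b, c)   # membership to the object class
    p_bright = np.sum(p * mu_bright)
    p_dark = 1.0 - p_bright
    eps = 1e-12                               # guard against log(0)
    return -(p_dark * np.log(p_dark + eps) + p_bright * np.log(p_bright + eps))

def best_parameters(hist, step=4):
    """Exhaustive search over (a, b, c); the paper's recursion removes the
    repeated partial sums this brute-force version keeps recomputing."""
    best, best_abc = -np.inf, None
    L = len(hist)
    for a in range(0, L - 2, step):
        for b in range(a + 1, L - 1, step):
            for c in range(b + 1, L, step):
                h = fuzzy_2partition_entropy(hist, a, b, c)
                if h > best:
                    best, best_abc = h, (a, b, c)
    return best_abc   # the threshold is commonly taken at b, the crossover point
```

The recursive algorithm reaches the same maximizer while reusing partial sums across neighboring (a, b, c) triples instead of recomputing the entropy from scratch.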

2.
The polynomial chaos (PC) method has been widely adopted as a computationally feasible approach for uncertainty quantification (UQ). Most studies to date have focused on non-stiff systems. When stiff systems are considered, implicit numerical integration requires the solution of a non-linear system of equations at every time step. With the Galerkin approach the size of the system state increases from n to S × n, where S is the number of PC basis functions. Solving such systems with full linear algebra raises the computational cost from O(n^3) to O(S^3 n^3), and this S^3-fold increase can make the computation prohibitive. This paper explores computationally efficient UQ techniques for stiff systems using the PC Galerkin, collocation, and collocation least-squares (LS) formulations. In the Galerkin approach, we propose a modification of the implicit time-stepping process that uses an approximation of the Jacobian matrix to reduce the computational cost. The numerical results show a run-time reduction with no negative impact on accuracy. In the stochastic collocation formulation, we propose a least-squares approach based on collocation at a low-discrepancy set of points. Numerical experiments illustrate that the collocation least-squares approach has accuracy similar to the Galerkin approach, is more efficient, and does not require any modification of the original code.
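The non-intrusive collocation least-squares idea can be illustrated compactly. The Python sketch below is illustrative only: it fits the PC coefficients of a scalar model of one standard normal germ by least squares, and plain random samples stand in for the low-discrepancy point set the paper uses:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He   # probabilists' Hermite basis

def pc_coeffs_collocation_ls(f, order, n_points, seed=0):
    """Non-intrusive PC fit: evaluate the model at sampled values of the
    standard normal germ xi and solve for coefficients by least squares."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_points)          # collocation nodes (assumed)
    # design matrix: column j holds He_j evaluated at the nodes
    A = np.stack([He.hermeval(xi, np.eye(order + 1)[j])
                  for j in range(order + 1)], axis=1)
    y = f(xi)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# usage: moments of f recovered directly from the PC coefficients
c = pc_coeffs_collocation_ls(lambda x: np.exp(0.3 * x), order=4, n_points=200)
mean = c[0]                                              # E[f] = c_0
var = sum(c[j]**2 * factorial(j) for j in range(1, len(c)))  # ||He_j||^2 = j!
```

Oversampling (more points than basis functions) is what distinguishes the least-squares variant from plain interpolatory collocation.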

3.
This paper introduces the concept of T-reduction of binary fuzzy relations and establishes its properties. It presents an approach for uniquely restoring any boundary, transitive and T-asymmetric binary fuzzy relation from its T-reduction. A theoretical basis for this approach is established, showing that the T-reduction is the smallest fuzzy relation whose T-transitive closure coincides with the given binary fuzzy relation; as a consequence, an n × n fuzzy relation on a finite universe of cardinality n can be represented by n − 1 values. This approach opens up an original way of constructing the membership functions of certain binary fuzzy relations. Important examples of the application of this concept are given.
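For orientation, here is a small Python sketch (illustrative names; min is used as the t-norm T) of the max-T transitive closure against which the T-reduction is defined: the reduction is the smallest relation whose closure, computed as below, reproduces the original relation.

```python
import numpy as np

def t_transitive_closure(R, t=np.minimum):
    """Max-T transitive closure of a fuzzy relation R (n x n, values in [0,1]).
    With T = min this is the classic min-transitive closure."""
    n = len(R)
    C = R.copy()
    for _ in range(n):
        # (C o C)[i, j] = max_k T(C[i, k], C[k, j])
        comp = np.max(t(C[:, :, None], C[None, :, :]), axis=1)
        new = np.maximum(C, comp)
        if np.allclose(new, C):   # fixed point reached
            return new
        C = new
    return C
```

A candidate T-reduction can then be checked by verifying that zeroing an entry and re-closing still yields the original relation.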

4.
Objective: Existing algorithms combining region merging with graph cuts do not account for the fuzzy characteristics of ore-rock images, which results in low segmentation accuracy and efficiency and leaves fuzzy edges poorly segmented. We address this by using the maximum fuzzy 2-partition entropy, computed by a fast recursion, to set the likelihood energy of a graph-cut model whose vertices are regions. Method: The ore-rock image is first preprocessed with a bilateral filter and the watershed algorithm and partitioned into regions of good homogeneity. The fuzzy membership functions of object and background obtained while computing the maximum fuzzy 2-partition entropy are then used to design the likelihood term of the graph-cut energy function, so that the energy function better matches the true situation of the fuzzy image; to speed up the search for the maximum fuzzy 2-partition entropy, a recursive algorithm with O(n^2) time complexity converts the fuzzy-entropy computation into a recursion and keeps the non-repeated recursive results for the subsequent exhaustive search. Finally, the designed graph-cut algorithm labels the regions to complete the segmentation. Results: The segmentation accuracy of the proposed algorithm is about 23% higher than that of other combined region-merging and graph-cut algorithms; the counted number of ore-rock particles after segmentation deviates from manual counts by an error rate of about 2%; and the running time is about 60% shorter than that of the other algorithms. Conclusion: While maintaining accuracy, the proposed algorithm effectively improves the efficiency of ore-rock image segmentation and provides important guidance for engineering practice in automated, efficient ore-rock image segmentation.

5.
In this paper, a new algorithm is developed to reduce the computational complexity of Ward's method. The proposed approach uses a dynamic k-nearest-neighbor list to avoid determining a cluster's nearest neighbor at some steps of the cluster merging. The double linked algorithm (DLA) can significantly reduce the computing time of the fast pairwise nearest neighbor (FPNN) algorithm by producing an approximate solution to hierarchical agglomerative clustering. In this paper, we propose a method that resolves DLA's non-optimal solutions while keeping its advantage of low computational complexity. The computational complexity of the proposed method, DKNNA + FS (dynamic k-nearest-neighbor algorithm with a fast search), is O(N^2) distance calculations, where N is the number of data points. Compared to FPNN with a fast search (FPNN + FS), the proposed method with the same fast search algorithm (DKNNA + FS) reduces the computing time by a factor of 1.90-2.18 on a data set from a real image, and by a factor of 1.92-2.02 on a data set generated from three images. Compared to DLA with a fast search (DLA + FS), DKNNA + FS decreases the average mean square error by 1.26% on the same data set.
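For context, here is a minimal Python sketch of plain pairwise-nearest-neighbor Ward clustering (illustrative names). This is the exact-but-slow baseline whose nearest-neighbor searches the paper's dynamic k-NN list avoids, not the DKNNA itself:

```python
import numpy as np

def ward_cost(ca, cb, na, nb):
    """Ward merge cost between clusters with centroids ca, cb and sizes na, nb."""
    diff = ca - cb
    return na * nb / (na + nb) * np.dot(diff, diff)

def ward_pnn(points, k_final):
    """Pairwise-nearest-neighbor Ward clustering: repeatedly merge the
    cheapest pair. Each sweep scans all pairs, which is the cost the
    dynamic k-NN list is designed to cut down."""
    cents = [np.asarray(p, dtype=float) for p in points]
    sizes = [1] * len(points)
    while len(cents) > k_final:
        best, pair = np.inf, None
        for i in range(len(cents)):
            for j in range(i + 1, len(cents)):
                c = ward_cost(cents[i], cents[j], sizes[i], sizes[j])
                if c < best:
                    best, pair = c, (i, j)
        i, j = pair
        n = sizes[i] + sizes[j]
        cents[i] = (sizes[i] * cents[i] + sizes[j] * cents[j]) / n  # merged centroid
        sizes[i] = n
        del cents[j]; del sizes[j]
    return cents, sizes
```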

6.
Type-2 fuzzy sets (T2 FSs) have been shown to manage uncertainty more effectively than type-1 fuzzy sets (T1 FSs) in several areas of engineering [4], [6], [7], [8], [9], [10], [11], [12], [15], [16], [17], [18], [21], [22], [23], [24], [25], [26], [27] and [30]. However, computing with T2 FSs can require an undesirably large amount of computation, since it involves numerous embedded T2 FSs. To reduce the complexity, interval type-2 fuzzy sets (IT2 FSs) can be used, since their secondary memberships are all equal to one [21]. In this paper, three novel interval type-2 fuzzy membership function (IT2 FMF) generation methods are proposed, based on heuristics, histograms, and interval type-2 fuzzy C-means respectively. The performance of the methods is evaluated by applying them to back-propagation neural networks (BPNNs). Experimental results on several data sets show the effectiveness of the proposed membership assignments.
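Of the three generators, the histogram-based one is the easiest to sketch. The Python fragment below is only one plausible construction, not the paper's exact method: it derives a type-1 membership from a normalized histogram and blurs it into lower and upper bounds that form the interval type-2 footprint of uncertainty (the `blur` parameter is an assumption of this sketch):

```python
import numpy as np

def it2_from_histogram(data, bins=32, blur=0.1):
    """Build interval type-2 membership bounds from a data histogram:
    the type-1 curve is the histogram scaled to peak 1, and the interval
    [lower, upper] widens it by a fixed blur factor."""
    hist, edges = np.histogram(data, bins=bins)
    mu = hist / hist.max()                      # type-1 membership per bin
    upper = np.clip(mu * (1 + blur), 0.0, 1.0)  # upper membership function
    lower = np.clip(mu * (1 - blur), 0.0, 1.0)  # lower membership function
    centers = 0.5 * (edges[:-1] + edges[1:])    # bin centers as the domain
    return centers, lower, upper
```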

7.
Fast computation of sample entropy and approximate entropy in biomedicine
Both sample entropy and approximate entropy are measures of complexity. The two methods have received a great deal of attention in recent years and have been successfully verified and applied in biomedicine and many other fields. However, the algorithms proposed in the literature require O(N^2) execution time, which is not fast enough for online applications or for long data sets. To accelerate computation, the authors of the present paper have developed a new algorithm that reduces the computational time to O(N^{3/2}) using O(N) storage. As biomedical data are often measured as integer-type data, the computation time can be further reduced to O(N) using O(N) storage. Execution times on ECG, EEG, RR, and DNA signals show an improvement of more than 100 times over the conventional O(N^2) method for N = 80,000 (N = length of the signal). Furthermore, an adaptive version of the new algorithm has been developed to speed up the computation for short data lengths. Experimental results show an improvement of more than 10 times over the conventional method for N > 4000.
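For reference, here is the straightforward O(N^2) sample entropy computation that the paper accelerates, as a self-contained Python sketch (names are illustrative; definitions of the template counts vary slightly across the literature):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Baseline O(N^2) sample entropy: -ln(A/B), where B counts template
    pairs of length m within tolerance and A counts pairs of length m+1.
    r is given as a fraction of the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    N = len(x)

    def count_matches(mm):
        # all templates of length mm; Chebyshev distance, self-matches excluded
        templ = np.array([x[i:i + mm] for i in range(N - mm)])
        count = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

The paper's speedup comes from organizing the template comparisons so that most distance evaluations are skipped, and, for integer-valued signals, from bucketing, which removes the pairwise scan entirely.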

8.
This paper proposes a new fuzzy approach to the segmentation of images. L-interval-valued intuitionistic fuzzy sets (IVIFSs) are constructed from two L-fuzzy sets that correspond to the foreground (object) and the background of an image, where L denotes the number of gray levels in the image. The length of the membership interval of the IVIFS quantifies the influence of ignorance in the construction of the membership function. The threshold for an image is chosen by finding the IVIFS with the least entropy. Contributions also include a comparative study with ten other image segmentation techniques. The results obtained by each method have been systematically evaluated using well-known measures of segmentation quality, on all of which the proposed method globally shows better results. Experiments also show that the results of the proposed method are highly correlated with the ground-truth images.

9.
In this paper it is shown that Winograd's algorithm for computing convolutions and a fast prime-factor discrete Fourier transform (DFT) algorithm can be modified to compute Fourier-like transforms of long sequences of 2^m − 1 points over GF(2^m), for 8 ≤ m ≤ 10. These new transform techniques can be used to decode Reed-Solomon (RS) codes of block length 2^m − 1. The complexity of this new transform algorithm is reduced substantially compared with more conventional methods. A computer simulation verifies these new results.

10.
The boundary element method (BEM) is a popular method for solving various problems in engineering and physics and has been used widely over the last two decades. In high-order discretization the boundary elements are interpolated with polynomial functions. These polynomials provide higher degrees of continuity for the geometry of the boundary elements and also serve as interpolation functions for the variables located on them. The main aim of this paper is to improve the accuracy of high-order discretization in the two-dimensional BEM. In high-order discretization, both the geometry and the variables of the boundary elements are interpolated with a polynomial P_m, where m denotes the degree of the polynomial. In the current paper we prove that if the geometry of the boundary elements is instead interpolated with P_{m+1}, the accuracy of the results increases significantly: the order of convergence rises from O(L_0^m) to O(L_0^{m+1}) without using more CPU time, where L_0 is the length of the longest boundary element. The theoretical results are also confirmed by numerical experiments.

11.
This paper presents a reversible image watermarking scheme with adaptive block sizes. A reversible watermarking approach recovers the original image from the watermarked image after extracting the embedded watermarks. Without loss of generality, the proposed scheme adaptively segments an image of size 2^N × 2^N into blocks of size 2^L × 2^L, where L runs from a user-defined number down to 1, according to the block structures. Where possible, the differences between the central ordered pixel and the other pixels in each block are enlarged to embed watermarks. The embedding quantity is determined by the largest difference in a block, and watermark bits are embedded into the LSBs of the enlarged differences. Experimental results show that the proposed adaptive block-size scheme has higher capacity than the conventional fixed-block-size method.
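The enlarge-differences-and-use-LSBs idea is easiest to see on a single pixel pair. Below is a minimal Python sketch of classic difference expansion, a simplified stand-in for the paper's block scheme (overflow/underflow checks are omitted, and the pairing is an assumption of this sketch):

```python
def embed_bit_pair(p1, p2, bit):
    """Difference expansion: double the pixel difference and hide one bit
    in the freed LSB; the pair average is preserved, so the step inverts."""
    d = int(p1) - int(p2)
    avg = (int(p1) + int(p2)) // 2
    d2 = 2 * d + bit                  # expanded difference carries the bit
    return avg + (d2 + 1) // 2, avg - d2 // 2

def extract_bit_pair(q1, q2):
    """Recover the hidden bit and restore the original pixel pair exactly."""
    d2 = int(q1) - int(q2)
    avg = (int(q1) + int(q2)) // 2
    bit, d = d2 & 1, d2 >> 1          # LSB is the payload, rest is the old diff
    return bit, avg + (d + 1) // 2, avg - d // 2

# round trip: (5, 2) with bit 1 -> watermarked (7, 0) -> (1, 5, 2)
assert extract_bit_pair(*embed_bit_pair(5, 2, 1)) == (1, 5, 2)
```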

12.
We introduce an efficient method for training the linear ranking support vector machine. The method combines cutting-plane optimization with a red-black-tree-based approach to subgradient calculations, and has O(ms + m log m) time complexity, where m is the number of training examples and s the average number of non-zero features per example. The best previously known training algorithms achieve the same efficiency only for restricted special cases, whereas the proposed approach allows arbitrary real-valued utility scores in the training data. Experiments demonstrate the superior scalability of the proposed approach compared to the fastest existing RankSVM implementations.
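The key to the O(m log m) term is that pairwise ranking losses and their subgradients sum over discordant pairs, which can be counted without enumerating all O(m^2) pairs. The merge-sort sketch below (Python, illustrative; the paper reaches the same order for the full subgradient using balanced search trees rather than sorting) counts inversions in O(m log m):

```python
def count_inversions(a):
    """Merge-sort inversion count: number of out-of-order pairs in a,
    i.e. the quantity pairwise ranking losses sum over."""
    if len(a) <= 1:
        return 0, list(a)
    mid = len(a) // 2
    inv_l, left = count_inversions(a[:mid])
    inv_r, right = count_inversions(a[mid:])
    merged, i, j, inv = [], 0, 0, inv_l + inv_r
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            inv += len(left) - i      # every remaining left element exceeds right[j]
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return inv, merged

# usage: model scores listed in true-relevance order; inversions = misranked pairs
inv, _ = count_inversions([0.2, 0.9, 0.4, 0.7])   # -> 2
```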

13.
Zr^{4+}- and Eu^{3+}-codoped SrMg_2(PO_4)_2 phosphors were prepared by conventional solid-state reaction. Under ultraviolet excitation, the emission spectra of Sr_{0.95}Eu_{0.05}Mg_{2−2x}Zr_{2x}P_2O_8 (x = 0.0005-0.07) consist of a broad emission band peaking at 500 nm from Zr^{4+} emission and the characteristic emission lines of the ^5D_0 → ^7F_J (J = 0, 1, 2, 3 and 4) transitions of Eu^{3+} ions. These phosphors show long-lasting phosphorescence. The emission color varies from red to white with increasing Zr^{4+} content, and white-light emission is realized in the single-phase phosphor Sr_{0.95}Eu_{0.05}Mg_{2−2x}Zr_{2x}P_2O_8 (x = 0.07) by combining the Zr^{4+} and Eu^{3+} emissions. The persistent luminescence of the x = 0.07 sample lasts nearly 1.5 h, and the time at which the phosphorescence intensity falls to 50% of its initial value (T_{0.5}) is 410 s. The afterglow decay curves and thermoluminescence spectra were measured to discuss this long-lasting phosphorescence phenomenon. The co-doped Zr^{4+} ions act both as luminescence centers and as trap-creating ions.

14.
In this paper, we study the m-pancycle-connectivity of WK-Recursive networks. We show that a WK-Recursive network with amplitude W and level L is strictly (5 × 2^{L−1} − 2)-pancycle-connected for W ≥ 3. That is, each pair of vertices in a WK-Recursive network with amplitude greater than or equal to 3 resides in a common cycle of every length ranging from 5 × 2^{L−1} − 2 to N, where N is the size of the interconnection network; and the value 5 × 2^{L−1} − 2 meets the lower bound of the problem.

15.
In this study, a stochastic process X(t) describing a fuzzy inventory model of type (s, S) is considered. Under some weak assumptions, the ergodic distribution of the process X(t) is expressed by a fuzzy renewal function U(x). The membership function of the fuzzy renewal function U(x) is then obtained when the amount of demand has a Gamma distribution with fuzzy parameters. Finally, the membership function and alpha cuts of the fuzzy ergodic distribution of this process are derived using Zadeh's extension principle.

16.

Context

Test-driven development is an approach to software development where automated tests are written before production code in highly iterative cycles. Test-driven development attracts attention as well as followers in professional environments; however, empirical evidence of its superiority over test-last development regarding its effect on productivity, code and tests is still fairly limited. Moreover, it is not clear whether the supposed benefits come from writing tests before code or from the high iterativity of short development cycles.

Objective

This paper describes a family of controlled experiments comparing test-driven development to micro-iterative test-last development, with emphasis on productivity, code properties (external quality and complexity) and tests (code coverage and fault-finding capability).

Method

Subjects were randomly assigned to test-driven and test-last groups. Controlled experiments were conducted over two years, in an academic environment and in different developer contexts (pair programming and individual programming). The number of successfully implemented stories, the percentage of successful acceptance tests, McCabe's code complexity, code coverage, and the mutation score indicator were measured.

Results

Experimental results and their selective meta-analysis show no statistically significant differences between test-driven development and iterative test-last development regarding productivity (χ2(6) = 4.799, p = 1.0, r = .107, 95% CI (confidence interval): −.149 to .349), code complexity (χ2(6) = 8.094, p = .46, r = .048, 95% CI: −.254 to .341), branch coverage (χ2(6) = 13.996, p = .059, r = .182, 95% CI: −.081 to .421), percentage of acceptance tests passed (one experiment, Mann-Whitney = 125.0, p = .98, r = .066) and mutation score indicator (χ2(4) = 3.807, p = .87, r = .128, 95% CI: −.162 to .398).

Conclusion

According to our findings, the benefits of test-driven development over iterative test-last development are small and thus relatively unimportant in practice, although the effects are positive. There is an indication that test-driven development yields better branch coverage, but the effect size is considered small.

17.
18.
In this paper we present a linear-time algorithm for approximating a set of n points by a linear function, or a line, that minimizes the L_1 norm. The algorithmic complexity of this problem appears not to have been investigated, although an O(n^3) naive algorithm can easily be obtained from some simple characteristics of an optimal L_1 solution. Our linear-time algorithm is optimal to within a constant factor and makes linear L_1 approximation of many points practical. The complexity of L_1 linear approximation of a piecewise linear function is also touched upon.
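The O(n^3) naive algorithm the abstract alludes to can be sketched directly. It rests on a well-known property of L_1 regression: some optimal line interpolates two of the data points, so it suffices to try all pairs (this Python version is illustrative and skips vertical candidate lines):

```python
def l1_line_naive(pts):
    """Naive O(n^3) L1 line fit: O(n^2) candidate lines through point pairs,
    each scored by its O(n) sum of absolute residuals."""
    best_err, best_line = float("inf"), None
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            if x1 == x2:
                continue                       # skip vertical candidates
            a = (y2 - y1) / (x2 - x1)          # slope of candidate line
            b = y1 - a * x1                    # intercept
            err = sum(abs(y - (a * x + b)) for x, y in pts)
            if err < best_err:
                best_err, best_line = err, (a, b)
    return best_line, best_err

# usage: fit a noisy set of points
line, err = l1_line_naive([(0, 0.1), (1, 1.0), (2, 1.9), (3, 3.2)])
```

The paper's contribution is to replace this cubic enumeration by a linear-time search.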

19.
Hardware implementation of multiplication in the finite field GF(2^m) based on sparse polynomials is found to be advantageous in terms of both space complexity and time complexity. In this paper, we present a new permutation method to construct irreducible like-trinomials of the form (x + 1)^m + (x + 1)^n + 1 for the implementation of efficient bit-parallel multipliers. For implementing multiplications based on such polynomials, we have defined a like-polynomial basis (LPB) as an alternative to the original polynomial basis of GF(2^m). We have further shown that the modular arithmetic for the binary field based on like-trinomials is equivalent to the arithmetic for the field based on trinomials. In order to design multipliers for composite fields, we have found another permutation polynomial to convert irreducible polynomials into like-trinomials of the forms (x^2 + x + 1)^m + (x^2 + x + 1)^n + 1, (x^2 + x)^m + (x^2 + x)^n + 1 and (x^4 + x + 1)^m + (x^4 + x + 1)^n + 1. The proposed bit-parallel multiplier over GF(2^{4m}) is found to offer a saving of about 33% of multiplications and 42.8% of additions over the corresponding existing architectures.
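For readers unfamiliar with polynomial-basis arithmetic, the bit-serial Python sketch below shows ordinary GF(2^m) multiplication with reduction by an irreducible polynomial; bit-parallel architectures of the kind discussed here unroll this loop into XOR networks (the sketch is generic background, not the paper's LPB construction):

```python
def gf2m_mul(a, b, m, poly):
    """Bit-serial multiply in GF(2^m), polynomial basis. 'poly' is the
    irreducible modulus as a bitmask including the x^m term, e.g. the
    trinomial x^4 + x + 1 is 0b10011 for GF(2^4)."""
    r = 0
    for _ in range(m):
        if b & 1:
            r ^= a                 # add (XOR) the current multiple of a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:           # degree reached m: reduce modulo poly
            a ^= poly
    return r

# usage in GF(2^4) with x^4 + x + 1: x * x^3 = x^4 = x + 1
assert gf2m_mul(0b0010, 0b1000, 4, 0b10011) == 0b0011
```

Sparse moduli such as trinomials keep the reduction step cheap, which is exactly why like-trinomials are attractive for fields where no true trinomial is irreducible.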

20.
Legendre orthogonal moments have been widely used in the field of image analysis. Because their computation by a direct method is very time-consuming, recent efforts have been devoted to reducing the computational complexity. Nevertheless, the existing algorithms are mainly focused on binary images. We propose here a new fast method for computing the Legendre moments that is suitable not only for binary images but also for grey-level images. We first establish a recurrence formula for one-dimensional (1D) Legendre moments by using the recursive property of the Legendre polynomials. As a result, the 1D Legendre moment of order p, L_p = L_p(0), can be expressed as a linear combination of L_{p−1}(1) and L_{p−2}(0). Based on this relationship, the 1D Legendre moments L_p(0) can thus be obtained from the arrays L_1(a) and L_0(a), where a is an integer less than p. To further decrease the computational complexity, an algorithm in which no multiplication is required is used to compute these quantities. The method is then extended to the calculation of the two-dimensional Legendre moments L_{pq}. We show that the proposed method is more efficient than the direct method.
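As background, the direct evaluation that the paper's moment recurrence replaces can itself be written with Bonnet's three-term recurrence for the polynomials, p·P_p(x) = (2p − 1)·x·P_{p−1}(x) − (p − 1)·P_{p−2}(x). A short Python sketch (illustrative names; midpoint sampling of a signal on [−1, 1]):

```python
import numpy as np

def legendre_moments_1d(f, max_order):
    """1D Legendre moments L_p = (2p+1)/2 * integral of P_p(x) f(x) dx,
    with the polynomials generated by Bonnet's recurrence. The paper goes
    further and derives a recurrence on the moments themselves."""
    N = len(f)
    x = -1 + (2 * np.arange(N) + 1) / N        # midpoints of N cells in [-1, 1]
    dx = 2 / N
    P_prev, P_curr = np.ones_like(x), x        # P_0 and P_1
    moments = [np.sum(f * P_prev) * dx / 2]    # L_0
    if max_order >= 1:
        moments.append(3 / 2 * np.sum(f * P_curr) * dx)   # L_1
    for p in range(2, max_order + 1):
        P_prev, P_curr = P_curr, ((2 * p - 1) * x * P_curr - (p - 1) * P_prev) / p
        moments.append((2 * p + 1) / 2 * np.sum(f * P_curr) * dx)
    return np.array(moments)

# usage: moments of a sampled grey-level scan line
mom = legendre_moments_1d(np.linspace(0, 1, 64), max_order=4)
```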
