The Industrial Internet of Things (IIoT) is crucial for enterprises and countries seeking to drive strategic upgrading and raise the level of national intelligent manufacturing. Evaluating the IIoT industry involves numerous uncertainties in the dominant issues. A spherical fuzzy set, characterized by positive, neutral, and negative membership degrees, is an efficient tool for capturing such uncertainty. In this article, a new spherical fuzzy score function is first proposed to resolve several previously undecidable comparison cases. The objective weights and combined weights are then determined by the Renyi entropy method and a non-linear weighted comprehensive method, respectively. Next, a multi-criteria decision-making method based on the combined compromise solution is developed under the spherical fuzzy environment. Finally, the method is validated on an IIoT industry evaluation problem. The main characteristics of the presented algorithm are: (1) no counterintuitive phenomena; (2) no division-by-zero or antilogarithm-of-zero problem; (3) no square root of a negative number; (4) no violation of the original definition.
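The article's new score function is not reproduced in the abstract, but the underlying idea can be illustrated: a spherical fuzzy number carries positive, neutral, and negative membership degrees constrained by mu^2 + eta^2 + nu^2 <= 1, and a score function maps it to a single comparable value. A minimal Python sketch using a generic illustrative score (difference of squared positive and negative memberships), not the paper's proposed function:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SFN:
    """Spherical fuzzy number with positive (mu), neutral (eta),
    and negative (nu) membership degrees."""
    mu: float
    eta: float
    nu: float

    def __post_init__(self):
        # Spherical constraint: mu^2 + eta^2 + nu^2 <= 1.
        if not (0 <= self.mu <= 1 and 0 <= self.eta <= 1 and 0 <= self.nu <= 1):
            raise ValueError("memberships must lie in [0, 1]")
        if self.mu**2 + self.eta**2 + self.nu**2 > 1 + 1e-9:
            raise ValueError("mu^2 + eta^2 + nu^2 must not exceed 1")

def score(a: SFN) -> float:
    """Illustrative score only (NOT the article's new function):
    higher positive membership and lower negative membership
    rank an alternative higher."""
    return a.mu**2 - a.nu**2

a = SFN(0.8, 0.3, 0.2)
b = SFN(0.5, 0.4, 0.6)
print(score(a) > score(b))  # a ranks above b
```

A real score function must also break ties that this simple form cannot, which is exactly the kind of "suspensive comparison issue" the article targets.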
Stability is one of the hot topics in research on strain-gauge load cells. Based on the fatigue strain-life curve of 40CrNiMoA steel, column-type and spoke-type load cells with different rated strains were designed, and through fatigue loading of each cell and the corresponding metrological performance tests, the variation of metrological performance under fatigue loading was explored. The experimental results show that the smaller the rated strain of a cell, the more stable its metrological performance; repeatability and linearity are strongly affected by external cyclic loading, whereas hysteresis, creep, and creep recovery are insensitive to it; and, compared with spoke-type cells, column-type cells exhibit better stability.
Let γ(G) denote the domination number of a digraph G and let Cm□Cn denote the Cartesian product of Cm and Cn, the directed cycles of lengths m, n ≥ 2. In this paper, we determine the exact values γ(C2□Cn) = n, and γ(C3□Cn) = n if n ≡ 0 (mod 3) and γ(C3□Cn) = n + 1 otherwise; exact values are also determined in the remaining cases.
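For small m and n, values such as γ(C2□Cn) = n can be checked directly by exhaustive search. A brute-force sketch, using the standard digraph convention that a set S dominates when every vertex either lies in S or is an out-neighbor of some vertex of S:

```python
from itertools import product, combinations

def domination_number(m: int, n: int) -> int:
    """Brute-force gamma(C_m box C_n) for small directed cycles.
    Vertex (i, j) has arcs to (i+1 mod m, j) and (i, j+1 mod n);
    S dominates if S together with its out-neighbors covers V."""
    verts = list(product(range(m), range(n)))
    out = {(i, j): {((i + 1) % m, j), (i, (j + 1) % n)}
           for (i, j) in verts}
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            covered = set(S).union(*(out[v] for v in S))
            if len(covered) == len(verts):
                return k
    return len(verts)

print(domination_number(2, 3))  # → 3, matching gamma(C2 box Cn) = n
print(domination_number(2, 4))  # → 4
```

The search is exponential in mn, so it only serves to verify the closed formulas on tiny cases, not to replace the proofs.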
It is difficult to find the optimal sparse solution of a manifold learning based dimensionality reduction algorithm. The lasso or elastic net penalized manifold learning based dimensionality reduction is not directly a lasso penalized least squares problem, and thus least angle regression (LARS) (Efron et al., Ann Stat 32(2):407–499, 2004), one of the most popular algorithms in sparse learning, cannot be applied. Therefore, most current approaches take indirect routes or impose strict settings, which can be inconvenient in applications. In this paper, we propose the manifold elastic net, or MEN for short. MEN incorporates the merits of both manifold learning based and sparse learning based dimensionality reduction. Through a series of equivalent transformations, we show that MEN is equivalent to a lasso penalized least squares problem, and thus LARS can be adopted to obtain the optimal sparse solution of MEN. In particular, MEN has the following advantages for subsequent classification: (1) the local geometry of samples is well preserved in the low dimensional representation; (2) both margin maximization and classification error minimization are considered when computing the sparse projection; (3) the projection matrix of MEN improves computational parsimony; (4) the elastic net penalty reduces over-fitting; and (5) the projection matrix of MEN can be interpreted psychologically and physiologically. Experimental evidence on face recognition over various popular datasets suggests that MEN is superior to leading dimensionality reduction algorithms.
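Because MEN reduces to a lasso penalized least squares problem, any off-the-shelf LARS solver applies to the transformed problem. A minimal sketch with scikit-learn's LassoLars (a standard LARS-based lasso solver; the synthetic data and the alpha value are illustrative, not from the paper):

```python
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 1.0]          # only 3 informative features
y = X @ true_w + 0.01 * rng.standard_normal(100)

# LARS traces the whole lasso regularization path efficiently;
# a moderate alpha yields a sparse coefficient vector.
model = LassoLars(alpha=0.05).fit(X, y)
print(np.count_nonzero(model.coef_))   # few nonzero coefficients survive
```

The same solver, applied to the equivalently transformed design matrix and response, would return the sparse projection directions of MEN.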
Learning from imperfect (noisy) information sources is a challenging and real-world issue for many data mining applications. Common practices include enhancing data quality by applying data preprocessing techniques, or employing robust learning algorithms that avoid overly complicated structures which overfit the noise. The essential goal is to reduce the impact of noise and thereby improve the learners built from noise-corrupted data. In this paper, we propose a novel corrective classification (C2) design, which incorporates data cleansing, error correction, Bootstrap sampling, and classifier ensembling for effective learning from noisy data sources. C2 differs from existing classifier ensembling or robust learning algorithms in two respects. On one hand, the set of diverse base learners constituting the C2 ensemble is constructed via a Bootstrap sampling process; on the other hand, C2 further improves each base learner by unifying error detection, error correction, and data cleansing to reduce the impact of noise. Being corrective, the classifier ensemble is built from data preprocessed and corrected by the cleansing and correcting modules. Experimental comparisons demonstrate that C2 is not only more accurate than a learner built from the original noisy sources, but also more reliable than Bagging [4] or the aggressive classifier ensemble (ACE) [56], which are two degenerate variants of C2. The comparisons also indicate that C2 is more stable than Boosting and DECORATE, two state-of-the-art ensembling methods. For real-world imperfect information sources (i.e. noisy training and/or test data), C2 delivers more accurate and reliable prediction models than its peers.
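The Bootstrap-sampling-plus-ensembling backbone that C2 shares with Bagging can be sketched in a few lines. Note this omits the error detection, correction, and cleansing modules that distinguish C2, and the nearest-centroid base learner is an illustrative stand-in, not the paper's choice:

```python
import numpy as np

def fit_centroids(X, y):
    """Tiny base learner: per-class mean (nearest-centroid classifier)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, centroids = model
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def bagged_predict(X_train, y_train, X_test, n_learners=11, seed=0):
    """Bootstrap-sample the training data, fit one base learner per
    replicate, and majority-vote the predictions. C2 would additionally
    detect and correct suspicious labels in each replicate before fitting."""
    rng = np.random.default_rng(seed)
    all_preds = []
    for _ in range(n_learners):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap sample
        model = fit_centroids(X_train[idx], y_train[idx])
        all_preds.append(predict_centroids(model, X_test))
    # Majority vote per test point (binary 0/1 labels assumed).
    return (np.stack(all_preds).mean(axis=0) > 0.5).astype(int)

X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(bagged_predict(X, y, np.array([[0.5], [10.5]])))  # → [0 1]
```

The ensemble's stability comes from the diversity of the bootstrap replicates; C2's claim is that correcting each replicate before fitting yields further gains under label noise.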
Subspace selection approaches are powerful tools for pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection tends to merge classes that are close together in the original feature space. If the classes are sampled from Gaussian distributions with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the arithmetic mean of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) a combination of 1) and 2). Preliminary experimental results on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a promising discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
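Under the stated Gaussian assumption with a shared covariance Σ, the KL divergence between classes i and j reduces to (1/2)(μi - μj)ᵀΣ⁻¹(μi - μj), and the contrast between the arithmetic and geometric means of the pairwise divergences shows why the geometric-mean criteria penalize close class pairs more heavily than FLDA's average does. An illustrative sketch (the class means and covariance are made up):

```python
import numpy as np

def kl_shared_cov(mu_i, mu_j, cov):
    """KL divergence between two Gaussians sharing covariance `cov`:
    KL = 0.5 * (mu_i - mu_j)^T cov^{-1} (mu_i - mu_j)."""
    d = np.asarray(mu_i, float) - np.asarray(mu_j, float)
    return 0.5 * d @ np.linalg.solve(cov, d)

# Three class means; one pair is much closer than the others.
means = [np.array([0.0, 0.0]), np.array([0.2, 0.0]), np.array([5.0, 5.0])]
cov = np.eye(2)

kls = [kl_shared_cov(means[i], means[j], cov)
       for i in range(3) for j in range(3) if i != j]
arith = np.mean(kls)
geo = np.exp(np.mean(np.log(kls)))

# The geometric mean is dragged down by the tiny divergence of the
# close pair, so a projection maximizing it must keep that pair apart;
# the arithmetic mean can be large even when one pair collapses.
print(arith > geo)  # AM-GM inequality
```

This is only the divergence computation, not the paper's subspace optimization, which searches over projection matrices to maximize these criteria in the projected space.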