This research illustrates the potential of concepts, techniques, and tools from text mining to improve the systematic review process. A review was performed on two online databases (Scopus and ISI Web of Science) covering 2012 to 2019. A total of 9649 studies were identified and analyzed using probabilistic topic modeling procedures within a machine learning approach. The Latent Dirichlet Allocation method, chosen for modeling, required two stages: 1) data cleansing, and 2) modeling the data into topics for coherence and perplexity analysis. All research was conducted according to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses in a fully automated way. The computational literature review is an integral part of a broader literature review process. The results presented met three criteria: (1) a literature review for a research area, (2) analysis and classification of journals, and (3) analysis and classification of academic and individual research teams. The contribution of the article is to demonstrate how the publication network is formed in this particular field of research, and how the content of abstracts can be automatically analyzed to provide a set of research topics for quick understanding and application in future projects.
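The two-stage pipeline this abstract describes (data cleansing, then topic modeling) can be sketched in miniature. The following is an illustrative toy implementation, not the authors' pipeline: a minimal stopword-based cleansing step followed by a collapsed Gibbs sampler for LDA. In practice a library such as gensim or scikit-learn would be used; every function name and parameter value here is an assumption.

```python
import random

def clean(doc, stopwords):
    """Stage 1: data cleansing -- lowercase, keep alphabetic tokens, drop stopwords."""
    return [w for w in doc.lower().split() if w.isalpha() and w not in stopwords]

def lda_gibbs(docs, K=2, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Stage 2: collapsed Gibbs sampling for LDA over cleansed documents.
    Returns document-topic counts, topic-word counts, and the vocabulary."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    z = [[rng.randrange(K) for _ in d] for d in docs]  # topic of each token
    ndk = [[0] * K for _ in docs]                      # doc-topic counts
    nkw = [[0] * V for _ in range(K)]                  # topic-word counts
    nk = [0] * K                                       # tokens per topic
    for di, d in enumerate(docs):
        for ti, w in enumerate(d):
            k = z[di][ti]
            ndk[di][k] += 1
            nkw[k][widx[w]] += 1
            nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for ti, w in enumerate(d):
                k, wi = z[di][ti], widx[w]
                ndk[di][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                # full conditional p(z = j | all other assignments)
                weights = [(ndk[di][j] + alpha) * (nkw[j][wi] + beta) / (nk[j] + V * beta)
                           for j in range(K)]
                k = rng.choices(range(K), weights=weights)[0]
                z[di][ti] = k
                ndk[di][k] += 1; nkw[k][wi] += 1; nk[k] += 1
    return ndk, nkw, vocab
```

The coherence and perplexity analyses the abstract mentions would be run on top of the fitted topic-word counts to choose the number of topics K.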
Mathematical models for the evaluation of residence time distribution (RTD) curves on a large variety of vessels are presented.
These models have been constructed by combination of different tanks or volumes. In order to obtain a good representation
of RTD curves, a new volume (called convection diffusion volume) is introduced. The convection-diffusion volume allows the
approximation of different experimental or numerical RTD curves with very simple models. An algorithm has been developed to
calculate the parameters of the models for any given set of RTD curve experimental points. Validation of the models is carried
out by comparison with experimental RTD curves taken from the literature and with a numerical RTD curve obtained by three-dimensional
simulation of the flow inside a tundish.
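The idea of building RTD models from combinations of ideal tanks can be illustrated with the classic tanks-in-series model, whose curve E(t) = (N/τ)(Nt/τ)^(N−1) e^(−Nt/τ)/(N−1)! integrates to 1 and has mean residence time τ. This sketch does not reproduce the paper's convection-diffusion volume, whose formulation is not given here; the function name and parameters are assumptions.

```python
import math

def tanks_in_series_rtd(t, tau, N):
    """E(t) for N ideal stirred tanks in series with total mean residence time tau.
    The curve is normalized (unit area) and its first moment equals tau."""
    if t < 0:
        return 0.0
    theta = t / tau  # dimensionless time
    return (N / tau) * (N * theta) ** (N - 1) / math.factorial(N - 1) * math.exp(-N * theta)
```

Fitting a model of this kind to experimental RTD points amounts to adjusting parameters such as N and τ (and, in the paper, the convection-diffusion volume parameters) to minimize the misfit to the measured curve.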
ABSTRACT: A quantitative procedure was developed to predict the composition of ternary ground spice mixtures using an electronic nose. Basil, cinnamon, and garlic were mixed in different compositions and presented to an e-nose. Nineteen training mixtures were used to build predictive models. Model performance was tested using 5 other mixtures. Three neural network structures—multilayer perceptron (MLP), MLP using principal component analysis as a preprocessor (PCA-MLP), and the time-delay neural network (TDNN)—were used for predictive model building. All 3 neural network models predicted the testing mixtures' compositions with a mean square error (MSE) equal to or less than 0.0051 (in a fraction domain where the fractions sum to 1). The TDNN provided the smallest MSE.
An interactive design and analysis tool for displaying and quantifying multiple channels of data is presented. The system allows one to easily visualize multiple data channels, simultaneously observe the effects of filters on the data, and evaluate signal detection algorithms. The software is designed for a workstation environment; it will find use in a variety of settings where one needs to visualize multiple data channels simultaneously. TDAT is being used for the design and evaluation of filters and detection algorithms for electroencephalogram (EEG) waveforms, and it is serving as a prototype of a paperless system to be used by electroencephalographers. This paper describes the general software structure of the system and illustrates many of the system features with examples.
Multivariate density estimation is an important problem that is frequently encountered in statistical learning and signal processing. One of the most popular techniques is Parzen windowing, also referred to as kernel density estimation. Gaussianization is a procedure that allows one to estimate multivariate densities efficiently from the marginal densities of the individual random variables. In this paper, we present an optimal density estimation scheme that combines the desirable properties of Parzen windowing and Gaussianization, using minimum Kullback–Leibler divergence as the optimality criterion for selecting the kernel size in the Parzen windowing step. The utility of the estimate is illustrated in classifier design, independent components analysis, and Price's theorem.
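The Parzen-windowing step can be sketched for a one-dimensional density: the estimate is the average of Gaussian kernels centered on the samples. This is a minimal illustration, not the paper's full Gaussianization scheme, and the bandwidth `h` here is fixed by hand rather than selected by minimum Kullback–Leibler divergence as the paper proposes.

```python
import math

def parzen_density(x, samples, h):
    """Parzen-window (kernel density) estimate at point x with a Gaussian
    kernel of bandwidth h: the mean of N(s, h^2) densities over the samples."""
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
```

Because each kernel integrates to 1, the estimate itself integrates to 1 regardless of the sample set; the kernel size h is the single free parameter the paper's criterion is designed to choose.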
We propose the use of optimized brain-machine interface (BMI) models for interpreting the spatial and temporal neural activity generated in motor tasks. In this study, a nonlinear dynamical neural network is trained to predict the hand position of primates from neural recordings in a reaching task paradigm. We first develop a method to reveal the role attributed by the model to the sampled motor, premotor, and parietal cortices in generating hand movements. Next, using the trained model weights, we derive a temporal sensitivity measure to assess how the model utilized the sampled cortices and neurons in real-time during BMI testing.
This paper investigates the application of error-entropy minimization algorithms to digital communications channel equalization. The pdf of the error between the training sequence and the output of the equalizer is estimated using the Parzen windowing method with a Gaussian kernel, and then Renyi's quadratic entropy is minimized using a gradient descent algorithm. By estimating Renyi's entropy over a short sliding window, an online training algorithm is also introduced. Moreover, for a linear equalizer, an orthogonality condition for the minimum entropy solution that leads to an alternative fixed-point iterative minimization method is derived. The performance of linear and nonlinear equalizers trained with entropy and mean square error (MSE) is compared. As expected, the results of training a linear equalizer are very similar for both criteria since, even if the input noise is non-Gaussian, the output filtered noise tends to be Gaussian. On the other hand, for nonlinear channels and using a multilayer perceptron (MLP) as the equalizer, differences between both criteria appear. Specifically, it is shown that the additional information used by the entropy criterion yields a faster convergence in comparison with the MSE criterion.
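The quantity such algorithms minimize can be sketched directly from Parzen windowing: the "information potential" is the mean of pairwise Gaussian kernel evaluations over the error samples, and Renyi's quadratic entropy is its negative logarithm. This is a generic sketch of the standard estimator, not the paper's specific training algorithm; `sigma` is the Parzen kernel size, an assumed free parameter.

```python
import math

def renyi_quadratic_entropy(errors, sigma):
    """Sample estimator of Renyi's quadratic entropy via the information
    potential. Convolving two Gaussian kernels of width sigma gives a
    kernel of width sigma*sqrt(2) on the pairwise differences."""
    n = len(errors)
    s = sigma * math.sqrt(2.0)
    norm = s * math.sqrt(2.0 * math.pi)
    ip = sum(math.exp(-((a - b) ** 2) / (2.0 * s * s)) / norm
             for a in errors for b in errors) / (n * n)  # information potential
    return -math.log(ip)
```

Tightly clustered errors yield a large information potential and hence a small entropy, which is why driving this quantity down concentrates the equalizer's error distribution.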
This paper presents a theoretical approach to understand the basic dynamics of a hierarchical and realistic computational model of the olfactory system proposed by W. J. Freeman. While the system's parameter space could be scanned to obtain the desired dynamical behavior, our approach exploits the hierarchical organization and focuses on understanding the simplest building block of this highly connected network. Based on bifurcation analysis, we obtain analytical solutions of how to control the qualitative behavior of a reduced KII set taking into consideration both the internal coupling coefficients and the external stimulus. This also provides useful insights for investigating higher level structures that are composed of the same basic structure. Experimental results are presented to verify our theoretical analysis.
We study the problem of linear approximation of a signal using the parametric gamma bases in L2 space. These bases have a time scale parameter, which has the effect of modifying the relative angle between the signal and the projection space, thereby yielding an extra degree of freedom in the approximation. Gamma bases have a simple analog implementation that is a cascade of identical lowpass filters. We derive the normal equation for the optimum value of the time scale parameter and decouple it from that of the basis weights. Using statistical signal processing tools, we further develop a numerical method for estimating the optimum time scale.
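The cascade of identical lowpass filters has a standard discrete-time counterpart, the gamma memory, in which each tap obeys the first-order recursion x_k[n] = (1 − μ)·x_k[n−1] + μ·x_{k−1}[n−1]. A minimal sketch follows; the function name and the fixed value of the time-scale parameter μ are assumptions, and the paper's actual contribution (optimizing the time scale) is not implemented here.

```python
def gamma_memory(x, K, mu):
    """Discrete-time gamma memory: a cascade of K identical first-order
    lowpass stages with time-scale parameter mu (0 < mu < 1).
    Returns K+1 tap signals; tap 0 is the input itself."""
    taps = [[0.0] * len(x) for _ in range(K + 1)]
    taps[0] = list(x)
    for k in range(1, K + 1):
        state = 0.0  # x_k[n-1], zero initial conditions
        for n in range(len(x)):
            prev_input = taps[k - 1][n - 1] if n > 0 else 0.0
            state = (1.0 - mu) * state + mu * prev_input
            taps[k][n] = state
    return taps
```

For an impulse input, tap k reproduces the discrete gamma kernel; a linear approximation of a signal is then a weighted sum of these tap signals, with μ providing the extra degree of freedom the abstract describes.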
This paper presents a new loss function for neural network classification, inspired by the recently proposed similarity measure called Correntropy. We show that this function essentially behaves like the conventional square loss for samples that are well within the decision boundary and have small errors, and like the L0 or counting norm for samples that are outliers or are difficult to classify. Depending on the value of the kernel size parameter, the proposed loss function moves smoothly from convex to non-convex and becomes a close approximation to the misclassification loss (ideal 0–1 loss). We show that the discriminant function obtained by optimizing the proposed loss function in the neighborhood of the ideal 0–1 loss function to train a neural network is immune to overfitting, more robust to outliers, and has consistent and better generalization performance as compared to other commonly used loss functions, even after prolonged training. The results also show that it is a close competitor to the SVM. Since the proposed method is compatible with simple gradient-based online learning, it is a practical way of improving the performance of neural network classifiers.
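The behavior described — quadratic for small errors, saturating like a counting norm for outliers — matches the correntropy-induced loss commonly written as β(1 − exp(−e²/2σ²)), where σ is the kernel size and β normalizes the loss to 1 at |e| = 1. The sketch below assumes this common form; the paper's exact normalization may differ.

```python
import math

def c_loss(error, sigma):
    """Correntropy-induced loss: approximately quadratic near zero error,
    saturating at beta for large errors (robust to outliers).
    beta is chosen so that c_loss(1, sigma) == 1."""
    beta = 1.0 / (1.0 - math.exp(-1.0 / (2.0 * sigma ** 2)))
    return beta * (1.0 - math.exp(-error ** 2 / (2.0 * sigma ** 2)))
```

A small σ makes the saturation set in early (closer to the 0–1 loss and non-convex); a large σ keeps the loss nearly quadratic over the working range, which is the smooth convex-to-non-convex transition the abstract refers to.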