Epilepsy is a neurological disorder that may affect the autonomic nervous system (ANS) from 15 to 20 min before seizure onset, and disturbances of the ANS alter R–R intervals (RRIs) on an electrocardiogram (ECG). This study aims to develop a machine learning algorithm for predicting focal epileptic seizures by monitoring RRI data in real time. The developed algorithm adopts a self-attentive autoencoder (SA-AE), a neural network for time-series data.
Applying the developed seizure prediction algorithm to clinical data demonstrated that it functioned well in most patients; however, false positives (FPs) occurred in specific participants. In future work, we will investigate the causes of these FPs and optimize the algorithm to further improve its performance using newly added clinical data.
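The core scoring idea, flagging RRI windows that a model trained on interictal (non-seizure) data reconstructs poorly, can be sketched as follows. This is a hedged illustration only: a simple linear (PCA) autoencoder stands in for the paper's self-attentive autoencoder, and the RRI values, window width, and function names are our own invention.

```python
import numpy as np

def make_windows(rri, width):
    """Slice an RRI sequence into overlapping fixed-width windows."""
    return np.array([rri[i:i + width] for i in range(len(rri) - width + 1)])

def fit_linear_ae(windows, k=2):
    """Fit a rank-k linear autoencoder (PCA) on interictal windows."""
    mean = windows.mean(axis=0)
    _, _, vt = np.linalg.svd(windows - mean, full_matrices=False)
    return mean, vt[:k]

def anomaly_score(window, mean, basis):
    """Reconstruction error; large values suggest a pre-ictal ANS change."""
    centered = window - mean
    recon = centered @ basis.T @ basis
    return float(np.linalg.norm(centered - recon))

# Simulated interictal RRIs around 0.8 s with small variability.
rng = np.random.default_rng(0)
normal = 0.8 + 0.02 * rng.standard_normal(200)
train = make_windows(normal, 20)
mean, basis = fit_linear_ae(train)

baseline = anomaly_score(train[0], mean, basis)
# A window with an abrupt RRI drop scores far above the baseline.
preictal = np.concatenate([0.8 * np.ones(10), 0.6 * np.ones(10)])
print(anomaly_score(preictal, mean, basis) > baseline)
```

In a real-time setting, a threshold on this score (calibrated per patient on interictal recordings) would trigger the seizure warning.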
A new kind of Vernier mechanism that controls the size of linear assemblies of DNA origami nanostructures is proposed. The mechanism is realized by mechanical design of the DNA origami, which consists of a hollow cylinder and a rotatable shaft inside it, connected through the same scaffold. The nanostructures stack on one another by shape complementarity at the top and bottom surfaces of the cylinder, while the number of stacked units is limited by the twisting angle of the shaft. Experiments showed that the size distribution of multimeric assemblies of the origami depends on the twist angle of the shaft; the average multimer sizes were decamer, hexamer, and tetramer for 0°, 10°, and 20° twists, respectively. In summary, it is possible to control the degree of polymerization by adjusting the precise shape and movability of a molecular structure.
OBJECTIVE: We previously showed that in asphyxiated fetal lambs the duration of hypotension correlated well with the severity of histologic damage to the brain, whereas the duration of bradycardia did not. This study compares fetal heart rate patterns with the degree of histologic damage to the brain. STUDY DESIGN: Twelve chronically instrumented near-term fetal lambs were subjected to asphyxia by umbilical cord occlusion until fetal arterial pH was <6.9 and base excess was <-20 mEq/L. An additional 4 fetuses served as sham-asphyxia controls. Fetal heart rate (from electrocardiogram), arterial blood pressure, fetal breathing movements, and electrocorticogram were continuously monitored before, during, and for 72 hours after asphyxia. Fetal brain histologic features were categorized as mild (group 1, n = 5), moderate (group 2, n = 4), and severe (group 3, n = 3). Long-term fetal heart rate variability, expressed as amplitude range, was assessed visually every 5 minutes from 30 minutes before asphyxia until 2 hours of recovery and at 6, 12, 24, 48, and 72 hours of recovery. RESULTS: Long-term fetal heart rate variability amplitude decreased from 32 +/- 17 beats/min (mean +/- SEM) preocclusion to 4 +/- 13 beats/min at the end of occlusion (P <.001) without significant differences among the 3 groups. During 10 to 45 minutes of recovery, the long-term variability of group 1 was significantly greater than that of groups 2 and 3. At 24 to 72 hours of recovery, the long-term variability of groups 1 and 2 was significantly higher than that of group 3, which was almost 0. The "checkmark" and sinusoidal fetal heart rate patterns were observed during the recovery period in groups 2 and 3. CONCLUSIONS: Decreased long-term fetal heart rate variability and the "checkmark" and sinusoidal fetal heart rate patterns were indicators of the severity of asphyxial histologic damage in the fetal brain.
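The long-term variability measure used above, the amplitude range of fetal heart rate within each 5-minute epoch, can be approximated programmatically. This is a hedged sketch with a hypothetical helper and toy FHR traces; the study itself assessed amplitude range visually, and the sampling rate here (one sample per second, so 300 samples per epoch) is our assumption.

```python
import math

def ltv_amplitude(fhr, epoch_len):
    """Long-term variability per epoch: the FHR amplitude range
    (max minus min, beats/min) over each fixed-length epoch."""
    epochs = [fhr[i:i + epoch_len] for i in range(0, len(fhr), epoch_len)]
    return [max(e) - min(e) for e in epochs if len(e) == epoch_len]

# Toy trace: one epoch of healthy variability (~30 bpm swing),
# then one near-flat epoch resembling severe post-asphyxial suppression.
healthy = [140 + 15 * math.sin(t / 3) for t in range(300)]
flat = [140 + 0.5 * math.sin(t / 3) for t in range(300)]
amps = ltv_amplitude(healthy + flat, 300)
print(amps)  # first epoch near 30 beats/min, second near 1
```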
The 360° profilometry of a three-dimensional (3-D) diffuse object by use of light intersection, and its image reconstruction by surface shading, are presented. The lack of data in one direction, due to occlusion, was compensated for by projecting two lines of light from different directions. Experiments profiling objects and reconstructing them by computer are shown. The entire surface model was constructed, and a realistic shaded image was obtained by means of computer graphics.
In social tagging systems such as Delicious and Flickr, users collaboratively manage tags to annotate resources. Naturally, a social tagging system can be modeled as a (user, tag, resource) hypernetwork, in which there are three types of nodes, namely users, resources, and tags, and each hyperedge has three end nodes, connecting a user, a resource, and a tag that the user employs to annotate the resource. How, then, can we automatically cluster related users, resources, and tags? This is a problem of community detection in a 3-partite, 3-uniform hypernetwork. More generally, given a K-partite, K-uniform (hyper)network, where each (hyper)edge is a K-tuple composed of nodes of K different types, how can we automatically detect communities for nodes of each type? In this paper, by turning this problem into one of finding an efficient compression of the (hyper)network's structure, we propose a quality function for measuring the goodness of partitions of a K-partite, K-uniform (hyper)network into communities, and we develop a fast community detection method based on its optimization. Our method overcomes the limitations of state-of-the-art techniques and has several desirable properties: it is comprehensive, parameter-free, and scalable. We compare our method with existing methods on both synthetic and real-world datasets.
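The data model can be sketched directly: each tagging event is one hyperedge, a (user, resource, tag) triple. The minimal illustration below shows only this representation and a per-type degree count, not the paper's compression-based detection method; the example triples are invented.

```python
from collections import defaultdict

# Each tagging event is a hyperedge with three end nodes of three types.
hyperedges = [
    ("alice", "python.org", "programming"),
    ("alice", "python.org", "python"),
    ("bob",   "python.org", "python"),
    ("bob",   "flickr.com", "photos"),
]

def type_degrees(edges):
    """Degree of every node, kept separate per node type, since a
    K-partite K-uniform hypernetwork never mixes node types."""
    degs = {"user": defaultdict(int),
            "resource": defaultdict(int),
            "tag": defaultdict(int)}
    for user, resource, tag in edges:
        degs["user"][user] += 1
        degs["resource"][resource] += 1
        degs["tag"][tag] += 1
    return degs

degs = type_degrees(hyperedges)
print(degs["resource"]["python.org"])  # the shared resource links alice and bob
```

A community detection method for this structure must partition each of the three node sets separately while exploiting the shared hyperedges, which is what the proposed quality function evaluates.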
Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, AdaBoost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.
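As an illustration of the general idea (bounding a convex loss so that extreme outliers cannot dominate the gradient), consider truncating AdaBoost's exponential loss at a cap. This is a generic robustification sketch; the letter's specific transformation and cap may differ.

```python
import math

def exp_loss(margin):
    """AdaBoost's exponential loss: explodes for large negative margins."""
    return math.exp(-margin)

def truncated_exp_loss(margin, cap=math.e):
    """Truncated variant: bounded above, so an extreme outlier
    (a very negative margin) contributes at most `cap`."""
    return min(exp_loss(margin), cap)

# One gross mislabel dominates the plain loss but not the truncated one.
margins = [2.0, 1.5, -8.0]   # last point is an extreme outlier
plain = sum(exp_loss(m) for m in margins)
robust = sum(truncated_exp_loss(m) for m in margins)
print(plain, robust)  # plain exceeds 2900; robust stays below 4
```

Because the truncated loss is flat beyond the cap, the outlier also contributes zero gradient there, so subsequent boosting rounds stop concentrating weight on it.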
In this paper, we present a new learning method that uses prior information for three-layer neural networks. Usually, when neural networks are used for system identification, all of their weights are trained independently, without considering the interrelations among weight values, and the training results are often poor because each parameter influences the others during learning. To overcome this problem, we first give an exact mathematical equation that describes the relation between weight values given a set of data conveying prior information. We then present a new learning method that trains part of the weights and calculates the others from these exact mathematical equations. This method keeps the given mathematical structure exactly the same throughout learning; in other words, training is done so that the network follows a predetermined trajectory. Numerical computer simulation results are provided to support this approach.
This work was presented, in part, at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–22, 1999.
A rapid reinforcement method using low-power induction heating is developed. Reinforcement of a mass-spliced fiber-ribbon unit comprising five graded-index multimode fibers was completed in 30 s with good performance by supplying 30 W of electric power.
The results of studies, made from the standpoint of rheology, on methods of predicting the flow and deformation of fresh concrete and mortar, as basic research toward the rationalization of concrete construction, are described. This paper covers the following:
a method of measuring the viscosity of fresh concrete and mortar;
a method of estimating the deformation of fresh concrete due to its own weight;
a proposal of the "Inclined Pipe Test Method" as a new method of testing the consistency of grout mortar.
Kernel methods are known to be effective for nonlinear multivariate analysis. One of the main issues in their practical use is the selection of the kernel, and there have been many studies on kernel selection and kernel learning. Multiple kernel learning (MKL) is one of the promising kernel optimization approaches. Kernel methods are applied to various classifiers, including Fisher discriminant analysis (FDA). FDA gives the Bayes-optimal classification axis if the data distribution of each class in the feature space is Gaussian with a shared covariance structure. Based on this fact, an MKL framework based on the notion of Gaussianity is proposed. As a concrete implementation, an empirical characteristic function is adopted to measure Gaussianity in the feature space associated with a convex combination of kernel functions, and two MKL algorithms are derived. Experimental results on several data sets show that the proposed kernel learning followed by FDA offers strong classification power.
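The Gaussianity measure can be illustrated with a hedged one-dimensional sketch: compare the empirical characteristic function of a sample against the characteristic function of a moment-matched Gaussian. The paper applies this idea in the kernel-induced feature space; the frequencies, function names, and test distributions below are our assumptions.

```python
import cmath
import random

def ecf(xs, t):
    """Empirical characteristic function at frequency t."""
    return sum(cmath.exp(1j * t * x) for x in xs) / len(xs)

def gaussianity_gap(xs, ts=(0.5, 1.0, 1.5)):
    """Mean |ECF - Gaussian CF| with moment-matched mean and variance:
    near 0 for Gaussian samples, larger for clearly non-Gaussian ones."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    gap = 0.0
    for t in ts:
        gauss_cf = cmath.exp(1j * mu * t - 0.5 * var * t * t)
        gap += abs(ecf(xs, t) - gauss_cf)
    return gap / len(ts)

rng = random.Random(0)
gauss = [rng.gauss(0.0, 1.0) for _ in range(2000)]
bimodal = [rng.choice((-3.0, 3.0)) + rng.gauss(0.0, 0.1) for _ in range(2000)]
print(gaussianity_gap(gauss) < gaussianity_gap(bimodal))
```

In the MKL setting, minimizing such a gap over the convex combination coefficients of the kernels drives the per-class feature-space distributions toward the Gaussian regime in which FDA is Bayes optimal.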