Similar Documents
20 similar documents found (search time: 546 ms)
1.
We propose a new fluid modeling technique aimed at incorporating stochastic turbulence into a widely used Lagrangian fluid solver, the Smoothed Particle Hydrodynamics (SPH) method. We add to each SPH particle a swirling probability that models its likelihood of acting as a swirling incentive particle (SIP). Particles are randomly selected as SIPs based on this probability, and each SIP applies a rotational force that spins its neighboring particles around itself. The force is computed from the swirling vorticity of the SIP. We model the production, development, and spreading of the swirling probability and vorticity among all SPH particles. The algorithm inherently captures plausible turbulence evolution, including vortex aggregation and decay. The turbulent effects are non-repeating and easily controlled by animators. Our method is fully integrated with the SPH scheme with minimal extra memory usage, computational load, and programming effort.
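As an illustration of the core idea, the following is a minimal 2-D sketch of SIP selection and the rotational forcing of neighbors; the names (apply_sip_forces, swirl_prob, vorticity) and the simple tangential force are our assumptions, not the authors' implementation.

```python
import numpy as np

def apply_sip_forces(pos, vel, swirl_prob, vorticity, radius, dt, rng):
    """Randomly promote particles to SIPs; each SIP spins its neighbours."""
    n = len(pos)
    is_sip = rng.random(n) < swirl_prob              # stochastic SIP selection
    for i in np.flatnonzero(is_sip):
        d = pos - pos[i]
        r2 = np.einsum('ij,ij->i', d, d)
        nbrs = np.flatnonzero((r2 > 0.0) & (r2 < radius ** 2))
        if nbrs.size == 0:
            continue
        # Unit tangential direction induces rotation about the SIP;
        # the magnitude is driven by the SIP's swirling vorticity.
        tang = np.stack([-d[nbrs, 1], d[nbrs, 0]], axis=1)
        tang /= np.linalg.norm(tang, axis=1, keepdims=True)
        vel[nbrs] += dt * vorticity[i] * tang
    return vel
```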

2.
We describe a robust but simple algorithm to reconstruct a surface from a set of merged range scans. Our key contribution is the formulation of the surface reconstruction problem as an energy minimisation problem that explicitly models the scanning process. The adaptivity of the Delaunay triangulation is exploited by restricting the energy to inside/outside labelings of Delaunay tetrahedra. Our energy measures both the output surface quality and how well the surface agrees with soft visibility constraints. This energy is shown to fit naturally into the minimum s-t cut optimisation framework, allowing fast computation of a globally optimal tetrahedra labeling while avoiding the “shrinking bias” that usually plagues graph cut methods. The behaviour of our method in the presence of noise, undersampling, and outliers is evaluated on several data sets and compared with other methods through different experiments: its strong robustness makes our method practical not only for reconstruction from range data but also from typically more difficult dense point clouds, resulting for instance from stereo image matching. Our effective modeling of the surface acquisition inverse problem, along with the unique combination of Delaunay triangulation and minimum s-t cuts, makes the computational requirements of the algorithm scale well with respect to the size of the input point cloud.
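To make the optimisation structure concrete, here is a toy sketch of inside/outside labelling as a minimum s-t cut using networkx; the unary and pairwise weights are placeholders standing in for the paper's visibility and surface-quality terms.

```python
import networkx as nx

def label_cells(n_cells, unary_inside, unary_outside, pairwise):
    g = nx.DiGraph()
    s, t = 'src', 'sink'
    for c in range(n_cells):
        g.add_edge(s, c, capacity=unary_outside[c])   # paid if c labelled outside
        g.add_edge(c, t, capacity=unary_inside[c])    # paid if c labelled inside
    for (a, b), w in pairwise.items():                # smoothness between cells
        g.add_edge(a, b, capacity=w)
        g.add_edge(b, a, capacity=w)
    _, (src_side, _) = nx.minimum_cut(g, s, t)
    return {c: 'inside' if c in src_side else 'outside' for c in range(n_cells)}

labels = label_cells(3, unary_inside=[1.0, 5.0, 1.0],
                     unary_outside=[5.0, 1.0, 5.0],
                     pairwise={(0, 1): 0.5, (1, 2): 0.5})
print(labels)
```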

3.
In this paper we present extended definitions of k-anonymity and use them to prove that a given data mining model does not violate the k-anonymity of the individuals represented in the learning examples. Our extension provides a tool that measures the amount of anonymity retained during data mining. We show that our model can be applied to various data mining problems, such as classification, association rule mining and clustering. We describe two data mining algorithms which exploit our extension to guarantee they will generate only k-anonymous output, and provide experimental results for one of them. Finally, we show that our method contributes new and efficient ways to anonymize data and preserve patterns during anonymization.
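For reference, a minimal sketch of the standard k-anonymity check that these extended definitions build on; the table and quasi-identifier columns are illustrative.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """Every combination of quasi-identifier values must occur >= k times."""
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(count >= k for count in groups.values())

records = [{'age': 34, 'zip': '02138', 'disease': 'flu'},
           {'age': 34, 'zip': '02138', 'disease': 'cold'}]
print(is_k_anonymous(records, ['age', 'zip'], k=2))   # True
```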

4.
We present an importance sampling method for the bidirectional scattering distribution function (BSDF) of hair. Our method is based on the multi‐lobe hair scattering model presented by Sadeghi et al. [SPJT10]. We reduce noise by drawing samples from a distribution that approximates the BSDF well. Our algorithm is efficient and easy to implement, since the sampling process requires only the evaluation of a few analytic functions, with no significant memory overhead or need for precomputation. We tested our method in a research raytracer and a production renderer based on micropolygon rasterization. We show significant improvements for rendering direct illumination using multiple importance sampling and for rendering indirect illumination using path tracing.
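The following generic sketch illustrates the principle: draw samples from an analytic distribution that approximates a scattering lobe, then weight by f/pdf. The Gaussian lobe is an illustrative stand-in, not the actual Sadeghi et al. lobes.

```python
import math, random

def f_lobe(theta, mean=0.3, width=0.15):
    return math.exp(-0.5 * ((theta - mean) / width) ** 2)   # unnormalised lobe

def sample_lobe(mean=0.3, width=0.15):
    theta = random.gauss(mean, width)                        # analytic sampling
    pdf = math.exp(-0.5 * ((theta - mean) / width) ** 2) \
          / (width * math.sqrt(2 * math.pi))
    return theta, pdf

# Here f matches the sampling pdf up to normalisation, so the estimator is
# exact; in practice the approximation is close but not exact, giving a
# low- (not zero-) variance estimate.
estimate = 0.0
n = 10000
for _ in range(n):
    theta, pdf = sample_lobe()
    estimate += f_lobe(theta) / pdf / n
print(estimate)   # ~ integral of f_lobe
```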

5.
Modern multicore processors, such as the Cell Broadband Engine, achieve high performance by equipping accelerator cores with small “scratch-pad” memories. The price for increased performance is higher programming complexity – the programmer must manually orchestrate data movement using direct memory access (DMA) operations. Programming using asynchronous DMA operations is error-prone, and DMA races can lead to nondeterministic bugs which are hard to reproduce and fix. We present a method for DMA race analysis in C programs. Our method works by automatically instrumenting a program with assertions modeling the semantics of a memory flow controller. The instrumented program can then be analyzed using state-of-the-art software model checkers. We show that bounded model checking is effective for detecting DMA races in buggy programs. To enable automatic verification of the correctness of instrumented programs, we present a new formulation of k-induction geared towards software, as a proof rule operating on loops. Our techniques are implemented as a tool, Scratch, which we apply to a large set of programs supplied with the IBM Cell SDK, in which we discover a previously unknown bug. Our experimental results indicate that our k-induction method performs extremely well on this problem class. To our knowledge, this marks both the first application of k-induction to software verification, and the first example of software model checking in the context of heterogeneous multicore processors.
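A toy sketch of the instrumentation idea: model the memory flow controller by tracking pending transfers and asserting that a new transfer does not overlap an outstanding one. The dma_get/dma_wait names mirror the Cell SDK style, but the code below is an illustrative model, not the tool's actual instrumentation.

```python
pending = {}   # tag -> (start, end) of an outstanding transfer

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def dma_get(dst, size, tag):
    region = (dst, dst + size)
    for t, r in pending.items():
        # The assertion encodes the memory-flow-controller semantics: two
        # in-flight transfers must not touch overlapping address ranges.
        assert not overlaps(region, r), f"DMA race: tag {tag} overlaps tag {t}"
    pending[tag] = region

def dma_wait(tag):
    pending.pop(tag, None)   # transfer completed; region no longer in flight

dma_get(0x100, 64, tag=1)
dma_wait(tag=1)
dma_get(0x120, 64, tag=2)    # safe: transfer 1 already completed
```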

6.
What can two images tell us about a third one?
This paper discusses the problem of predicting image features in an image from image features in two other images and the epipolar geometry between the three images. We adopt the most general camera model of perspective projection and show that a point can be predicted in the third image as a bilinear function of its images in the first two cameras, that the tangents to three corresponding curves are related by a trilinear function, and that the curvature of a curve in the third image is a linear function of the curvatures at the corresponding points in the other two images. Our analysis relies heavily on the use of the fundamental matrix, which has been recently introduced (Faugeras et al., 1992), and on the properties of a special plane which we call the trifocal plane. Though the trinocular geometry of points and lines has been very recently addressed, our use of the differential properties of curves for prediction is unique. We thus completely solve the following problem: given two views of an object, predict what a third view would look like. The problem and its solution bear upon several areas of computer vision: stereo, motion analysis, and model-based object recognition. Our answer is quite general, since it assumes the general perspective projection model for image formation and requires only the knowledge of the epipolar geometry for the triple of views. We show that in the special case of orthographic projection our results for points reduce to those of Ullman and Basri (1991). We demonstrate the applicability of our theory on synthetic as well as real data.
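The point-transfer result can be sketched directly: given fundamental matrices F13 and F23 (assumed known) relating views 1 and 2 to view 3, the predicted point is the intersection of the two epipolar lines, and each coordinate is bilinear in x1 and x2.

```python
import numpy as np

def transfer_point(x1, x2, F13, F23):
    """x1, x2: homogeneous points in views 1 and 2; returns point in view 3."""
    l1 = F13 @ x1          # epipolar line of x1 in image 3
    l2 = F23 @ x2          # epipolar line of x2 in image 3
    x3 = np.cross(l1, l2)  # the lines intersect at the predicted point
    return x3 / x3[2]      # dehomogenise
```

The construction degenerates when the two epipolar lines coincide, which is precisely the trifocal-plane configuration the paper analyses.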

7.

Policymakers and analysts are heavily promoting data marketplaces to foster data trading between companies. The existing business model literature covers individually owned, multilateral data marketplaces. However, these particular types of data marketplaces rarely reach commercial exploitation. This paper develops business model archetypes for the full array of data marketplace types, ranging from private to independent ownership and from a hierarchical to a market orientation. Through exploratory interviews and case analyses, we create a business model taxonomy. Patterns in our taxonomy reveal four business model archetypes. We find that privately owned data marketplaces with a hierarchical orientation apply the aggregating data marketplace archetype. Consortium-owned data marketplaces apply the archetypes of the aggregating data marketplace with additional brokering services and the consulting data marketplace. Independently owned data marketplaces with a market orientation apply the facilitating data marketplace archetype. Our results provide a basis for configurational theory that explains the performance of data marketplace business models. They also provide a basis for specifying boundary conditions for theory on data marketplace business models, as, for instance, the importance of network effects differs strongly between the archetypes.


8.
We present an unsupervised approach for learning a layered representation of a scene from a video for motion segmentation. Our method is applicable to any video containing piecewise parametric motion. The learnt model is a composition of layers, which consist of one or more segments. The shape of each segment is represented using a binary matte, and its appearance is given by the RGB value of each point belonging to the matte. Included in the model are the effects of image projection, lighting, and motion blur. Furthermore, spatial continuity is explicitly modeled, resulting in contiguous segments. Unlike previous approaches, our method does not use reference frame(s) for initialization. The two main contributions of our method are: (i) a novel algorithm for obtaining the initial estimate of the model by dividing the scene into rigidly moving components using efficient loopy belief propagation; and (ii) refining the initial estimate using αβ-swap and α-expansion algorithms, which guarantee a strong local minimum. Results are presented on several classes of objects with different types of camera motion, e.g. videos of a human walking shot with static or translating cameras. We compare our method with the state of the art and demonstrate significant improvements.

9.
Song, Hwanjun; Kim, Sundong; Kim, Minseok; Lee, Jae-Gil. Machine Learning, 2020, 109(9-10): 1837-1853

Neural networks converge faster with help from a smart batch selection strategy. In this regard, we propose Ada-Boundary, a novel and simple adaptive batch selection algorithm that constructs an effective mini-batch according to the learning progress of the model. Our key idea is to exploit confusing samples for which the model cannot predict labels with high confidence. Thus, samples near the current decision boundary are considered to be the most effective for expediting convergence. Taking advantage of this design, Ada-Boundary remains effective across varying degrees of training difficulty. We demonstrate the advantage of Ada-Boundary through extensive experiments using CNNs with five benchmark data sets. Ada-Boundary reduced test error by up to 31.80% relative to the baseline for a fixed wall-clock training time, thereby achieving a faster convergence speed.
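A simplified sketch of boundary-focused batch selection: rank samples by how close the model's confidence in the true label is to the decision boundary and preferentially sample those. This only illustrates the idea; Ada-Boundary uses its own distance measure and sampling scheme, and the names below are ours.

```python
import numpy as np

def select_batch(probs_true_label, batch_size, rng):
    """probs_true_label: model's softmax probability of each sample's label."""
    distance = np.abs(probs_true_label - 0.5)   # closeness to the boundary
    rank = np.argsort(np.argsort(distance))     # rank 0 = most confusing
    weights = 1.0 / (rank + 1.0)                # favour boundary samples
    weights /= weights.sum()
    return rng.choice(len(probs_true_label), size=batch_size,
                      replace=False, p=weights)

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, size=1000)                # stand-in confidences
print(select_batch(p, batch_size=32, rng=rng))
```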


10.
Standard practice in building models in software engineering normally involves three steps: (1) collecting domain knowledge (previous results, expert knowledge); (2) building a skeleton of the model based on step 1, including as-yet-unknown parameters; (3) estimating the model parameters using historical data. Our experience shows that it is extremely difficult to obtain reliable data of the required granularity, or of the required volume, with which we could later generalize our conclusions. Therefore, in searching for a method for building a model we cannot consider methods requiring large volumes of data. This paper discusses an experiment to develop a causal model (Bayesian net) for predicting the number of residual defects that are likely to be found during independent testing or operational usage. The approach supports (1) and (2) and does not require (3), yet still makes accurate defect predictions (an R^2 of 0.93 between predicted and actual defects). Since our method does not require detailed domain knowledge, it can be applied very early in the process life cycle. The model incorporates a set of quantitative and qualitative factors describing a project and its development process, which are inputs to the model. The model variables, as well as the relationships between them, were identified as part of a major collaborative project. A dataset, elicited from 31 completed software projects in the consumer electronics industry, was gathered using a questionnaire distributed to managers of recent projects. We used this dataset to validate the model by analyzing several popular evaluation measures (R^2, measures based on relative error, and Pred). The validation results also confirm the need for using the qualitative factors in the model. The dataset may be of interest to other researchers evaluating models with similar aims. Based on some typical scenarios, we demonstrate how the model can be used for better decision support in operational environments. We also performed a sensitivity analysis to identify the variables with the most influence on the number of residual defects. This showed that project size, the scale of distributed communication, and project complexity cause most of the variation in the number of defects in our model. We make both the dataset and the causal model available for research use.

11.
This paper presents a novel intonation modelling approach and demonstrates its applicability using the Standard Yorùbá language. Our approach is motivated by the theory that abstract and realised forms of intonation and other dimensions of prosody should be modelled within a modular and unified framework. In our model, this framework is implemented using the Relational Tree (R-Tree) technique. The R-Tree is a sophisticated data structure for representing a multi-dimensional waveform in the form of a tree. Our R-Tree for an utterance is generated in two steps. First, the abstract structure of the waveform, called the Skeletal Tree (S-Tree), is generated using tone phonological rules for the target language. Second, the numerical values of the perceptually significant peaks and valleys on the S-Tree are computed using a fuzzy-logic-based model. The resulting points are then joined by applying interpolation techniques. The actual intonation contour is synthesised by Pitch Synchronous Overlap and Add (PSOLA) using the Praat software. We performed both quantitative and qualitative evaluations of our model. The preliminary results suggest that, although the model does not predict the numerical speech data as accurately as contemporary data-driven approaches, it produces synthetic speech with comparable intelligibility and naturalness. Furthermore, our model is easy to implement, interpret, and adapt to other tone languages.

12.
The block-cyclic data distribution is commonly used to organize array elements over the processors of a coarse-grained distributed memory parallel computer. In many scientific applications, the data layout must be reorganized at run-time in order to enhance locality and reduce remote memory access overheads. In this paper we present a general framework for developing array redistribution algorithms. Using this framework, we have developed efficient algorithms that redistribute an array from one block-cyclic layout to another. Block-cyclic redistribution consists of index set computation, wherein the destination locations for individual data blocks are calculated, and data communication, wherein these blocks are exchanged between processors. The framework treats both of these operations in a uniform and integrated way. We have developed efficient and distributed algorithms for index set computation that do not require any interprocessor communication. To perform data communication in a conflict-free manner, we have developed direct, indirect, and hybrid algorithms. In the direct algorithm, a data block is transferred directly to its destination processor. In an indirect algorithm, data blocks are moved from source to destination processors through intermediate relay processors. The hybrid algorithm is a combination of the direct and indirect algorithms. Our framework is based on a generalized circulant matrix formalism of the redistribution problem and a general-purpose distributed memory model of the parallel machine. Our algorithms sustain excellent performance over a wide range of problem and machine parameters. We have implemented our algorithms using MPI, to allow for easy portability across different HPC platforms. Experimental results on the IBM SP-2 and the Cray T3D show superior performance over previous approaches. When the block size of the cyclic data layout changes by a factor of K, the redistribution can be performed in O(log K) communication steps. This is true even when K is a prime number. In contrast, previous approaches take O(K) communication steps for redistribution. Our framework can be used for developing scalable redistribution libraries, for efficiently implementing parallelizing compiler directives, and for developing parallel algorithms for various applications. Redistribution algorithms are especially useful in signal processing applications, where the data access patterns change significantly between computational phases. They are also necessary in linear algebra programs, to perform matrix transpose operations.
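A minimal sketch of the index-set computation step: map each array element from a cyclic(b1) layout on P processors to its owner under a cyclic(b2) layout. As the abstract notes, this step needs no interprocessor communication; the function names are illustrative.

```python
def owner(index, block_size, n_procs):
    """Processor owning `index` under a block-cyclic(block_size) layout."""
    return (index // block_size) % n_procs

def redistribution_pairs(n, b1, b2, n_procs):
    """(source, destination) processor for every element: the index sets."""
    return [(owner(i, b1, n_procs), owner(i, b2, n_procs)) for i in range(n)]

# Example: 16 elements, 4 processors, block size changing from 2 to 4.
print(redistribution_pairs(16, 2, 4, 4))
```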

13.
We propose a novel framework for the automatic discovery and learning of behavioural context for video-based complex behaviour recognition and anomaly detection. Our work differs from most previous efforts on learning visual context in that our model learns multi-scale spatio-temporal rather than static context. Specifically, three types of behavioural context are investigated: behaviour spatial context, behaviour correlation context, and behaviour temporal context. To that end, the proposed framework consists of an activity-based semantic scene segmentation model for learning behaviour spatial context, and a cascaded probabilistic topic model for learning both behaviour correlation context and behaviour temporal context at multiple scales. These behaviour context models are deployed for recognising non-exaggerated multi-object interactive and co-existence behaviours in public spaces. In particular, we develop a method for detecting subtle behavioural anomalies against the learned context. The effectiveness of the proposed approach is validated by extensive experiments carried out using data captured from complex and crowded outdoor scenes.

14.
We present a new real‐time approach to simulate deformable objects using a learnt statistical model to achieve a high degree of realism. Our approach improves upon state‐of‐the‐art interactive shape‐matching meshless simulation methods by not only capturing important nuances of an object's kinematics but also of its dynamic texture variation. We are able to achieve this in an automated pipeline from data capture to simulation. Our system allows for the capture of idiosyncratic characteristics of an object's dynamics which for many simulations (e.g. facial animation) is essential. We allow for the plausible simulation of mechanically complex objects without knowledge of their inner workings. The main idea of our approach is to use a flexible statistical model to achieve a geometrically‐driven simulation that allows for arbitrarily complex yet easily learned deformations while at the same time preserving the desirable properties (stability, speed and memory efficiency) of current shape‐matching simulation systems. The principal advantage of our approach is the ease with which a pseudo‐mechanical model can be learned from 3D scanner data to yield realistic animation. We present examples of non‐trivial biomechanical objects simulated on a desktop machine in real‐time, demonstrating superior realism over current geometrically motivated simulation techniques.

15.
Simulation and visualization of aeolian sand movement and sand ripple evolution are a challenging subject. In this paper, we propose a physically based modeling and simulation method that can be used to synthesize sandy terrain in various patterns. Our method is based on the mechanical behavior of individual sand grains, which is widely studied in the physics of blown sand. We account for the significant mechanisms of sand transportation in the sand model, such as saltation, successive saltation, and collapsing, while simplifying the vegetation and wind field models to make the simulation feasible and affordable. We implemented the proposed method on the programmable graphics processing unit (GPU) to achieve real-time simulation and rendering. Finally, we show through several demonstrations that our method reflects many characteristics of sand ripple evolution. We also present several synthesized desert scenes generated from the simulated height field to demonstrate the method's practical applicability.
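As a rough illustration of saltation on a discrete height field, here is a Werner-style toy step (a classic discrete model of the same transport mechanism): erode a grain, hop it downwind, and deposit with higher probability on sandy cells. The hop length and deposition probabilities are illustrative parameters; the paper's grain-level mechanics are considerably richer.

```python
import numpy as np

def saltation_step(h, hop=5, p_deposit_sand=0.6, p_deposit_bare=0.4, rng=None):
    rng = rng or np.random.default_rng()
    n = len(h)
    i = rng.integers(n)
    if h[i] == 0:
        return h
    h[i] -= 1                        # erosion: a grain takes off
    j = i
    while True:
        j = (j + hop) % n            # transport downwind
        p = p_deposit_sand if h[j] > 0 else p_deposit_bare
        if rng.random() < p:
            h[j] += 1                # deposition; repeated hops form ripples
            return h

h = np.full(200, 3)                  # flat bed, 3 grains per cell
rng = np.random.default_rng(0)
for _ in range(200000):
    saltation_step(h, rng=rng)
```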

16.
Robust Higher Order Potentials for Enforcing Label Consistency
This paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner. Our method is based on higher order conditional random fields and uses potentials defined on sets of pixels (image segments) generated using unsupervised segmentation algorithms. These potentials enforce label consistency in image regions and can be seen as a generalization of the commonly used pairwise contrast-sensitive smoothness potentials. The higher order potential functions used in our framework take the form of the Robust P^n model and are more general than the P^n Potts model recently proposed by Kohli et al. We prove that the optimal swap and expansion moves for energy functions composed of these potentials can be computed by solving an s-t mincut problem. This enables the use of powerful graph-cut-based move-making algorithms for performing inference in the framework. We test our method on the problem of multi-class object segmentation by augmenting the conventional CRF used for object segmentation with higher order potentials defined on image regions. Experiments on challenging data sets show that the integration of higher order potentials quantitatively and qualitatively improves results, leading to much better definition of object boundaries. We believe that this method can be used to yield similar improvements for many other labelling problems.
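A sketch of evaluating a Robust P^n-style potential on one segment: the penalty grows linearly with the number of pixels that disagree with the segment's dominant label, truncated at gamma_max. The parameterisation below is a simplified reading of the model, not the paper's exact form.

```python
from collections import Counter

def robust_pn_cost(labels_in_segment, gamma_max, q):
    """labels_in_segment: label of each pixel in one unsupervised segment."""
    counts = Counter(labels_in_segment)
    n_disagree = len(labels_in_segment) - max(counts.values())
    return min(gamma_max, n_disagree * gamma_max / q)  # truncated linear cost

print(robust_pn_cost(['sky'] * 95 + ['tree'] * 5, gamma_max=10.0, q=20))  # 2.5
```

Unlike a strict P^n Potts potential, which charges the full gamma_max as soon as a single pixel disagrees, the truncated linear form tolerates a few inconsistent pixels, which is what makes the potential robust.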

17.
Exploratory spatial analysis is increasingly necessary as larger spatial data sets are managed in electronic media. We propose an exploratory method that reveals a robust clustering hierarchy from 2-D point data. Our approach uses the Delaunay diagram to incorporate spatial proximity. It does not require prior knowledge about the data set, nor does it require preconditions. Multi-level clusters are successfully discovered by this new method in only O(n log n) time, where n is the size of the data set. The efficiency of our method allows us to construct and display a new type of tree graph that facilitates understanding of the complex hierarchy of clusters. We show that clustering methods adopting a raster-like or vector-like representation of proximity are not appropriate for spatial clustering. We conduct an experimental evaluation with synthetic data sets as well as real data sets to illustrate the robustness of our method.
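A sketch of the Delaunay-based idea: build the Delaunay diagram, drop edges much longer than average, and read clusters off as connected components. The single global length threshold here is a simplification; the paper derives multi-level criteria from the data itself.

```python
import numpy as np
from scipy.spatial import Delaunay
import networkx as nx

def delaunay_clusters(points, factor=2.0):
    tri = Delaunay(points)
    g = nx.Graph()
    g.add_nodes_from(range(len(points)))
    edges = set()
    for simplex in tri.simplices:                 # collect triangulation edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    lengths = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
    cutoff = factor * np.mean(list(lengths.values()))
    g.add_edges_from(e for e, L in lengths.items() if L <= cutoff)
    return list(nx.connected_components(g))       # clusters as components

pts = np.vstack([np.random.default_rng(1).normal(c, 0.1, (30, 2))
                 for c in (0.0, 5.0)])
print([len(c) for c in delaunay_clusters(pts)])   # expect two clusters of 30
```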

18.
Computer perception of biological motion is key to developing convenient and powerful human–computer interfaces. Algorithms have been developed for tracking the body; however, initialization is done by hand. We propose a method for detecting a moving human body and for labeling its parts automatically in scenes that include extraneous motions and occlusion. We assume a Johansson display, i.e., that a number of moving features, some representing the unoccluded body joints and some belonging to the background, are supplied in the scene. Our method is based on maximizing the joint probability density function (PDF) of the position and velocity of the body parts. The PDF is estimated from training data. Dynamic programming is used to efficiently calculate the best global labeling on an approximation of the PDF. Detection is performed by hypothesis testing on the best labeling found. The computational cost is on the order of N^4, where N is the number of features detected. We explore the performance of our method with experiments carried out on a variety of periodic and nonperiodic body motions viewed monocularly, for a total of approximately 30,000 frames. The algorithm is demonstrated to be accurate and efficient.
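For intuition, here is a dynamic-programming labelling sketch for a chain-structured approximation of the body model: pick, for each part, the feature that maximises the sum of unary and pairwise log-likelihoods. The chain structure, the log-likelihood tables, and the function names are our simplifying assumptions.

```python
import numpy as np

def best_labeling(unary, pairwise):
    """unary: (parts, features) log-likelihoods;
    pairwise: list of (features, features) log-likelihood tables per link."""
    parts, feats = unary.shape
    score = unary[0].copy()
    back = []
    for p in range(1, parts):            # forward pass over the chain
        total = score[:, None] + pairwise[p - 1] + unary[p][None, :]
        back.append(total.argmax(axis=0))
        score = total.max(axis=0)
    labels = [int(score.argmax())]       # backtrack the best assignment
    for bp in reversed(back):
        labels.append(int(bp[labels[-1]]))
    return labels[::-1]

rng = np.random.default_rng(0)
unary = np.log(rng.uniform(0.1, 1.0, (5, 8)))   # 5 parts, 8 candidate features
pairwise = [np.log(rng.uniform(0.1, 1.0, (8, 8))) for _ in range(4)]
print(best_labeling(unary, pairwise))
```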

19.
Improving Markov Chain Monte Carlo Model Search for Data Mining
Giudici, Paolo; Castelo, Robert. Machine Learning, 2003, 50(1-2): 127-158
The motivation of this paper is the application of MCMC model scoring procedures to data mining problems, which involve a large number of competing models and other relevant model choice aspects. To achieve this aim we analyze one of the most popular Markov Chain Monte Carlo methods for structural learning in graphical models, namely the MC^3 algorithm proposed by D. Madigan and J. York (International Statistical Review, 63, 215–232, 1995). Our aim is to improve their algorithm to make it an effective and reliable tool in the field of data mining. In such a context, typically highly dimensional in the number of variables, little can be known a priori and, therefore, a good model search algorithm is crucial. We present and describe in detail our implementation of the MC^3 algorithm, which provides an efficient general framework for computations with both Directed Acyclic Graphical (DAG) models and Undirected Decomposable Graphical (UDG) models. We believe that the possibility of commuting easily between the two classes of models constitutes an important asset in data mining, where a priori knowledge of causal effects is usually difficult to establish. Furthermore, in order to improve the MC^3 method, we provide several graphical monitors which help in extracting results and assessing the goodness of the Markov chain Monte Carlo approximation to the posterior distribution of interest. We apply our proposed methodology first to the well-known coronary heart disease dataset (D. Edwards & T. Havránek, Biometrika, 72:2, 339–351, 1985). We then introduce a novel data mining application which concerns market basket analysis.
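A minimal Metropolis-style structural search sketch in the spirit of MC^3: propose toggling a single edge and accept with the ratio of model scores. Real MC^3 operates on directed acyclic graphs with acyclicity checks and posterior model scores; both are elided here, and the score function is a placeholder.

```python
import random

def mc3_search(n_nodes, score, n_iters, rng=None):
    """score: maps a frozenset of edges to a positive model score."""
    rng = rng or random.Random(0)
    edges = frozenset()
    current = score(edges)
    for _ in range(n_iters):
        a, b = rng.sample(range(n_nodes), 2)
        proposal = edges ^ {(min(a, b), max(a, b))}   # toggle one edge
        new = score(proposal)
        if rng.random() < min(1.0, new / current):    # Metropolis acceptance
            edges, current = proposal, new
    return edges

# Toy score: prefer graphs with exactly two edges.
print(mc3_search(4, lambda e: 1.0 / (1.0 + abs(len(e) - 2)), 1000))
```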

20.
Our aim is to develop new database technologies for the approximate matching of unstructured string data using indexes. We explore the potential of the suffix tree data structure in this context. We present a new method of building suffix trees, allowing us to build trees in excess of RAM size, which has hitherto not been possible. We show that this method performs in practice as well as the O(n) method of Ukkonen [70]. Using this method we build indexes for 200 Mb of protein and 300 Mbp of DNA, whose disk image exceeds the available RAM. We show experimentally that suffix trees can be effectively used in approximate string matching with biological data. For a range of query lengths and error bounds the suffix tree reduces the size of the unoptimised O(mn) dynamic programming calculation required in the evaluation of string similarity, and the gain from indexing increases with index size. In the indexes we built this reduction is significant, and less than 0.3% of the expected matrix is evaluated. We detail the requirements for further database and algorithmic research to support efficient use of large suffix indexes in biological applications.
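For reference, the unoptimised O(mn) dynamic-programming calculation that the suffix index prunes is the classic edit-distance recurrence below; in the indexed setting, only suffix-tree subtrees whose prefix can still match within the error bound are expanded, so most of this matrix is never evaluated.

```python
def edit_distance(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # deletions
    for j in range(n + 1):
        d[0][j] = j                      # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # delete
                          d[i][j - 1] + 1,                       # insert
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitute
    return d[m][n]

print(edit_distance("ACGT", "AGGT"))     # 1
```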
