Similar Documents
Found 20 similar documents (search took 31 ms).
1.
The analysis and exploration of multidimensional and multivariate data is still one of the most challenging areas in the field of visualization. In this paper, we describe an approach to visual analysis of an especially challenging set of problems that exhibit a complex internal data structure. We describe the interactive visual exploration and analysis of data that includes several (usually large) families of function graphs f_i(x, t). We describe analysis procedures and practical aspects of the interactive visual analysis specific to this type of data (with emphasis on the function graph characteristic of the data). We adopted the well-proven approach of multiple, linked views with advanced interactive brushing to assess the data. Standard views such as histograms, scatterplots, and parallel coordinates are used to jointly visualize data. We support iterative visual analysis by providing means to create complex, composite brushes that span multiple views and that are constructed using different combination schemes. We demonstrate that engineering applications represent a challenging but very applicable area for visual analytics. As a case study, we describe the optimization of a fuel injection system in diesel engines of passenger cars.
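A minimal sketch of the composite-brushing idea described above: each view contributes a boolean mask over the same records, and masks are merged with a combination scheme. The function names and scheme labels here are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def combine(mask_a, mask_b, scheme):
    """Merge two per-view brushes into one composite selection."""
    if scheme == "and":
        return mask_a & mask_b
    if scheme == "or":
        return mask_a | mask_b
    if scheme == "sub":          # remove b's selection from a's
        return mask_a & ~mask_b
    raise ValueError(scheme)

# Example: a brush in a scatterplot view and one in a histogram view,
# both defined over the same four records.
x = np.array([0.1, 0.5, 0.9, 0.3])
t = np.array([10, 20, 30, 40])
scatter_brush = x > 0.2          # records selected in the scatterplot
hist_brush = t < 35              # records selected in the histogram

composite = combine(scatter_brush, hist_brush, "and")
print(composite.tolist())        # [False, True, True, False]
```

Because each brush is just a mask, arbitrarily deep combinations (brushes of brushes) compose naturally across linked views.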

2.
Blood flow and derived data are essential to investigate the initiation and progression of cerebral aneurysms as well as their risk of rupture. An effective visual exploration of several hemodynamic attributes like the wall shear stress (WSS) and the inflow jet is necessary to understand the hemodynamics. Moreover, the correlation between focus-and-context attributes is of particular interest. An expressive visualization of these attributes and anatomic information requires appropriate visualization techniques to minimize visual clutter and occlusions. We present the FLOWLENS as a focus-and-context approach that addresses these requirements. We group relevant hemodynamic attributes into pairs of focus-and-context attributes and assign them to different anatomic scopes. For each scope, we propose several FLOWLENS visualization templates to provide a flexible visual filtering of the involved hemodynamic pairs. A template consists of the visualization of the focus attribute and the additional depiction of the context attribute inside the lens. Furthermore, the FLOWLENS supports local probing and the exploration of attribute changes over time. The FLOWLENS minimizes visual clutter and occlusions and provides a flexible exploration of a region of interest. We have applied our approach to seven representative datasets, including steady and unsteady flow data from CFD simulations and 4D PC-MRI measurements. Informal user interviews with three domain experts confirm the usefulness of our approach.

3.
Graph analysis by data visualization involves achieving a series of topology-based tasks. When the graph data belongs to a data domain that contains multiple node and link types, as in the case of semantic graphs, topology-based tasks become more challenging. To reduce visual complexity in semantic graphs, we propose an approach based on applying relational operations such as selecting and joining nodes of different types. We use node aggregation to reflect the relational operations in the graph. We introduce glyphs for representing aggregated nodes. Using glyphs lets us encode connectivity information of multiple nodes with a single glyph. We also use visual parameters of the glyph to encode node attributes or type-specific information. Rather than performing the operations in the data abstraction layer and presenting the user with the resulting visualization, we propose an interactive approach where the user can iteratively apply the relational operations directly on the visualization. We demonstrate the efficiency of our method through a usability study that includes a case study on a subset of the Internet Movie Database. The results of the controlled experiment in our usability study indicate a statistically significant contribution in reducing the completion time of the evaluation tasks.

4.
Many origin-destination datasets have become available in recent years, e.g. flows of people, animals, money, material, or network traffic between pairs of locations, but appropriate techniques for their exploration still have to be developed. Especially, supporting the analysis of datasets with a temporal dimension remains a significant challenge. Many techniques for the exploration of spatio-temporal data have been developed, but they prove to be only of limited use when applied to temporal origin-destination datasets. We present Flowstrates, a new interactive visualization approach in which the origins and the destinations of the flows are displayed in two separate maps, and the changes over time of the flow magnitudes are represented in a separate heatmap view in the middle. This allows the users to perform spatial visual queries, focusing on different regions of interest for the origins and destinations, and to analyze the changes over time by means of the flow ordering, filtering, and aggregation provided in the heatmap. In this paper, we discuss the challenges associated with the visualization of temporal origin-destination data, introduce our solution, and present several usage scenarios showing how the tool we have developed supports them.
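A minimal sketch of the data behind a Flowstrates-style heatmap: rows are origin-destination pairs, columns are time steps, and cells hold aggregated flow magnitudes. The record layout and the ordering criterion (total magnitude) are illustrative assumptions.

```python
from collections import defaultdict

# (origin, destination, time step, magnitude) records; toy data.
flows = [
    ("A", "X", 0, 5), ("A", "X", 1, 7),
    ("B", "X", 0, 2), ("A", "Y", 1, 4),
]

# Aggregate magnitudes per OD pair and time step.
heatmap = defaultdict(lambda: defaultdict(float))
for origin, dest, t, mag in flows:
    heatmap[(origin, dest)][t] += mag

# Order heatmap rows by total magnitude, one of the orderings the
# heatmap view supports.
rows = sorted(heatmap, key=lambda od: -sum(heatmap[od].values()))
print(rows[0], dict(heatmap[rows[0]]))  # ('A', 'X') {0: 5.0, 1: 7.0}
```

Filtering (dropping rows below a magnitude threshold) and aggregation (merging OD pairs by region) both operate on this same row-keyed structure.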

5.
In this paper we propose an approach in which interactive visualization and analysis are combined with batch tools for the processing of large data collections. Large and heterogeneous data collections are difficult to analyze and pose specific problems to interactive visualization. Application of the traditional interactive processing and visualization approaches, as well as batch processing, encounters considerable drawbacks for such large and heterogeneous data collections due to the amount and type of data. Computing resources are not sufficient for interactive exploration of the data, and automated analysis has the disadvantage that the user has only limited control and feedback on the analysis process. In our approach, an analysis procedure with features and attributes of interest for the analysis is defined interactively. This procedure is used for off-line processing of large collections of data sets. The results of the batch process along with "visual summaries" are used for further analysis. Visualization is not only used for the presentation of the result, but also as a tool to monitor the validity and quality of the operations performed during the batch process. Operations such as feature extraction and attribute calculation of the collected data sets are validated by visual inspection. This approach is illustrated by an extensive case study, in which a collection of confocal microscopy data sets is analyzed.

6.
Categorical data dimensions appear in many real-world data sets, but few visualization methods exist that properly deal with them. Parallel Sets are a new method for the visualization and interactive exploration of categorical data that shows data frequencies instead of the individual data points. The method is based on the axis layout of parallel coordinates, with boxes representing the categories and parallelograms between the axes showing the relations between categories. In addition to the visual representation, we designed a rich set of interactions. Parallel Sets allow the user to interactively remap the data to new categorizations and, thus, to consider more data dimensions during exploration and analysis than usually possible. At the same time, a metalevel, semantic representation of the data is built. Common procedures, like building the cross product of two or more dimensions, can be performed automatically, thus complementing the interactive visualization. We demonstrate Parallel Sets by analyzing a large CRM data set, as well as investigating housing data from two US states.
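A minimal sketch of the frequencies a Parallel Sets display is built from: category boxes on one axis are sized by marginal frequency, and the parallelograms between two adjacent axes are sized by joint frequency. The record fields and category names are illustrative.

```python
from collections import Counter
from itertools import product

records = [
    {"class": "1st", "survived": "yes"},
    {"class": "1st", "survived": "no"},
    {"class": "3rd", "survived": "no"},
    {"class": "3rd", "survived": "no"},
]

# Box widths on the "class" axis: marginal frequencies.
boxes = Counter(r["class"] for r in records)
# Parallelogram widths between the "class" and "survived" axes:
# joint frequencies of adjacent-axis category pairs.
ribbons = Counter((r["class"], r["survived"]) for r in records)

print(boxes["3rd"])            # 2
print(ribbons[("3rd", "no")])  # 2

# The "cross product of two dimensions" is the full key space the
# ribbons live in; empty combinations simply have frequency zero.
cross = list(product(boxes, Counter(r["survived"] for r in records)))
```

Remapping to a new categorization just means rewriting the key function before counting, which is why it composes cleanly with the interactive view.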

7.
We develop an interactive analysis and visualization tool for probabilistic segmentation in medical imaging. The originality of our approach is that the data exploration is guided by shape and appearance knowledge learned from expert-segmented images of a training population. We introduce a set of multidimensional transfer function widgets to analyze the multivariate probabilistic field data. These widgets furnish the user with contextual information about conformance or deviation from the population statistics. We demonstrate the user's ability to identify suspicious regions (e.g. tumors) and to correct the misclassification results. We evaluate our system and demonstrate its usefulness in the context of static anatomical and time-varying functional imaging datasets.

8.
Scatterplots remain one of the most popular and widely used visual representations for multidimensional data due to their simplicity, familiarity and visual clarity, even if they lack some of the flexibility and visual expressiveness of newer multidimensional visualization techniques. This paper presents new interactive methods to explore multidimensional data using scatterplots. This exploration is performed using a matrix of scatterplots that gives an overview of the possible configurations, thumbnails of the scatterplots, and support for interactive navigation in the multidimensional space. Transitions between scatterplots are performed as animated rotations in 3D space, somewhat akin to rolling dice. Users can iteratively build queries using bounding volumes in the dataset, sculpting the query from different viewpoints to become more and more refined. Furthermore, the dimensions in the navigation space can be reordered, manually or automatically, to highlight salient correlations and differences among them. An example scenario presents the interaction techniques supporting smooth and effortless visual exploration of multidimensional datasets.

9.
The analysis of ocean and atmospheric datasets offers a unique set of challenges to scientists working in different application areas. These challenges include dealing with extremely large volumes of multidimensional data, supporting interactive visual analysis, ensemble exploration and visualization, exploring model sensitivities to inputs, mesoscale ocean feature analysis, predictive analytics, heterogeneity and complexity of observational data, representing uncertainty, and many more. Researchers across disciplines collaborate to address such challenges, which has led to significant research and development advances in ocean and atmospheric sciences, and also in several relevant areas such as visualization and visual analytics, big data analytics, machine learning and statistics. In this report, we perform an extensive survey of research advances in the visual analysis of ocean and atmospheric datasets. First, we survey the task requirements by conducting interviews with researchers, domain experts, and end users working with these datasets on a spectrum of analytics problems in the domain of ocean and atmospheric sciences. We then discuss existing models and frameworks related to data analysis, sense-making, and knowledge discovery for visual analytics applications. We categorize the techniques, systems, and tools presented in the literature based on the taxonomies of task requirements, interaction methods, visualization techniques, machine learning and statistical methods, evaluation methods, data types, data dimensions and size, spatial scale and application areas. We then evaluate the identified task requirements against the categorized research and the existing models and frameworks of visual analytics to determine the extent to which they fulfill these task requirements, and identify the gaps in current research. In the last part of this report, we summarize the trends, challenges, and opportunities for future research in this area. (see http://www.acm.org/about/class/class/2012)

10.
Multi- and hyperspectral imaging and data analysis have been investigated in recent decades in the context of various fields of application, such as remote sensing or microscopic spectroscopy. However, recent developments in sensor technology and a growing number of application areas require a more generic view on data analysis that clearly extends current, domain-specific approaches. In this context, we address the problem of interactive exploration of multi- and hyperspectral data, consisting of (semi-)automatic data analysis and scientific visualization, in a comprehensive fashion. In this paper, we propose an approach that enables a generic interactive exploration and easy segmentation of multi- and hyperspectral data, based on characterizing spectra of an individual dataset, the so-called endmembers. Using the concepts of existing endmember extraction algorithms, we derive a visual analysis system where the characteristic spectra initially identified serve as input to interactively tailor a problem-specific visual analysis by means of visual exploration. An optional outlier detection improves the robustness of the endmember detection and analysis. Adequate system feedback on the costly unmixing procedure for the spectral data with respect to the current set of endmembers is ensured by a novel technique for progressive unmixing and view update, which is applied upon user modification. The progressive unmixing is based on an efficient prediction scheme applied to previous unmixing results. We present a detailed evaluation of our system in terms of confocal Raman microscopy, common multispectral imaging, and remote sensing.
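A minimal sketch of the linear unmixing step at the core of such a system: each pixel spectrum is modeled as a mixture of the endmember spectra, and the abundances are recovered by a least-squares solve. A plain `lstsq` stands in here for the paper's progressive scheme; the spectra are toy data.

```python
import numpy as np

# Two endmembers (characteristic spectra), four spectral bands each.
endmembers = np.array([
    [1.0, 0.8, 0.2, 0.1],
    [0.1, 0.3, 0.9, 1.0],
])

# Synthesize a pixel as a known mixture: 70% of the first endmember,
# 30% of the second.
abundances_true = np.array([0.7, 0.3])
pixel = endmembers.T @ abundances_true

# Unmix: solve pixel ≈ E^T a for the abundance vector a.
a, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
print(np.round(a, 3))  # [0.7 0.3]
```

In a progressive setting, the previous abundance image serves as a prediction so that only pixels poorly explained by the old endmember set need an immediate re-solve.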

11.
In many application fields, data analysts have to deal with datasets that contain many expressions per item. The effective analysis of such multivariate datasets is dependent on the user's ability to understand both the intrinsic dimensionality of the dataset as well as the distribution of the dependent values with respect to the dimensions. In this paper, we propose a visualization model that enables the joint interactive visual analysis of multivariate datasets with respect to their dimensions as well as with respect to the actual data values. We describe a dual setting of visualization and interaction in items space and in dimensions space. The visualization of items is linked to the visualization of dimensions with brushing and focus+context visualization. With this approach, the user is able to jointly study the structure of the dimensions space as well as the distribution of data items with respect to the dimensions. Even though the proposed visualization model is general, we demonstrate its application in the context of DNA microarray data analysis.

12.
One of the most prominent topics in climate research is the investigation, detection, and attribution of climate change. In this paper, we aim at identifying regions in the atmosphere (e.g., certain height layers) which can act as sensitive and robust indicators for climate change. We demonstrate how interactive visual data exploration of large amounts of multi-variate and time-dependent climate data enables the steered generation of promising hypotheses for subsequent statistical evaluation. The use of new visualization and interaction technology, in the context of a coordinated multiple views framework, allows us not only to identify these promising hypotheses, but also to efficiently narrow down parameters that are required in the process of computational data analysis. Two datasets, namely an ECHAM5 climate model run and the ERA-40 reanalysis incorporating observational data, are investigated. Higher-order information such as linear trends or signal-to-noise ratios is derived and interactively explored in order to detect and explore those regions which react most sensitively to climate change. As one conclusion from this study, we identify an excellent potential for generalizing our approach to other, similar application cases.
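A minimal sketch of the higher-order derived quantities mentioned above, computed for one synthetic per-gridcell time series: the linear trend from a least-squares fit, and a signal-to-noise ratio. The SNR definition here (total trend magnitude over residual spread) is one common choice, not necessarily the paper's exact formula.

```python
import numpy as np

# Synthetic yearly temperature series: a 0.02 K/yr warming trend plus
# an oscillating "noise" term.
years = np.arange(1960, 2000)
temps = 0.02 * (years - 1960) + np.sin(years)

# Linear trend via degree-1 least-squares fit.
slope, intercept = np.polyfit(years, temps, 1)

# Signal-to-noise ratio: trend accumulated over the record, divided by
# the spread of the residuals around the fitted line.
residuals = temps - (slope * years + intercept)
snr = abs(slope * (years[-1] - years[0])) / residuals.std()

print(round(slope, 3))
```

Computing these two numbers per grid cell and height layer yields exactly the kind of derived 3D field the abstract describes brushing and exploring.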

13.
How do we ensure the veracity of science? The act of manipulating or fabricating scientific data has led to many high-profile fraud cases and retractions. Detecting manipulated data, however, is a challenging and time-consuming endeavor. Automated detection methods are limited due to the diversity of data types and manipulation techniques. Furthermore, patterns automatically flagged as suspicious can have reasonable explanations. Instead, we propose a nuanced approach where experts analyze tabular datasets, e.g., as part of the peer-review process, using a guided, interactive visualization approach. In this paper, we present an analysis of how manipulated datasets are created and the artifacts these techniques generate. Based on these findings, we propose a suite of visualization methods to surface potential irregularities. We have implemented these methods in Ferret, a visualization tool for data forensics work. Ferret makes potential data issues salient and provides guidance on spotting signs of tampering and differentiating them from truthful data.
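A minimal illustration of one kind of artifact-surfacing check a data-forensics tool might include (this specific check is an assumption for illustration, not a documented Ferret feature): fabricated numbers often have suspiciously non-uniform terminal digits, whereas honest measurement noise tends to even them out.

```python
from collections import Counter

def terminal_digit_counts(values):
    """Count how often each terminal (last) digit occurs."""
    return Counter(str(v)[-1] for v in values)

# A suspiciously regular column: every value ends in 7.
fabricated = [17, 27, 37, 47, 57, 67, 77, 87]
counts = terminal_digit_counts(fabricated)
print(counts["7"])  # 8
```

A visualization would plot these digit frequencies per column so a reviewer can judge whether a skew has a reasonable explanation rather than auto-flagging it.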

14.
Geospatial datasets from satellite observations and model simulations are becoming more accessible. These spatiotemporal datasets are massive, which makes visualization in support of advanced analysis and decision making difficult. A challenge in visualizing massive geospatial datasets is identifying critical spatial and temporal changes reflected in the data while maintaining a high interactive rendering speed, even when data are accessed remotely. We propose a view-dependent, spatiotemporal, saliency-driven approach that facilitates the discovery of regions showing high levels of spatiotemporal variability and reduces the rendering intensity of interactive visualization. Our method is based on a novel definition of data saliency, a spatiotemporal tree structure to store visual saliency values, and a saliency-driven, view-dependent level-of-detail (LOD) control. To demonstrate its applicability, we have implemented the approach in an open-source remote visualization package and conducted experiments with spatiotemporal datasets produced by a regional dust storm simulation model. The results show that the proposed method may not stand out in every specific situation, but it consistently performs well across different settings and criteria.

15.
Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. Novel mechanisms include visualization of attributes of interval type and projection of numerical edge attributes to node attributes. We show how these extensions to the original ScatterDice system allow us to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data.
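A minimal sketch of projecting a numerical edge attribute onto nodes, as mentioned above: each actor receives an aggregate (here the mean, an illustrative choice) of the attribute over its incident edges, so the value can then be used on a node axis.

```python
from collections import defaultdict

# Weighted edges: (actor_u, actor_v, edge_attribute).
edges = [("a", "b", 3.0), ("a", "c", 1.0), ("b", "c", 2.0)]

# Collect each node's incident edge-attribute values.
incident = defaultdict(list)
for u, v, w in edges:
    incident[u].append(w)
    incident[v].append(w)

# Project: mean incident edge attribute per node.
node_weight = {n: sum(ws) / len(ws) for n, ws in incident.items()}
print(node_weight["a"])  # 2.0
```

Other aggregations (sum, max) are equally plausible; the choice determines what the derived node axis means in the scatterplot matrix.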

16.
Parallel coordinate plots (PCPs) are commonly used in information visualization to provide insight into multi-variate data. These plots help to spot correlations between variables. PCPs have been successfully applied to unstructured datasets of up to a few million points. In this paper, we present techniques to enhance the usability of PCPs for the exploration of large, multi-timepoint volumetric data sets, containing tens of millions of points per timestep. The main difficulties that arise when applying PCPs to large numbers of data points are visual clutter and slow performance, making interactive exploration infeasible. Moreover, the spatial context of the volumetric data is usually lost. We describe techniques for preprocessing using data quantization and compression, and for fast GPU-based rendering of PCPs using joint density distributions for each pair of consecutive variables, resulting in a smooth, continuous visualization. Also, fast brushing techniques are proposed for interactive data selection in multiple linked views, including a 3D spatial volume view. These techniques have been successfully applied to three large data sets: Hurricane Isabel (Vis'04 contest), the ionization front instability data set (Vis'08 design contest), and data from a large-eddy simulation of cumulus clouds. With these data, we show how PCPs can be extended to successfully visualize and interactively explore multi-timepoint volumetric datasets with an order of magnitude more data points.
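A minimal sketch of the joint-density idea: instead of drawing tens of millions of polylines, each pair of consecutive PCP axes is rendered from a binned joint density image. The bin count and the synthetic variables are illustrative; on the GPU this histogram would be accumulated per axis pair.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two synthetic, correlated variables on consecutive PCP axes.
pressure = rng.normal(size=n)
temperature = 0.8 * pressure + 0.2 * rng.normal(size=n)

# Quantize into a fixed grid: the joint density image between the two
# axes replaces n individual line segments.
bins = 64
density, x_edges, y_edges = np.histogram2d(pressure, temperature, bins=bins)
print(density.sum())  # 100000.0
```

Rendering then interpolates this density across the gap between the two axes, which is what yields the smooth, continuous look described in the abstract.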

17.
The problem of detecting community structures of a social network has been extensively studied over recent years, but most existing methods solely rely on the network structure and neglect the context information of the social relations. The main reason is that a context-rich network offers too much flexibility and complexity for automatic or manual modulation of the multifaceted context in the analysis process. We address the challenging problem of incorporating context information into the community analysis with a novel visual analysis mechanism. Our approach consists of two stages: interactive discovery of salient context, and iterative context-guided community detection. Central to the analysis process is a context relevance model (CRM) that visually characterizes the influence of a given set of contexts on the variation of the detected communities, and discloses the community structure in specific context configurations. The extracted relevance is used to drive an iterative visual reasoning process, in which the community structures are progressively discovered. We introduce a suite of visual representations to encode the community structures, the context, as well as the CRM. In particular, we propose an enhanced parallel coordinates representation to depict the context and community structures, which allows for interactive data exploration and community investigation. Case studies on several datasets demonstrate the efficiency and accuracy of our approach.

18.
In this paper, we present an approach to interactive out-of-core volume data exploration that has been developed to augment the existing capabilities of the LhpBuilder software, a core component of the European project LHDL. The requirements relate to importing, accessing, visualizing and extracting a part of a very large volume dataset by interactive visual exploration. Such datasets contain billions of voxels and, therefore, several gigabytes are required just to store them, which quickly surpasses the virtual address limit of current 32-bit PC platforms. We have implemented a hierarchical, bricked, partition-based, out-of-core strategy to balance the usage of main and external memory. A new indexing scheme is introduced, which permits the use of a multiresolution bricked volume layout with minimum overhead and also supports fast data compression. Using the hierarchy constructed in a pre-processing step, we generate a coarse approximation that provides a preview using direct volume visualization for large-scale datasets. A user can interactively explore the dataset by specifying a region of interest (ROI), which further generates a much more accurate data representation inside the ROI. If even more accuracy is needed inside the ROI, nested ROIs are used. The software has been constructed using the Multimod Application Framework, a VTK-based system; however, the approach can be adopted for other systems in a straightforward way. Experimental results show that the user can interactively explore large volume datasets such as the Visible Human Male/Female (with file sizes of 3.15/12.03 GB, respectively) on a commodity graphics platform with ease.
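A minimal sketch of what a bricked volume index does: a voxel coordinate maps to a brick id plus an offset inside that brick, so only bricks touched by the region of interest need to be resident in main memory. The 32-cubed brick size, the volume dimensions, and the linearization order are illustrative assumptions, not the paper's actual scheme.

```python
# Illustrative brick geometry.
BRICK = 32                        # brick edge length in voxels
DIMS = (512, 512, 1024)           # volume size in voxels (x, y, z)
BRICKS = tuple(d // BRICK for d in DIMS)  # bricks per axis

def brick_of(x, y, z):
    """Map a voxel coordinate to (brick id, offset within brick)."""
    bx, by, bz = x // BRICK, y // BRICK, z // BRICK
    brick_id = (bz * BRICKS[1] + by) * BRICKS[0] + bx
    offset = ((z % BRICK) * BRICK + (y % BRICK)) * BRICK + (x % BRICK)
    return brick_id, offset

print(brick_of(0, 0, 0))     # (0, 0)
print(brick_of(33, 0, 0))    # (1, 1)
```

An ROI query reduces to enumerating the brick ids its bounding box covers and paging in only those bricks, which is what keeps the working set below the 32-bit address limit.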

19.
The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques cover techniques that had been introduced until 2000 or concentrate only on graph layouts published until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process. We also present main open research challenges in this field.

20.
Interactive visual analysis of a patient's anatomy by means of computer-generated 3D imagery is crucial for diagnosis, pre-operative planning, and surgical training. The task of visualization is no longer limited to producing images at interactive rates, but also includes the guided extraction of significant features to assist the user in the data exploration process. An effective visualization module has to perform a problem-specific abstraction of the dataset, leading to a more compact and hence more efficient visual representation. Moreover, many medical applications, such as surgical training simulators and pre-operative planning for plastic and reconstructive surgery, require the visualization of datasets that are dynamically modified or even generated by a physics-based simulation engine.
