Visual transformation for interactive spatiotemporal data mining

Authors: Yang Cai, Richard Stumpf, Timothy Wynne, Michelle Tomlinson, Daniel Sai Ho Chung, Xavier Boutonnier, Matthias Ihmig, Rafael Franco, Nathaniel Bauernfeind

Abstract

Analytical models aim to reveal the inner structure, dynamics, or relationships of things. However, they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive, but limited in depth, dimension, and resolution. The purpose of this study is to bridge that gap with transformation algorithms that map data from an abstract space to an intuitive one, including shape correlation, periodicity, multiphysics, and spatial Bayesian transformations. We tested this approach with an oceanographic case study. We found that interactive visualization increases robustness in object tracking and positive detection accuracy in object prediction. We also found that the interactive method enables the user to process image data in less than 1 min per image, versus 30 min per image manually. As a result, our test system can handle at least 10 times more data sets than traditional manual analysis. The results also suggest that minimal human interaction with appropriate computational transformations or cues may significantly increase overall productivity.
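Of the transformations named in the abstract, shape correlation is the most concrete. The sketch below is purely illustrative and not the authors' implementation: it shows one common way a shape-correlation step can be realized, by sliding a template shape over a 2D field (for example, a satellite-derived image) and computing a normalized cross-correlation map whose peak locates the tracked feature. The function name, array sizes, and data are assumptions made for the example.

```python
import numpy as np

def shape_correlation(field: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Slide `template` over `field`; return a map of normalized correlation scores."""
    th, tw = template.shape
    fh, fw = field.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.full((fh - th + 1, fw - tw + 1), -1.0)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = field[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = t_norm * np.sqrt((p ** 2).sum())
            if denom > 0:
                # Normalized cross-correlation in [-1, 1]; 1 means a perfect shape match.
                scores[i, j] = float((p * t).sum() / denom)
    return scores

# Usage: the argmax of the score map gives the best match of the tracked shape.
field = np.random.rand(64, 64)                 # stand-in for one image frame
template = field[20:30, 25:35].copy()          # stand-in for the shape seen in the previous frame
scores = shape_correlation(field, template)
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print("best match at", (i, j), "score", round(scores[i, j], 3))
```

In an interactive setting, a map like this can be rendered as an overlay so the user confirms or corrects the match rather than locating the feature manually, which is consistent with the productivity gains the abstract reports.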

Keywords: Vision, Visualization, Interaction, Data Mining


Paper URL: https://doi.org/10.1007/s10115-007-0075-5