TVCG Session on Data Visualization

  • Full Conference Pass (FC)
  • Full Conference One-Day Pass (1D)
  • Basic Conference Pass (BC)
  • Student One-Day Pass (SP)

Date: Thursday, December 6th
Time: 2:15pm - 2:41pm
Venue: G402 (4F, Glass Building)


Summary: Traditional fisheye views for exploring large graphs introduce substantial distortions that often decrease the readability of paths and other structures of interest. To overcome these problems, we propose a framework for structure-aware fisheye views. Using edge orientations as constraints for graph layout optimization allows us not only to reduce spatial and temporal distortions during fisheye zooms but also to improve the readability of the graph structure. Furthermore, the framework enables us to optimize fisheye lenses for specific tasks and to design a family of new lenses: polyfocal, cluster, and path lenses. A GPU implementation lets us process large graphs with up to 15,000 nodes at interactive rates. A comprehensive evaluation, a user study, and two case studies demonstrate that our structure-aware fisheye views improve layout readability and user performance.
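For context, the traditional fisheye distortion that structure-aware views improve upon can be sketched in a few lines. This is a generic Sarkar-and-Brown-style radial magnification, not the authors' constraint-based layout optimization; the function name and parameters are illustrative:

```python
import numpy as np

def fisheye(points, focus, m=3.0, r_max=1.0):
    """Classic graphical fisheye: points within r_max of the focus are
    radially magnified; points at distance r_max stay fixed."""
    points = np.asarray(points, dtype=float)
    focus = np.asarray(focus, dtype=float)
    v = points - focus
    r = np.linalg.norm(v, axis=-1, keepdims=True)
    d = np.clip(r / r_max, 0.0, 1.0)            # normalized distance in [0, 1]
    g = (m + 1.0) * d / (m * d + 1.0)           # distortion function: g(0)=0, g(1)=1
    scale = np.divide(g * r_max, r, out=np.ones_like(r), where=r > 1e-12)
    return focus + v * scale
```

Near the focus the scale factor approaches m + 1, which is exactly the kind of strong local magnification that distorts paths in plain fisheye views and that the proposed framework constrains.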

Author(s)/Speaker(s): Yanyan Wang

Author(s)/Speaker(s) Bio:
Yanyan Wang is a master's student at Shandong University, supervised by Yunhai Wang and Baoquan Chen. Her research interest is graph visualization, and she has published two TVCG papers on the topic.

Date: Thursday, December 6th
Time: 2:41pm - 3:07pm
Venue: G402 (4F, Glass Building)


Summary: In the past decade, we have seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often very challenging for users to understand why the model makes a particular prediction. This black-box nature of RNNs can impede their wide adoption in clinical practice. Furthermore, there is no established method to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution that increases the interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following an iterative design process with the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a recently proposed, interpretable RNN-based model called RETAIN with visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how the RNN models EMR data, using real medical records of patients with heart failure, cataract, or dermatological symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers who aim to design more interpretable and interactive visual analytics tools for RNNs.
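RETAIN's interpretability rests on two levels of attention: visit-level weights (alpha) and variable-level weights (beta), which combine into per-event contribution scores that a tool like RetainVis can visualize. The following is a minimal numpy sketch of that scoring idea only, with hypothetical names and toy weights; it is not the RetainVis implementation:

```python
import numpy as np

def retain_contributions(x, W_emb, alpha, beta, w_out):
    """RETAIN-style contribution scores: for visit t and medical code j,
    contribution[t, j] = alpha[t] * x[t, j] * w_out @ (beta[t] * W_emb[:, j]).
    Large positive scores mark events that pushed the prediction up."""
    T, F = x.shape
    contrib = np.zeros((T, F))
    for t in range(T):
        for j in range(F):
            contrib[t, j] = alpha[t] * x[t, j] * (w_out @ (beta[t] * W_emb[:, j]))
    return contrib
```

In the real model, alpha and beta are produced by two RNNs running over the visit sequence; here they would be supplied as precomputed arrays of shape (T,) and (T, d) respectively.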

Author(s)/Speaker(s): Jaegul Choo

Author(s)/Speaker(s) Bio:
Jaegul Choo (https://sites.google.com/site/jaegulchoo/ ) is an assistant professor in the Dept. of Computer Science and Engineering at Korea University. He was a research scientist at Georgia Tech from 2011 to 2015, where he also received his M.S. in 2009 and Ph.D. in 2013. His research covers broad areas including data mining, machine learning, visual analytics, computer vision, and natural language processing, and his work has been published in premier venues such as KDD, WWW, WSDM, CVPR, ECCV, EMNLP, AAAI, IJCAI, ICDM, ICWSM, SDM, TKDD, DMKD, KAIS, IEEE VIS, EuroVis, CHI, TVCG, CFG, and CG&A. He earned the Best Student Paper Award at ICDM in 2016, the NAVER Young Faculty Award in 2015, the Outstanding Research Scientist Award at Georgia Tech in 2015, and the Best Poster Award at IEEE VAST (as part of IEEE VIS) in 2014.

Date: Thursday, December 6th
Time: 3:07pm - 3:33pm
Venue: G402 (4F, Glass Building)


Summary: We propose a dynamically load-balanced algorithm for parallel particle tracing, which periodically attempts to evenly redistribute particles across processes based on k-d tree decomposition. Each process is assigned (1) a statically partitioned, axis-aligned data block that partially overlaps with neighboring blocks in other processes and (2) a dynamically determined k-d tree leaf node that bounds the active particles for computation; the bounds of the k-d tree nodes are constrained by the geometries of the data blocks. Given a certain degree of overlap between blocks, our method balances the number of particles per process as much as possible. Compared with other load-balancing algorithms for parallel particle tracing, the proposed method does not require any preanalysis, does not use any heuristics based on flow features, does not make any assumptions about seed distribution, does not move any data blocks during the run, and does not need any master process for work redistribution. Based on a comprehensive performance study up to 8K processes on a Blue Gene/Q system, the proposed algorithm outperforms baseline approaches in both load balance and scalability on various flow visualization and analysis problems.
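The core balancing idea, recursive median splits so that each k-d tree leaf holds roughly the same number of particles, can be sketched as follows. This is a serial, illustrative version; the paper's algorithm runs in parallel across processes and additionally constrains leaf bounds by the overlapping data-block geometries:

```python
import numpy as np

def kdtree_partition(particles, n_leaves):
    """Recursively split particle positions at the median of the widest
    axis until n_leaves groups exist; each group maps to one process."""
    groups = [np.asarray(particles, dtype=float)]
    while len(groups) < n_leaves:
        groups.sort(key=len, reverse=True)     # always split the largest group
        g = groups.pop(0)
        axis = np.argmax(g.max(axis=0) - g.min(axis=0))   # widest spatial extent
        order = np.argsort(g[:, axis])
        mid = len(g) // 2                      # median split balances counts
        groups += [g[order[:mid]], g[order[mid:]]]
    return groups
```

Because every split halves the particle count of the largest group, leaf populations stay within one particle of each other when n_leaves is a power of two, which is the load-balance property the k-d tree decomposition is after.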

Author(s)/Speaker(s): Xiaoru Yuan

Author(s)/Speaker(s) Bio:
Xiaoru Yuan is a tenured faculty member in the School of Electronics Engineering and Computer Science at Peking University, where he served as the vice director of the Information Science Center. He received his Ph.D. in computer science from the University of Minnesota, Twin Cities, in 2006. His primary research interests are visualization and visual analytics. His co-authored work on high dynamic range volume visualization received the Best Application Paper Award at the IEEE Visualization 2005 conference. He and his student teams have won awards 11 times in the IEEE VAST Challenge.

Date: Thursday, December 6th
Time: 3:33pm - 3:59pm
Venue: G402 (4F, Glass Building)


SRVis: Towards Better Spatial Integration in Ranking Visualization

Summary: Interactive ranking techniques have substantially improved analysts' ability to make judicious and informed decisions based on multiple criteria. However, existing techniques cannot satisfactorily support the analysis tasks involved in ranking large-scale spatial alternatives, such as selecting optimal locations for chain stores, where the complex spatial contexts are essential to the decision-making process. Limitations observed in prior attempts to integrate rankings with spatial contexts motivate us to develop a context-integrated visual ranking technique. Based on a set of generic design requirements summarized in collaboration with domain experts, we propose SRVis, a novel spatial ranking visualization technique that supports efficient spatial multi-criteria decision-making by addressing three major challenges in this context integration: a) the presentation of spatial rankings and contexts, b) the scalability of the rankings' visual representations, and c) the analysis of context-integrated spatial rankings. Specifically, we encode massive rankings and their causes with scalable matrix-based visualizations and stacked bar charts based on a novel two-phase optimization framework that minimizes information loss, and we adopt flexible spatial filtering and intuitive comparative analysis to enable in-depth evaluation of the rankings and assist users in selecting the best spatial alternative. The effectiveness of the proposed technique is evaluated and demonstrated with an empirical study of the optimization methods, two case studies, and expert interviews.
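At its simplest, ranking spatial alternatives by multiple criteria reduces to normalizing each criterion and sorting by a weighted sum. The sketch below shows only that textbook baseline, which SRVis's matrix visualizations let users inspect and steer; it is not the paper's two-phase optimization:

```python
import numpy as np

def rank_alternatives(criteria, weights):
    """Weighted-sum multi-criteria ranking. criteria is (n_sites, n_criteria),
    higher is better; returns site indices ordered from best to worst."""
    criteria = np.asarray(criteria, dtype=float)
    # min-max normalize each criterion so the weights are comparable
    lo, hi = criteria.min(axis=0), criteria.max(axis=0)
    norm = (criteria - lo) / np.where(hi > lo, hi - lo, 1.0)
    scores = norm @ np.asarray(weights, dtype=float)
    return np.argsort(-scores)
```

A visual ranking tool then has to show not just this final order but the per-criterion terms behind each score, which is what the stacked bar charts in SRVis encode.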

Author(s)/Speaker(s): Di Weng

Author(s)/Speaker(s) Bio:
Di Weng is a third-year Ph.D. student at the State Key Lab of CAD&CG, Zhejiang University, supervised by Prof. Yingcai Wu. He has participated in many urban visualization research projects and has so far published one first-author CHI paper and two first- and second-author TVCG papers. His research interest mainly lies in the visual analytics of location selection with large-scale urban data.
