Visual Data Science and its role in Computational Medicine
Delft Data Science Seminar

Tuesday the 6th of February 10:00-17:00
Hall K, Faculty of Aerospace Engineering, Building 62
For a route description click here
Please register here.
The Delft Data Science Seminar: Visual Data Science and its role in Computational Medicine is the newest edition in the series of Delft Data Science seminars organized at TU Delft. These seminars address the challenges and opportunities emerging from large quantities of heterogeneous, complex, networked and dynamic data influencing virtually all socio-economic domains.
We, the Computer Graphics and Visualization Group of TU Delft, in cooperation with prof. Helwig Hauser from the Univ. of Bergen (Norway), organise this one-day workshop, which brings together new opportunities in data science and technology with visualisation and medicine, in areas such as neuroimaging and machine learning.
We are glad that eight high-profile speakers have accepted our invitation to talk at this workshop. They will comment on a variety of topics, including visual data science vs. classical science, computational vs. interactive approaches, and the role of the human in visual data science.
A panel discussion takes up these still open-ended questions, with the ultimate goal of stimulating further exchange and possibly the start of new scientific cooperation in this exciting new field of research.
Join us on Tuesday the 6th of February from 10:00-17:00 for the Delft Data Science Seminar – Visual Data Science and its role in Computational Medicine in Hall K of the Aerospace Engineering Faculty, building 62.
You can register here.
Workshop program:
09h30 welcome & registration
10h00 opening (DDS / TU Delft, Univ. of Bergen)
10h10 session 1: computational & visual solutions in biomedicine (chair: H. Hauser)
Arvid Lundervold on Computational medicine and machine learning – opportunities and challenges
Boudewijn Lelieveldt on Visual analytics for spatially resolved -omics data: from single cell to tissue and back
11h10 coffee break
11h30 session 2: neuroimaging and visual data science (chair: A. Vilanova)
Wiro Niessen on Biomedical Imaging and Genetic (BIG) Data Analytics, applications in dementia and oncology
Jos Roerdink on Computational and visual analysis of brain data
12h30 lunch
13h30 session 3: data science with statistics & machine learning (chair: E. Eisemann)
Peter Filzmoser on Robust elastic net (logistic) regression for high dimensional data
Jan van Gemert on Active Decision Boundary Annotation with Deep Generative Models
14h30 coffee break
14h50 session 4: visual data science with visualization & visual analytics (chair: H. Hauser)
Jean-Daniel Fekete on The role of visualization in the hypothetico-deductive method
Jack van Wijk on Understanding models: a challenge for visual analytics
16h00 panel discussion
The panel will bring together different views on relevant aspects of visual data science: visual data science vs. classical science (the paradigm shift in how hypotheses are identified), computational vs. interactive visual approaches (challenges and opportunities), and the role of the human in visual data science.
Panelists:
- Boudewijn Lelieveldt
- Arvid Lundervold
- Jean-Daniel Fekete
- Erik Tews (University of Twente) (https://people.utwente.nl/e.tews)
16h45 closing (DDS / TU Delft, Univ. of Bergen)
17h00 drinks
Prof. Arvid Lundervold, Bergen
Talk title: Computational medicine and machine learning – opportunities and challenges
Abstract: “Computational medicine” (CM) is a new field of science that can be defined as the application of methods from engineering, mathematics, and computational sciences to improve our understanding and treatment of human disease. Medicine is increasingly a target for the relatively new fields of Data Science (‘producing insights’), Machine Learning (‘producing predictions’) and Artificial Intelligence (‘producing actions’), and these disciplines are becoming important components of CM. CM is characterized by being multi-scale (molecule to man, microseconds to years), sub-specialized (from computational anatomy to computational psychiatry), often dealing with heterogeneous, longitudinal, and high dimensional data, and addressing high-content, high-throughput data from DNA sequencers and imaging scanners as well as data from bio-banks and registers. CM is employing an impressive range of mathematical, statistical, and computational methods, and has a huge potential within personalized medicine, disease prevention, and therapy. In this talk I will give some perspectives on the opportunities of CM and ML, partly illustrated from our own research within imaging, and also discuss some of the challenges regarding adoption and use in medical practice, infrastructure issues, training and education, validation and performance measures, and the importance of open science and reproducible research in the field.
Prof. Boudewijn Lelieveldt, Leiden
Talk title: Visual analytics for spatially resolved -omics data: from single cell to tissue and back
Abstract: This presentation discusses novel visual analytics techniques for spatially resolved and single cell -omics data. Focusing on the non-linear embedding technique tSNE, we developed Dual tSNE and linked-view tSNE to enable fast and interactive identification of functionally interesting gene sets in relation to brain regions from the Allen Brain Atlases. Moreover, we developed spatially mapped tSNE that integrates spatial image information in the tSNE map analysis. Finally, we developed Hierarchical Stochastic Neighbor Embedding, that scales to millions of cells. Applications of these techniques will be highlighted in three application domains: 1) the web-portal Brainscope.nl enables mining the adult and developmental Allen Human Brain atlases through linked, all-in-one visualization of genes and samples across the whole brain and genome, 2) the discovery of prognostic molecular biomarkers in cancer from Imaging Mass Spectrometry data, and 3) the PC application Cytosplore for fast and interactive immune phenotyping of single cell -omics data, enabling the identification of rare, disease associated cell types at full data resolution.
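The tSNE variants named above (Dual tSNE, spatially mapped tSNE, Hierarchical Stochastic Neighbor Embedding) are the speaker's own tools; as a point of reference, a minimal sketch of a standard tSNE embedding of high-dimensional, single-cell-like data can be written with scikit-learn, assuming synthetic stand-in data:

```python
# Minimal sketch of a standard tSNE embedding of high-dimensional
# "-omics"-like data using scikit-learn. This is NOT the speaker's
# Dual tSNE / HSNE tooling; the data here is synthetic.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Synthetic stand-in for single-cell expression data:
# 300 "cells", 50 "genes", three latent clusters.
clusters = [rng.normal(loc=c, scale=0.5, size=(100, 50))
            for c in (0.0, 3.0, 6.0)]
X = np.vstack(clusters)

# Non-linear embedding to 2D for visual exploration.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)
print(embedding.shape)  # (300, 2)
```

In an interactive setting such as the ones described in the talk, the resulting 2D map would be linked to other views (brain regions, spatial images) rather than inspected in isolation.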
Prof. Wiro Niessen, Rotterdam
Talk title: Biomedical Imaging and Genetic (BIG) Data Analytics, applications in dementia and oncology
Abstract: Big data are dramatically increasing the possibilities for prevention, cure and care, and changing the landscape of the healthcare system. In this presentation I will show examples of possible large benefits of big data analytics in healthcare. As a first example I will address the challenge of dementia. With the ageing society, there is an urgent need to develop new preventive and therapeutic strategies for common age-related diseases, such as Alzheimer’s disease, the most common form of dementia. Neuroimaging plays an increasingly important role here, as it helps in understanding disease etiology and diagnosing different forms of dementia. In this presentation I will show how large-scale data analytics in longitudinal population neuroimaging studies, especially when combining imaging with other clinical, biomedical and genetic data, provides a unique angle to study the brain, both in normal ageing and disease. I will also show how it can be the basis of new methods for disease detection, diagnosis, and prognosis in clinical practice. As a second example I will show how radiomics approaches can be used to improve tumor characterization and therapy selection and guidance in oncology. Finally, I will briefly discuss some of the promises and challenges of using deep learning in the field of image analysis and imaging genetics.
Prof. Jos Roerdink, Groningen
Talk title: Computational and visual analysis of brain data
Abstract: In this talk, I will discuss some recent work on the visualization and analysis of brain patterns obtained from neuroimaging data, and their use in the understanding of brain (mal)functioning. The first part of the talk is devoted to the GLIMPS ("GLucose IMaging in ParkinsonismS") project at the University of Groningen. This project concerns the creation of a national database of FDG-PET scans which reflect the glucose consumption of the brain in patients with neuro-degenerative diseases. A combination of visualization and machine learning methods has been developed for associating brain patterns to various types and stages of neuro-degenerative disease. Results will be shown and discussed. In the second part of the talk I will discuss the visualization and analysis of brain coherence networks extracted from multichannel EEG recordings, both for static and dynamic networks.
URL: http://www.cs.rug.nl/svcg/
Prof. Peter Filzmoser, Vienna
Talk title: Robust elastic net (logistic) regression for high dimensional data
Abstract: Fully robust versions of the elastic net estimator are introduced for linear and logistic regression. The algorithms to compute the estimators are based on the idea of repeatedly applying the non-robust classical estimators to data subsets only. It is shown how outlier-free subsets can be identified efficiently, and how appropriate tuning parameters for the elastic net penalties can be selected. A final reweighting step improves the efficiency of the estimators. Simulation studies compare with non-robust and other competing robust estimators and reveal the superiority of the newly proposed methods. This is also supported by a reasonable computation time and by good performance in real data examples.
Note: This is joint work with
F.S. Kurnaz, Yildiz Technical University, Turkey
I. Hoffmann, Vienna University of Technology, Austria
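The core idea in the abstract, applying a classical estimator to trimmed subsets and reweighting, can be illustrated crudely in a few lines. This is only a sketch of the trimming-and-refitting principle, not the authors' actual algorithm or its subset-search strategy:

```python
# Crude illustration of the principle behind robust elastic net regression:
# fit a classical elastic net, drop observations with large residuals, and
# refit on the remaining "outlier-free" subset. NOT the authors' method.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                               # sparse true coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)
y[:20] += 25.0                               # gross outliers in the response

# Initial (non-robust) fit on all data.
init = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
resid = np.abs(y - init.predict(X))

# Reweighting step: keep the 80% of observations with smallest residuals.
keep = resid <= np.quantile(resid, 0.8)
robust = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X[keep], y[keep])
```

The real estimators described in the talk search over many candidate subsets and tune the penalty parameters robustly, rather than trimming once from a single contaminated fit.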
Prof. Jan van Gemert, Delft
Talk title: Active Decision Boundary Annotation with Deep Generative Models
Abstract: This talk is on active learning where the goal is to reduce the data annotation burden by visually interacting with a (human) oracle during training. Standard active learning methods ask the oracle to annotate data samples. Instead, we take a profoundly different approach: we ask for annotations of the decision boundary. We achieve this using a deep generative model to create novel instances along a 1d line. A point on the decision boundary is revealed where the instances change class. Experimentally we show on three data sets that our method can be plugged into other active learning schemes, that human oracles can effectively annotate points on the decision boundary, that our method is robust to annotation noise, and that decision boundary annotations improve over annotating data samples.
Reference: http://jvgemert.github.io/pub/huijserICCV17ActiveBoundAnnoGAN.pdf
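The core mechanism, walking along a 1-D line between two points of opposite class and locating where the label flips, can be sketched in a toy setting. The paper interpolates in the latent space of a deep generative model and queries a human oracle; this sketch substitutes raw feature space and a fitted classifier for both:

```python
# Toy sketch of decision boundary annotation: bisect the segment between
# two points of opposite class until the predicted label flips. The actual
# method interpolates in a deep generative model's latent space and asks a
# human oracle; here a logistic regression classifier plays the oracle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, size=(100, 2)),
               rng.normal(2, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

def boundary_point(a, b, predict, steps=30):
    """Bisect the segment a -> b until the predicted class flips."""
    label_a = predict(a)
    for _ in range(steps):
        mid = (a + b) / 2
        if predict(mid) == label_a:
            a = mid          # mid is still on a's side of the boundary
        else:
            b = mid          # mid is on the other side; shrink toward it
    return (a + b) / 2

predict = lambda x: clf.predict(x.reshape(1, -1))[0]
pt = boundary_point(np.array([-2.0, -2.0]), np.array([2.0, 2.0]), predict)
# pt lies (numerically) on the classifier's decision boundary.
```

Such boundary points, once annotated, constrain the classifier far more directly than labels on individual data samples.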
Prof. Jean-Daniel Fekete, Paris
Talk title: The role of visualization in the hypothetico-deductive method
Abstract: Visualization is becoming more and more popular in the natural sciences, due to its effectiveness at exploring data and revealing insights. Yet, its role is not clear within the standard hypothetico-deductive method, where a hypothesis should be present at the starting point of the scientific investigation. I will show examples of well-crafted visualizations that exhibit noticeable visual patterns indicating meaningful properties in the data, and how unexpected properties can become starting hypotheses to validate. Recognizing and clarifying the role of visualization as a proper tool to help researchers generate hypotheses from data is an important step towards its acceptance in the natural sciences, limiting some biases from researchers, but also requiring care to avoid misinterpreting artifacts. My talk will show many examples, sometimes using non-standard visualizations which, although needing a bit of time to learn, can reveal patterns in unexplored aspects of data. I will do my best to convince the audience that the effectiveness of visualization for generating hypotheses is a strong argument for increasing the visualization literacy of researchers in the natural sciences.
Prof. Jack van Wijk, Eindhoven
Talk title: Understanding models: a challenge for visual analytics
Abstract: Visual analytics concerns integration of automated and visual methods, such that we can take advantage of the strengths of humans and machines. The concept is great, but there are still many challenges ahead. An important one is to understand how more or less sophisticated models come to conclusions on the data analyzed. Algorithms are used increasingly for many applications to make decisions and to support decision makers. Can we understand and trust the outcome? What reasoning has been followed, what data has been used? I will show a number of examples of work of my group to make black boxes more transparent: on visualization of decision trees, on explanation of recommendations for coast guards, and on fraud detection for banks, but the overall challenge is still wide open.