13-17 May 2019
Daejeon, Republic of Korea

Graphic interactive environment for remote data analysis and visualization with a view on ITER

16 May 2019, 13:30
2h 30m
Daejeon, Republic of Korea


Board: P/5-1
Poster: Remote Participation and Virtual Laboratory


Dr Ernesto Fabregas (Universidad Nacional de Educación a Distancia)


Each ITER discharge (30 minutes long) is expected to produce O(10^5) signals. This vast quantity of data must be stored and analyzed on computers with large storage capacity and fast processing units, and these powerful computers and data centers will usually be located far from the scientists who use them. Access to the data, and some types of analysis, may also be restricted for security reasons. At the same time, scientists must be able to explore the data with great flexibility and to have diverse options for analysis and visualization. Several suitable general-purpose platforms exist, providing a large variety of libraries for this work. Furthermore, the fusion community constantly produces new techniques and algorithms specific to this field. Learning all the programming involved and keeping track of new developments can be a very time-demanding task.

We introduce a client-server software tool that separates the tasks of handling data, coding algorithms, doing the computation and preparing visualizations (handled by the server and specialized software engineers), from those of conducting the exploration, designing the analysis and interpreting the results (reserved for fusion specialists).

A web-accessible server, run by the institution hosting the data and offering the service, grants access to authenticated scientists and provides a variety of fusion-specific software algorithms for handling, processing, analyzing and visualizing the data. These algorithms are encapsulated in visual elements with customizable parameters that can be connected to create a graph defining a flow of data. Once the graph is completed, the server runs the (automatically generated) underlying code to produce the analysis, returning an HTML page with results.
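The element-and-graph idea described above can be illustrated with a minimal sketch. All names (`Element`, `Graph`, `connect`) and the toy load/smooth/mean flow are hypothetical, chosen only to show how parameterized elements wired output-to-input can be executed in data-flow order; they are not the actual server API.

```python
# Hypothetical sketch: analysis "elements" with customizable parameters
# are wired into a graph, and the graph is run in dependency order.
from graphlib import TopologicalSorter

class Element:
    """A processing step wrapping an algorithm with fixed parameters."""
    def __init__(self, name, func, **params):
        self.name = name
        self.func = func
        self.params = params
        self.inputs = []          # upstream elements feeding this one

    def connect(self, upstream):
        """Connect the output of `upstream` to this element's input."""
        self.inputs.append(upstream)
        return self

class Graph:
    """Executes elements in topological (data-flow) order."""
    def run(self, elements):
        deps = {e: set(e.inputs) for e in elements}
        results = {}
        for e in TopologicalSorter(deps).static_order():
            args = [results[u] for u in e.inputs]
            results[e] = e.func(*args, **e.params)
        return results

# Toy flow: load a signal -> moving-average smoothing -> mean value
load = Element("load", lambda: [1.0, 4.0, 2.0, 8.0])
smooth = Element(
    "smooth",
    lambda xs, window: [
        sum(xs[max(0, i - window + 1):i + 1]) / (i - max(0, i - window + 1) + 1)
        for i in range(len(xs))
    ],
    window=2,
).connect(load)
mean = Element("mean", lambda xs: sum(xs) / len(xs)).connect(smooth)

out = Graph().run([load, smooth, mean])
print(out[mean])   # → 2.875
```

In the real tool this wiring is done visually and the underlying code is generated automatically; the sketch only shows why a completed graph fully determines the computation.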

A modern HTML client, runnable on different devices, allows scientists to connect to this service remotely. The user builds the graph by selecting among well-documented elements (the elements and data offered may depend on the task or the user's security clearance), adjusting their properties, and connecting the output of one element to the input of another to form the desired data flow. This task requires no code, relieving scientists of knowing the precise syntax and every possible parameter of a given algorithm, and minimizing syntax errors that could waste execution time.

Once the graph is complete, scientists click the run button and the server does the rest. Users can even disconnect and wait for a server notification when processing has finished, receiving the results in HTML format. Repeating the analysis with different parameters, or extending it with new elements, only requires editing the graph, not a long program.

We demonstrate our tool with examples of non-trivial data analysis built from simple graphs of elements, producing advanced analysis and visualizations with no client-side programming. In particular, we present results of image classification for the TJ-II Thomson scattering diagnostic, implemented as a five-class classifier.
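The five-class decision mentioned above can be sketched, in a deliberately minimal form, as a nearest-centroid classifier over feature vectors. This is not the actual TJ-II implementation, and the class labels and two-dimensional features below are purely illustrative; the point is only the kind of multi-class decision such a graph element could wrap.

```python
# Illustrative nearest-centroid five-class classifier on toy feature
# vectors (NOT the actual TJ-II Thomson scattering method).

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: {class_label: [feature_vector, ...]} -> centroid model."""
    return {label: centroid(vs) for label, vs in samples.items()}

def classify(model, x):
    """Assign x to the class whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# Five toy classes with two illustrative features each
samples = {
    "class_A": [[0.0, 0.1], [0.1, 0.0]],
    "class_B": [[1.0, 0.1], [0.9, 0.2]],
    "class_C": [[0.1, 1.0], [0.2, 0.9]],
    "class_D": [[1.0, 1.0], [0.9, 1.1]],
    "class_E": [[0.5, 0.5], [0.6, 0.4]],
}
model = train(samples)
print(classify(model, [0.95, 0.15]))   # → class_B
```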

Primary authors

Dr Francisco Esquembre (Universidad de Murcia)
Dr Sebastián Dormido-Canto (UNED)
Dr Jesús Vega (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
Dr Gonzalo Farias (Pontificia Universidad Católica de Valparaíso)
Dr Jesús Chacón (Universidad Complutense)
Dr Ernesto Fabregas (Universidad Nacional de Educación a Distancia)

Presentation Materials