
5–8 Jul 2021
Europe/Vienna timezone
The meeting will take place virtually. Information on remote participation will be sent to all in due time.

Reinforcement learning for building nuclear fusion classifiers from scratch & Deep learning models to generate realistic new data in nuclear fusion

8 Jul 2021, 14:30
10m
Oral | Advanced Computing and Massive Data Analysis | Artificial intelligence

Speakers

Gonzalo Farias (Pontificia Universidad Católica de Valparaiso), Mr Ricardo Correa (Pontificia Universidad Católica de Valparaiso), Ms Heilym Ramirez (Pontificia Universidad Católica de Valparaiso)

Description

Data-driven models aim to find complex relations among different quantities without taking into account the physical mechanisms responsible for the underlying behaviour. From an engineering point of view, this is not an issue whenever the detection is correct. However, such models require large databases for training purposes. The requirement of a high number of observations to build these models is an important problem in view of ITER and the next-generation tokamak DEMO. For instance, ITER and DEMO cannot wait for thousands of discharges (i.e., after a whole campaign) to have a reliable disruption prediction system. Therefore, one important challenge is to develop data-driven models from scratch.
Developing pattern classifiers under data-scarce conditions (from scratch) is closely related to training data-driven models with unbalanced data. In the unbalanced classification problem, the number of observations differs greatly between classes. This issue can be addressed with techniques described in the literature that increase the number of samples of the minority class (such as SMOTE), but unfortunately such techniques rapidly increase the number of false positives, which discourages their use given the ITER requirements in terms of false alarm rates. That is why new approaches to building models from scratch need to be studied. In this work we propose to build data-driven models from scarce nuclear fusion databases by using reinforcement learning.
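For reference, a minimal sketch of the kind of oversampling mentioned above, using the SMOTE implementation from the imbalanced-learn library on synthetic data; the dataset, class weights and parameters are illustrative assumptions, not the fusion data used in this work.

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for an unbalanced dataset (not fusion data).
X, y = make_classification(n_samples=200, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))                         # minority class has few samples
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))                      # classes balanced with synthetic samples
```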
Reinforcement learning (RL) is an approach in which intelligent systems, called agents, are trained to constantly adapt to their environment. Training is done by giving positive (negative) feedback when the performance of the agent is correct (incorrect). Unlike traditional machine learning algorithms, RL training is based on rewards or punishments to learn the best actions to take. Such interaction does not happen with the training data, but with the environment, through the optimization of an objective function (the reward function).
In our approach, the RL training of the model considers a cost function that takes correct classifications (i.e., hits on positive and negative samples) as positive feedback. Conversely, the pattern classification system (the agent) receives negative feedback (punishment) when it misclassifies a sample. In order to validate the model, we have selected the image dataset from the Thomson Scattering (TS) diagnostic of the TJ-II stellarator. The TS diagnostic provides temperature and density profiles by using five different classes of images, which capture spectra of laser light scattered by the plasma under different conditions. The results show that RL is a simple way to build pattern classifiers from scarce data in nuclear fusion.
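A minimal sketch of the reward-driven training idea described above, using a simple softmax policy updated with a policy-gradient rule (+1 for a hit, -1 for a misclassification); the features, dataset and hyperparameters are illustrative assumptions and do not reproduce the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 20, 5           # e.g. image features and five TS image classes (assumed sizes)
W = np.zeros((n_classes, n_features))   # linear policy: class scores = W @ x
lr = 0.05

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy "scarce" dataset: a handful of labelled observations (synthetic placeholders).
X = rng.normal(size=(50, n_features))
y = rng.integers(0, n_classes, size=50)

for episode in range(200):
    i = rng.integers(len(X))                       # environment presents one observation
    x, label = X[i], y[i]
    probs = softmax(W @ x)
    action = rng.choice(n_classes, p=probs)        # agent chooses a class (action)
    reward = 1.0 if action == label else -1.0      # +1 for a hit, -1 (punishment) for a misclassification
    # REINFORCE-style policy-gradient step: reinforce rewarded actions.
    grad = -probs[:, None] * x[None, :]
    grad[action] += x
    W += lr * reward * grad
```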


Acquiring new data in nuclear fusion devices is expensive. For many reasons, access to new discharges is an important issue, particularly because every shot is a physical experiment that requires considerable resources (economic, human, material, instrumentation and time, among others). That is why it would be interesting to study the possibility of building probabilistic models that learn to generate new fusion data, considering existing experimental data and the relationships between the signals to be generated.
In this work, we propose to build a data generator for nuclear fusion databases by applying generative deep learning models. A generative model describes, in terms of a probabilistic model, how a dataset is generated. By sampling from such a model, it is possible to generate new and realistic data.
It is important to note that generative models are probabilistic rather than deterministic. If a model simply computes the average of the existing data, it should not be considered generative, because it always outputs the same data. Thus, the model should include stochastic or random mechanisms to generate new samples. Building or training a generative model therefore aims to learn (mimic) the unknown probability distribution that explains every observation of the dataset. Once trained, the model can be used to generate new examples that follow the underlying distribution but are suitably different from the existing observations of the dataset.
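A toy numerical illustration of this distinction: averaging existing profiles always yields the same output, whereas sampling from a fitted distribution yields new, varied samples. The data here are synthetic placeholders, not measured profiles.

```python
import numpy as np

rng = np.random.default_rng(0)
profiles = rng.normal(loc=2.0, scale=0.3, size=(100, 50))   # stand-in for 100 measured profiles

deterministic = profiles.mean(axis=0)                        # always returns the same "new" profile

mu, sigma = profiles.mean(axis=0), profiles.std(axis=0)      # simple probabilistic model of the data
generated = rng.normal(mu, sigma, size=(5, 50))              # five different, statistically similar profiles
```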
A key idea behind generative modelling is the use of representation learning. This approach describes each observation of the existing dataset in a low-dimensional space (the latent space) instead of trying to model the high-dimensional sample space directly. The next step is to learn a function that maps a point from the latent space back to the original domain. Both tasks, the representation and the mapping function, can be carried out successfully using deep learning models.
The variational autoencoder (VAE) is one of the most fundamental and well-known deep learning architectures for generative modelling. An autoencoder is a network trained to compress (encode) data into a latent space and to reconstruct (decode) the original input from that low-dimensional domain. In theory, any point in the latent space could be used to generate new data.
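A minimal VAE sketch in PyTorch illustrating the encoder, the reparameterised latent sampling and the decoder used to generate new samples; the layer sizes, input dimension and names are illustrative assumptions, not the architecture used for the TS images.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)        # mean of q(z|x)
        self.fc_logvar = nn.Linear(256, latent_dim)    # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)            # reparameterisation trick
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard-normal prior.
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Generating new data: decode random points drawn from the latent prior.
model = VAE()
z = torch.randn(4, 8)
new_samples = model.decoder(z)
```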
The article describes a preliminary study of deep learning generative models to generate new samples from an existing dataset. In particular, a variational autoencoder has been tested. In order to validate the generative model, we have selected the image dataset from the Thomson Scattering (TS) diagnostic of the TJ-II stellarator. The TS diagnostic provides temperature and density profiles by using five different classes of images, which capture spectra of laser light scattered by the plasma under different conditions. The use of VAE models could in theory be extended to generate other kinds of nuclear fusion data, such as waveforms.

Speaker's Affiliation: Pontificia Universidad Católica de Valparaiso, Valparaiso
Member State or IGO: Chile

Primary authors

Gonzalo Farias (Pontificia Universidad Católica de Valparaiso), Mr Diego Hidalgo (Pontificia Universidad Católica de Valparaiso), Ms Sara Cuellar (Pontificia Universidad Católica de Valparaiso), Dr Ernesto Fabregas (UNED), Francisco Esquembre (Universidad de Murcia), Prof. Sebastián Dormido-Canto (UNED), Jesús Vega (CIEMAT), Dr Ignacio Pastor (CIEMAT), Mr Ricardo Correa (Pontificia Universidad Católica de Valparaiso), Ms Heilym Ramirez (Pontificia Universidad Católica de Valparaiso)
