
Fifth Technical Meeting on Fusion Data Processing, Validation and Analysis

Ghent University, Ghent, Belgium


Description

KEY DEADLINES

19 April 2023 Deadline for submission of abstracts through IAEA-INDICO for regular contributions

21 April 2023 Deadline for submission of Participation Form (Form A), and Grant Application Form (Form C) (if applicable) through the official channels

28 April 2023 Notification of acceptance of abstracts


The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. This meeting aims to foster, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. Accurate data processing is essential for a careful estimate of the error bars of raw measurements and processed data, and thus leads to a better understanding of the physics related to fusion research.

Objectives

The objective of the meeting is to provide a platform during which a set of topics relevant to fusion data processing, validation and analysis are discussed, with a view to the extrapolation needs of next-step fusion devices such as ITER.

Target Audience

This event is suitable for experienced scientists and young scientists working in the domain of plasma diagnostics and synthetic diagnostics data analysis.

    • 09:00 09:30
      O/1 Welcome and opening address
      Conveners: Danas Ridikas (IAEA), Min Xu (Southwestern Institute of Physics), Didier Mazon (CEA Cadarache), Geert Verdoolaege (Ghent University)
    • 09:30 10:10
      NSC/1 Next step/new fusion device concepts: data challenges and design optimization: Session 1
      Conveners: Didier Mazon (CEA Cadarache), Dr Pablo Rodriguez-Fernandez (UsPSFC)
      • 09:30
        The Direct Optimization Framework in Stellarator Design: Transport and Turbulence Optimization 40m

        When it comes to magnetic confinement nuclear fusion, high-quality magnetic fields are crucial for sustaining high-heat plasmas and managing plasma density, fast particles, and turbulence. Transport and turbulence are particularly important factors in this process. Traditional designs of stellarator machines, like those seen in the HSX and W7-X experiments, typically optimize magnetic fields and coils separately. This approach can result in limited engineering tolerances and often overlooks turbulent transport during the optimization process. Moreover, the process is highly dependent on the initial conditions, requiring multiple restarts with relaxed requirements, which can make it inefficient and compromise the optimal balance between alpha particles, neoclassical transport, and turbulence. However, recent breakthroughs in the optimization of stellarator devices are able to overcome such barriers. Direct near-axis designs, integrated plasma-coil optimization algorithms, precise quasisymmetric and quasi-isodynamic fields, and direct turbulence optimization are among the innovations that are revolutionizing the way these machines are designed. By taking into account transport and turbulence from the start, these advancements allow for more efficient fusion devices and greater control over the plasma. In this presentation, we will discuss the main outcomes of these advancements and the prospects for even more efficient and effective fusion devices.

        Speaker: Rogerio Jorge (IST Lisbon)
    • 10:10 10:30
      Coffee Break 20m
    • 10:30 12:00
      NSC/1 Next step/new fusion device concepts: data challenges and design optimization
      Conveners: Didier Mazon (CEA Cadarache), Dr Pablo Rodriguez-Fernandez (UsPSFC)
      • 10:30
        Bayesian optimization techniques to accelerate burning-plasma and reactor simulations 30m

        The design of optimized, commercially-attractive reactors requires careful understanding of the core plasma physics and the development of accurate predictive frameworks. Historically, first-principles gyrokinetic turbulence simulations were too expensive to be used in predictive workflows, as they often required hundreds or thousands of evaluations to reach multi-channel steady-state or flux-matching conditions. Consequently, physics-based predictions of burning plasmas and future reactors were made with quasilinear models of turbulence, and hence the fidelity of those predictions depended on the quality of the quasilinear assumption in the plasma regime of interest and the saturation rule used to map linear results to nonlinear transport fluxes. In this work, we exploit the benefits of Bayesian optimization and Gaussian processes for the optimization of expensive, black-box functions. The PORTALS framework [1] is capable of producing multi-channel, flux-matched profile predictions of core plasmas with a minimal number of expensive gyrokinetic simulations, usually less than 15 iterations. Thanks to the speed-up achieved in PORTALS, predictions of burning plasmas in SPARC [2] and ITER [3] have been possible with fully nonlinear gyrokinetic simulations using the CGYRO [4] code. These high-fidelity core plasma simulations help us build confidence in the performance predictions for these net-gain devices and can inform the planning of experimental campaigns to achieve performance goals. The utilization of efficient Bayesian optimization techniques during the design stage of new experiments and fusion power plants can help find optimal operational regimes and engineering parameters to realize economically-attractive commercial fusion energy.

        [1] P. Rodriguez-Fernandez et al. Nucl. Fusion 62 076036 (2022).
        [2] A.J. Creely et al. Journal of Plasma Physics 86, 5 (2020).
        [3] P. Mantica et al. Plasma Phys. Control. Fusion 62 014021 (2020).
        [4] J. Candy et al. J. Comput. Phys. 324 73–93 (2016).

        This work was funded by Commonwealth Fusion Systems (RPP020) and US DoE (DE-SC0017992, DE-SC0014264, DE-AC02–05CH11231, DE-SC0023108).

        Speaker: Dr Pablo Rodriguez-Fernandez (UsPSFC)
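
        As a toy illustration of the Bayesian-optimization idea behind PORTALS, the sketch below fits a Gaussian-process surrogate to a cheap stand-in for a gyrokinetic flux model and searches for a flux-matched gradient with few "expensive" evaluations. The flux model, kernel and acquisition rule are illustrative assumptions, not the actual PORTALS implementation.

        ```python
        import numpy as np

        def expensive_flux(grad_T):
            """Stand-in for a costly gyrokinetic run: turbulent heat flux
            as a function of the normalized temperature gradient."""
            return 0.05 * np.maximum(grad_T - 1.5, 0.0) ** 2  # critical-gradient-like

        TARGET_FLUX = 0.3  # flux imposed by the sources (arbitrary units)

        def mismatch(grad_T):
            return (expensive_flux(grad_T) - TARGET_FLUX) ** 2

        def rbf(a, b, ls=0.8):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

        X = np.array([1.0, 2.5, 4.0])   # initial expensive evaluations
        y = mismatch(X)

        for it in range(10):
            grid = np.linspace(0.5, 5.0, 200)
            K = rbf(X, X) + 1e-8 * np.eye(len(X))
            k_star = rbf(grid, X)
            mu = k_star @ np.linalg.solve(K, y)             # GP posterior mean
            v = np.linalg.solve(K, k_star.T)
            var = 1.0 - np.einsum('ij,ji->i', k_star, v)    # GP posterior variance
            lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0.0))  # explore/exploit trade-off
            x_next = grid[np.argmin(lcb)]
            X = np.append(X, x_next)
            y = np.append(y, mismatch(np.array([x_next])))

        print(f"flux-matched gradient ~ {X[np.argmin(y)]:.2f} after {len(X)} evaluations")
        ```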
      • 11:00
        High-field laser patterned HTS magnets enabling compact fusion reactors 30m

        Magnetic nuclear fusion reactors such as tokamaks and stellarators rely on superconducting coils (also called magnets) to confine and shape the plasma in which the fusion reactions occur. Stellarators are fusion devices based on three-dimensional plasma shapes which enable steady state and more stable operations compared with tokamaks. Nevertheless, the complex stellarator plasma shape demands extremely sophisticated coil designs, optimized to achieve the target magnetic configuration with high precision. In addition, field strength in this kind of reactor also plays a major role in fusion performance and economic viability. Indeed, by increasing the field strength, one can reduce the plasma volume and thus achieve more compact reactors. The highly specialized technical expertise, as well as the far-reaching requirements in human and material resources, necessary to design, manufacture and assemble superconducting magnets combining high field strengths and high precision tend to delay the development of economically viable fusion reactors. For this reason, the development of small-scale steady state fusion reactors is intimately dependent on technological innovations in superconducting magnets.

        Renaissance Fusion is a nuclear fusion start-up developing a novel technology for High Temperature Superconductor (HTS) coils applied to stellarators. The design, manufacturing and assembly of these coils is drastically facilitated by (1) simplified coil-winding-surfaces and (2) wide laser patterned HTS foils. Indeed, in order to obtain the current distribution necessary to produce the required stellarator plasma-confining magnetic fields, laser-ablated grooves will geometrically constrain the currents flowing through the superconducting coils.

        This work addresses the development of a computational design tool that optimizes the grooving patterns necessary to produce a specific target field. As a first step towards reproducing a stellarator magnetic configuration, our grooving pattern design tool was applied to axisymmetric fields within circular cylindrical open magnets. A least-squares solution combined with Tikhonov regularization was implemented to solve the inverse problem. Two reduced-scale test cases are presented: a uniform MRI field and a gyrotron field profile. For both cases, the grooved conductor reproduced the target field profile with the required precision, demonstrating the potential of this approach for simplifying the design of complex magnets.

        Speaker: Diego PEREIRA BOTELHO (Renaissance Fusion)
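
        The inverse problem described above (finding current patterns that reproduce a target field) can be condensed into a few lines. The sketch below solves a generic Tikhonov-regularized least-squares problem; the geometry matrix is a random stand-in for the actual Biot-Savart operator of the grooved conductor, which is our assumption.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        n_field, n_currents = 120, 60
        A = rng.normal(size=(n_field, n_currents))        # field-from-current operator
        x_true = np.sin(np.linspace(0, np.pi, n_currents))
        b = A @ x_true + 0.01 * rng.normal(size=n_field)  # noisy target field

        lam = 1e-2  # Tikhonov regularization strength
        # Normal equations of the augmented least-squares system:
        # min ||A x - b||^2 + lam ||x||^2
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_currents), A.T @ b)

        print("relative field error:",
              np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
        ```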
      • 11:30
        Development of High-speed Data Acquisition System of Negative Ion Source Breakdown 30m

        Due to a number of factors, long-pulse experiments conducted on the Negative ion source based Neutral Beam Injection (NNBI) facility are not always stable, and the occurrence of breakdown can lead to damage of the ion source device. Currently, the low-speed data acquisition (DAQ) systems operating in NNBI have low sampling rates, which makes it difficult to accurately characterize the changes of each key electrical signal at the moment of breakdown; this hinders researchers from analyzing the causes based on the sampled data. To solve this problem, a high-speed DAQ system based on a random trigger method, a pre-trigger method, multi-threading and the MDSplus database is proposed, especially for acquiring the instantaneous electrical signals before and after the fault on the system time axis. For signal anti-interference processing, this system adopts high-speed voltage-to-frequency (VF) and frequency-to-voltage (FV) conversion technologies to achieve isolated transmission of field signals. The software is developed in C#, with modules for data acquisition, data playback and data storage. In addition, multi-threading techniques are employed to unify the high-speed and low-speed DAQ systems on the same time axis. The sampling rate of the system can reach up to 2 MSa/s, enabling high-precision acquisition of key experimental parameters such as the current and voltage data generated during NNBI experiments. This provides data support for researchers to precisely analyze the possible causes of system failures, and also establishes data samples for fault prediction based on artificial intelligence and other research directions. At present, the current and voltage signals of the acceleration grid and extraction grid power supplies have been connected to this system. According to experimental needs, more signals will be added subsequently to provide reliable data sources for the physical analysis of NNBI.

        Speaker: Prof. Yuanzhe Zhao (ASIPP)
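
        The pre-trigger method mentioned above can be illustrated with a ring buffer that keeps recent samples so the waveform before a breakdown remains available once the trigger fires. Buffer sizes, the threshold and the random "ADC" are illustrative assumptions, not the actual C# implementation.

        ```python
        from collections import deque
        import random

        PRE_SAMPLES, POST_SAMPLES = 1000, 500
        ring = deque(maxlen=PRE_SAMPLES)   # continuously overwritten history

        def acquire_sample():
            """Stand-in for one ADC reading of e.g. the grid current."""
            return random.gauss(0.0, 1.0)

        captured, post, triggered = None, [], False
        for i in range(100_000):
            s = acquire_sample()
            if not triggered:
                ring.append(s)
                if abs(s) > 3.5:              # breakdown-like excursion
                    triggered = True
                    captured = list(ring)     # freeze the pre-trigger history
            else:
                post.append(s)
                if len(post) >= POST_SAMPLES:
                    break

        if captured is not None:
            print(f"stored {len(captured)} pre- and {len(post)} post-trigger samples")
        else:
            print("no trigger in this run")
        ```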
    • 12:00 13:30
      Lunch Break 1h 30m
    • 13:30 15:10
      IDA/1 Integrated data analysis and synthetic diagnostics: Session 2
      Conveners: Geert Verdoolaege (Ghent University), Rainer Fischer
      • 13:30
        Status and Prospects of Integrated Data Analysis for Present and Future Fusion Devices 30m

        For machine control and safety as well as physics studies, present and future fusion devices have to analyse a huge amount of measurements coming from many redundant and complementary diagnostics. Integrated Data Analysis (IDA) in the framework of Bayesian probability theory provides a concept to analyse a coherent combination of measured data from heterogeneous diagnostics including their statistical and systematic uncertainties and to combine them with modelling information.

        Based on more than 20 years of experience in applying IDA at various fusion devices, a generic IDA code package was recently developed to provide a modular and flexible basic Python code to be applied to present and next-generation fusion devices. A summary of the IDA ingredients, the status of the newly developed IDA platform, the linkage with the ITER IMAS database and recent applications will be presented.

        Speaker: Rainer Fischer (Max Planck Institute for Plasma Physics)
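
        The core of the IDA concept, multiplying the likelihoods of heterogeneous diagnostics into one posterior rather than averaging separately fitted results, fits in a few lines. The sketch below combines two toy diagnostics measuring a single density value; the numbers and forward models are illustrative, not from the actual IDA package.

        ```python
        import numpy as np

        n_grid = np.linspace(1.0, 6.0, 1000)   # candidate density (1e19 m^-3)

        # Diagnostic 1: interferometer-like, measures n directly, sigma = 0.4
        like1 = np.exp(-0.5 * ((n_grid - 3.2) / 0.4) ** 2)
        # Diagnostic 2: scattering-like, forward model with a 5% calibration scale
        like2 = np.exp(-0.5 * ((1.05 * n_grid - 3.6) / 0.25) ** 2)
        prior = np.ones_like(n_grid)           # flat prior on the grid

        posterior = like1 * like2 * prior
        posterior /= np.trapz(posterior, n_grid)

        mean = np.trapz(n_grid * posterior, n_grid)
        std = np.sqrt(np.trapz((n_grid - mean) ** 2 * posterior, n_grid))
        print(f"combined estimate: n = {mean:.2f} +/- {std:.2f} (1e19 m^-3)")
        ```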
      • 14:00
        Validation of diagnostics for kinetic profiles at ASDEX Upgrade using integrated data analysis 40m

        At ASDEX Upgrade (AUG), Integrated Data Analysis (IDA) is used to infer kinetic plasma profiles, such as the electron density, by a joint analysis of several heterogeneous diagnostics. A reliable forward model for each diagnostic, linking parameter space to data space and predicting measurements accurately, is essential for the probabilistic approach. The IDA approach enables the identification of systematic differences in profiles estimated from individual diagnostics, provided the observational volumes overlap.

        IDA at AUG determines the density profiles based on interferometry, Thomson scattering, lithium beam excitation spectroscopy, thermal helium beam, and swept O-Mode reflectometry. The results for independent diagnostics are not always in agreement, with situational differences beyond the diagnostic uncertainties. These differences can originate from numerous sources like invalid assumptions in the forward models or uncertainties in physical parameters, insufficient calibration of diagnostics or time-dependent drifts, cross-calibration of diagnostics under invalid assumptions, and others.

        The most recent addition to IDA is the reflectometry system, which adds a third independent diagnostic, complementing Thomson scattering and lithium beam, for density profiles with high spatial resolution around the separatrix. Based on this addition, a study on the uncertainties and discrepancies between the three diagnostics on experimental data is presented. Understanding the limitations of diagnostics and their forward models is essential for interpretation and evaluation of experiments, especially when profiles are used as input to large modelling codes.

        Speaker: Dirk Stieglitz (Max Planck Institute for Plasma Physics)
      • 14:40
        Bayesian integrated estimation of tungsten impurity concentration at WEST 30m

        An accurate estimation of impurity concentrations in fusion devices is crucial for understanding impurity transport and controlling impurities. However, this is challenging due to the involvement of multiple diagnostics and their various sources of uncertainty. In this work, we utilize integrated data analysis (IDA) based on Bayesian probability theory to jointly estimate impurity concentrations and kinetic profiles at WEST, using measurements from soft X-ray (SXR), interferometry and electron cyclotron emission (ECE) diagnostics. Compared to taking results from individual diagnostics, IDA has the advantage of exploiting the interdependencies of diagnostics and avoiding error accumulation. To overcome the additional challenge of reconstructing the 2D SXR emissivity profile from the single horizontal view at WEST, we use a Gaussian process with a flux-varying length scale. We also investigate techniques for accelerating the inference process towards real-time applications. We demonstrate fast reconstruction results of density profiles obtained by a neural network surrogate model trained on synthetic interferometry data corresponding to realistic profiles. Ultimately, this approach will be extended to the joint estimation of impurity concentration, density and temperature profiles.

        Speaker: Hao Wu (Ghent University)
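
        A Gaussian process with a spatially varying length scale, as used above for the SXR emissivity, can be built with the Gibbs kernel. The sketch below draws prior samples whose correlation length shrinks toward the edge; the length-scale profile is an illustrative assumption.

        ```python
        import numpy as np

        def gibbs_kernel(x1, x2, ls_fun):
            """Non-stationary squared-exponential (Gibbs) kernel."""
            l1, l2 = ls_fun(x1)[:, None], ls_fun(x2)[None, :]
            pre = np.sqrt(2 * l1 * l2 / (l1 ** 2 + l2 ** 2))
            return pre * np.exp(-(x1[:, None] - x2[None, :]) ** 2 / (l1 ** 2 + l2 ** 2))

        rho = np.linspace(0, 1, 100)          # normalized flux coordinate
        ls = lambda r: 0.3 - 0.25 * r         # 0.3 in the core, 0.05 at the edge

        K = gibbs_kernel(rho, rho, ls) + 1e-8 * np.eye(rho.size)
        L = np.linalg.cholesky(K)
        samples = L @ np.random.default_rng(1).normal(size=(rho.size, 3))
        print("drew", samples.shape[1], "profile samples from the prior")
        ```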
    • 15:10 15:30
      Coffee Break 20m
    • 15:30 17:30
      NSC/2 Next step/new fusion device concepts: data challenges and design optimization: Session 3
      Conveners: Dr Pablo Rodriguez-Fernandez (UsPSFC), Didier Mazon (CEA Cadarache)
      • 15:30
        From MVPs to full models: a stepwise development of diagnostic forward models in constant support of diagnostic design, data analysis, instrument consistency and discharge modelling on the ST40 tokamak 30m

        The characterization of magnetic-confined-fusion plasmas requires a comprehensive set of diagnostic systems measuring a wide range of parameters. New machines, such as the ST40 high-field spherical tokamak [1], typically start with a small subset and gradually increase the diagnostic suite to include more complex and comprehensive systems. To make the most of each plasma operation phase, forward models of various diagnostic systems have been developed to provide consistency-checks of the instruments being commissioned, aid diagnostic design, develop diagnostic analysis methods, as well as constrain higher-level parameters for discharge modelling.

        Supporting the programmes at pace requires releasing approximate minimum-viable-product (MVP) models early and increasing their complexity in a stepwise manner. Even if limited in scope, including simple models in analysis and modelling chains can provide an extremely valuable contribution to inform decision-making and accelerate learning. Successive releases can then expand the boundary conditions for their application and provide a benchmark for previous versions, informing among other things what approximations can be retained and what complexity is necessary for an accurate (enough) characterization of the measurements.

        This contribution discusses the philosophy behind the framework under development, giving details of the various forward models available to date, which include passive spectroscopy diagnostics [2], an X-ray crystal spectrometer (XRCS) measuring He-like or H-like argon [2, 3], charge-exchange recombination spectroscopy (CXRS) [4], interferometry, filtered visible diodes (e.g. for Bremsstrahlung measurements) [5], Thomson Scattering (TS), bolometric and SXR-filtered diode cameras. The different levels of complexity of the models are examined, analyzing their limitations when run stand-alone or integrated in complex analysis/modelling workflows. Examples of diagnostic design efforts will be presented, as well as results from recent high ion temperature (> 8.6 keV) ST40 plasma discharges reported in [1, 6, 7, 8].

        [1] S.A.M. McNamara et al 2023 Nucl. Fusion 63 054002

        [2] https://open.adas.ac.uk/

        [3] O Marchuk et al 2006 Plasma Phys. Control. Fusion 48 1633

        [4] J. Wood et al 2023 JINST 18 C03019

        [5] S. Morita 1994 IPP III / 1999

        [6] S. M. Kaye et al 2022 APS conference CP11.00016

        [7] P. R. Thomas et al 2022 APS conference YI02.00005

        [8] M. Sertoli et al 2022 APS conference CP11.00014

        Speaker: Marco Sertoli (Tokamak Energy Ltd., 173 Brook Drive, Milton Park, Oxfordshire, OX14 4SD, United Kingdom)
      • 16:00
        X-ray data validation and analysis on the EXL-50 spherical torus 30m

        EXL-50 is a solenoid-free spherical torus that uses electron cyclotron waves (ECW) as the primary heating source. The typical plasma density is in the range 1-10×10^18 m^-3, and the ECW generates superthermal electrons that can be accelerated to high energies by multiple resonance layers in the torus. These high-energy electrons have low collisionality in the low-density plasma and are lost in large quantities on the limiters, the central pole, and the vacuum vessel wall, producing intense thick-target X-rays. These X-rays and their secondary emissions, such as Compton scattering and fluorescence, can dominate the thin-target emissions from the plasma if the diagnostic setup is not carefully designed. To obtain the thin-target bremsstrahlung with the soft and hard X-ray pulse height analyzer (PHA) systems on EXL-50, several measures have been taken to improve the diagnostic systems, including thick lead shielding and dedicated collimation of the light paths. The optical paths of both systems are scanned to estimate the radial emission profile. The X-ray data are compared with theoretical calculations and other diagnostic data, and a preliminary interpretation is subsequently given.

        Speaker: Xianli Huang (Hebei Key Laboratory of Compact Fusion, Langfang 065001, China; ENN Science and Technology Development Co., Ltd., Langfang 065001, China)
      • 16:30
        Constrained Feed-forward Waveforms for Tokamak Plasma Pulse Design 30m

        Design is an iterative, creative process. In the context of plasma scenarios in magnetic confinement devices, a pulse design represents the specification or plan for a future pulse. Generation of these plans requires an understanding of the scenario’s goals, any constraints that restrict the design space, and a list of assumptions that must be made in order to pose a well-formed problem.
        Tools may be used to facilitate the design process by handling rudimentary tasks suited to computation, such as the inference of coil currents given target separatrix shapes or the calculation of machine limits. To be truly useful, these ‘human-in-the-loop’ tools must: run on human time scales such that the designer may ‘effortlessly adjust, improve, and experiment’ [1]; provide intuitive output such that design decisions may be quickly made; and be adequately flexible in their definition such that a design’s goals, constraints, and assumptions, are not overly restricted by the tool in question.
        We present the development of a feed-forward pulse design tool to facilitate the initial design of candidate voltage and current waveforms for a given set of target separatrix shapes. The tool is being designed as an actor within the workflow for the ITER Pulse Design Simulator, currently in the early stages of development, but could be used for this waveform design on any tokamak. At inception, designs represent a lump of clay in that they lack form and precision but are very flexible in their scope. It is here that important design decisions are made. As a design matures, higher fidelity tools with longer run-times may be used to refine and verify that goals are achieved, constraints are met, and initial assumptions remain valid. Whilst critical to the overall workflow, it is important to realise that at this point the focus transitions from pulse design to pulse analysis, and the ability to effortlessly adjust, improve and experiment is lost.
        This feed-forward pulse design tool presented here is part of the NOVA free-boundary equilibrium code. This code includes the effects of passive conducting structures as well as an automatic treatment of non-linear constraints such as coil force and field limits, and plasma-wall gaps. These features free the designer to concentrate on core design aspects, separating themselves from algorithmic details encoded within NOVA’s computationally light feed-forward pulse design tool.
        A verification of this tool is made via comparisons to mature scenario simulations analysed by the DINA code. Here key design parameters such as a low order description of the plasma separatrix, plasma current, and Ejima coefficient are extracted from DINA outputs and de-featured [2, 3]. These parameters are then given to the NOVA waveform design tool from which voltage and current waveforms are extracted and compared to the source DINA simulations. The computation of a de-featured plasma scenario with a length of ~650 seconds from breakdown to termination requires a wall clock simulation time of ~5 seconds run on a laptop computer.

        Speaker: Simon McIntosh (ITER Organization)
      • 17:00
        Static Performance Prediction of Long-pulse Negative Ion based Neutral Beam Injection Experiment 30m

        Neutral beam injection experiments are now targeting long pulses of thousands of seconds. It is therefore of great significance to establish a simple physical calculation model for evaluating the parameters of the long-pulse Negative Ion based Neutral Beam Injection (NNBI) facility before an experiment, in order to adjust and set the experimental parameters of the long-pulse negative ion source. Based on the physical characteristics of each key parameter of the ion source, this paper analyzes the experimental data and predicts the static performance of the current NNBI facility from the analyzed data. The static performance prediction comprises data acquisition, data preprocessing, the prediction model and delivery of the results. Data acquisition uses historical data from before 2022, read locally. The data preprocessing step first selects the experimental data according to the corresponding rules and then standardizes the data by min-max normalization. The data set is divided into training, validation and test sets. The static performance prediction model is established on a back-propagation (BP) neural network, and the state of the network is determined from the error convergence curve. Finally, the results are mapped back to physical units by inverting the normalization. The static performance prediction model can more effectively avoid ineffective shots and improve the performance of long-pulse NNBI experiments, and it provides a good starting point for the NNBI dynamic performance preview in the next step.

        Speaker: Dr Yang Li (ASIPP)
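
        A minimal sketch of the described pipeline, min-max normalization, a back-propagation network and de-normalization of the output, is shown below with synthetic data standing in for the pre-2022 NNBI archive; feature names and the network size are illustrative assumptions.

        ```python
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform([50, 0.2], [100, 0.5], size=(500, 2))  # e.g. [U_acc (kV), pressure (Pa)]
        y = 0.3 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 0.5, 500)  # fake beam current

        # Min-max normalization (keep the extrema to invert later)
        x_min, x_max = X.min(0), X.max(0)
        y_min, y_max = y.min(), y.max()
        Xn = (X - x_min) / (x_max - x_min)
        yn = (y - y_min) / (y_max - y_min)

        train, test = slice(0, 400), slice(400, 500)
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        model.fit(Xn[train], yn[train])

        # De-normalize the prediction back to physical units
        y_pred = model.predict(Xn[test]) * (y_max - y_min) + y_min
        print("RMS error:", np.sqrt(np.mean((y_pred - y[test]) ** 2)))
        ```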
    • 09:00 10:10
      DB/1 Information retrieval, dimensionality reduction and visualisation in fusion databases: Session 4
      Conveners: Min Xu (Southwestern Institute of Physics), Joshua Stillerman (MIT Plasma Science and Fusion Center)
      • 09:00
        MINT, ITER Interactive Data Visualization Tool 40m

        MINT (Make Informative and Nice Trends) is an ITER graphical data visualization and exploration tool designed for plant engineers, operators, and physicists. Its requirements were gathered through interviews with various stakeholders, and its architecture was planned for a long-term project such as ITER. As such, a modular design and clear definition of generic interfaces (abstraction layer) were crucial, providing a robust foundation for future adaptations to new plotting, processing, and GUI libraries. The MINT application relies on an independent plotting library, which acts as a wrapper for the choice of underlying graphical libraries. Data selection and retrieval were also developed as a separate module, with a well-defined data object interface for easy integration of additional data sources. The processing layer is also a separate module, supporting algebraic and user-defined functions.

        Speaker: Dr RODRIGO CASTRO (CIEMAT)
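
        The abstraction-layer idea described above can be sketched as a small generic interface that the application codes against, with concrete wrappers for whichever plotting library is chosen. The interface and class names below are our invention, not MINT's actual API.

        ```python
        from abc import ABC, abstractmethod

        class TrendPlotter(ABC):
            """Generic plotting interface the GUI layer depends on."""
            @abstractmethod
            def plot(self, t, y, label: str) -> None: ...
            @abstractmethod
            def show(self) -> None: ...

        class MatplotlibPlotter(TrendPlotter):
            """One concrete backend; a web or Qt backend would implement
            the same two methods, leaving the application untouched."""
            def __init__(self):
                import matplotlib.pyplot as plt
                self._plt = plt
            def plot(self, t, y, label):
                self._plt.plot(t, y, label=label)
            def show(self):
                self._plt.legend()
                self._plt.show()

        def display_signal(plotter: TrendPlotter, t, y):
            """Application code sees only the abstract interface."""
            plotter.plot(t, y, label="coil current")
            plotter.show()
        ```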
      • 09:40
        Metadata framework for distributed real-time control systems 30m

        Modern real-time plasma control systems will be modular and distributed. This provides several advantages: component isolation, component simplicity and robustness, scalability, and the possibility of utilizing heterogeneous execution environments. In addition, these systems are amenable to machine-learned pipelines. However, such systems incur the liabilities of system complexity and communication-related delays and jitter.

        We have developed a framework for describing the components of distributed modular control systems. It specifies a schema for describing the interfaces between components, the components or function blocks, and the deployment of the resulting real-time actors on computers running a real-time framework. The solution is framework agnostic. It can generate artifacts to drive particular chosen real-time frameworks.

        Using the framework, we describe the components of the control system for a demonstration device, generate artifacts for the SCDDS real-time system developed for TCV, implement controllers using these artifacts, and finally deploy them to operate the device. This metadata framework is also applicable to modular simulation environments.

        Speaker: Joshua Stillerman (MIT Plasma Science and Fusion Center)
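
        As a hedged sketch of what such a framework-agnostic description might look like, the dataclasses below capture interfaces, function blocks and deployments, from which per-framework artifacts could be generated. The field names are illustrative, not the authors' actual schema.

        ```python
        from dataclasses import dataclass, field

        @dataclass
        class Port:
            name: str
            dtype: str            # e.g. "float64"
            rate_hz: float

        @dataclass
        class FunctionBlock:
            name: str
            inputs: list[Port] = field(default_factory=list)
            outputs: list[Port] = field(default_factory=list)

        @dataclass
        class Deployment:
            block: FunctionBlock
            host: str             # real-time computer the actor runs on
            framework: str        # e.g. "SCDDS"

        density_ctrl = FunctionBlock(
            name="density_controller",
            inputs=[Port("n_e_measured", "float64", 1000.0)],
            outputs=[Port("gas_valve_cmd", "float64", 1000.0)],
        )
        plan = Deployment(density_ctrl, host="rt-node-01", framework="SCDDS")
        print(f"deploy {plan.block.name} on {plan.host} via {plan.framework}")
        ```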
    • 10:10 10:30
      Coffee Break 20m
    • 10:30 12:00
      DB/1 Information retrieval, dimensionality reduction and visualisation in fusion databases
      Conveners: Min Xu (Southwestern Institute of Physics), Joshua Stillerman (MIT Plasma Science and Fusion Center)
      • 10:30
        End-to-end intra-pulse data analysis at ITER: first steps from magnetics to live display 30m

        Interpreting diagnostic data as early as possible after a plasma pulse is an important capability of modern tokamaks [1, 2, 3]. This is particularly critical for ITER, since quick feedback on the plasma performance (shape, confinement, power balance, impurities, ELMs, …) during the pulse increases the efficiency of operation and furthers the implementation of the scientific program that steers its exploitation.

        In this work, we present a first implementation of a demonstrator for an intra-pulse processing workflow for ITER, from magnetic measurement data to the live display of equilibrium reconstruction.

        Initially, a set of magnetic measurements is artificially created. This requires the use of synthetic poloidal field diagnostic signals from different simulations based on ITER scenarios, together with the corresponding plasma current and the machine description of the different components that affect the pulse (such as passive structures, the wall, and the toroidal field coils). We use a Bayesian inference process that adds uncertainties and interpolates the data, ensuring a more realistic frequency content of the signals. An important aspect of this synthetic diagnostic is the introduction of frequency-dependent noise (lower power at high frequencies), which closely mimics typical hardware noise.

        These data are written to self-describing netCDF file(s) and used as input to the real-time processes as implemented by the magnetics plant systems. To save network bandwidth, the data are encoded, then streamed for archiving and stored as HDF5 files. This part is executed in the Plant Operation Zone network (POZ), with the aim of simulating a complete signal acquisition chain of the magnetics diagnostic. From there, the signals are handled as real plant signals, being transferred to the external plant network (XPOZ), downsampled and used as the initial data for a short intra-pulse analysis workflow. Here, an equilibrium reconstruction is calculated, which is then displayed in the temporary control room Live Display.

        We give an analysis of the performance, live downsampling, and robustness of the system, with emphasis on extrapolation to real live data. We also perform a validation of the process by comparing the calculated plasma current and equilibrium reconstruction with the synthetic signals used as the input for this process.
        With this demonstrator correctly validated, we expect to include more complex analysis workflows in order to further develop a fully validated intra-pulse processing infrastructure.

        [1] D. P. Schissel, et al., Fusion Science and Technology, 58:3, 720-726.
        [2] D. Dodt, et al., Fusion Eng. Des. 88 (2013) 79–84.
        [3] M. Emoto, et al., Fusion Eng. Des., 89 (2014), p. 758.

        Speaker: Paulo Abreu (ITER Organization)
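
        The frequency-dependent noise mentioned above (lower power at high frequencies) can be generated by shaping white noise in Fourier space. The spectral slope and amplitudes below are illustrative assumptions, not the actual ITER noise model.

        ```python
        import numpy as np

        fs = 10_000.0                                 # sampling rate (Hz)
        t = np.arange(0, 1.0, 1 / fs)
        clean = 1e-3 * np.sin(2 * np.pi * 50 * t)     # synthetic pick-up coil signal

        white = np.random.default_rng(2).normal(size=t.size)
        f = np.fft.rfftfreq(t.size, 1 / fs)
        shape = 1.0 / np.maximum(f, 1.0) ** 0.5       # amplitude falls off at high f
        colored = np.fft.irfft(np.fft.rfft(white) * shape, n=t.size)
        colored *= 1e-4 / colored.std()               # set the noise amplitude

        measured = clean + colored
        print("synthetic signal-to-noise ratio:", clean.std() / colored.std())
        ```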
      • 11:00
        The role of structured multi-purpose databases in fusion research 30m

        Most databases in fusion research are devoted to a single topic, such as energy confinement, H-modes, profiles or disruptions. In order to allow for a wide range of analysis, modelling and validation tasks, a broad-based multi-purpose database, JETPEAK, has been developed for JET. This database currently includes 23000 manually selected stationary-state (∂/∂t≈0) samples, i.e. averaged time windows of typically 0.1-1 s duration. The database includes nearly 1000 scalar, 1D (profile) and 2D (R- and Z-dependent) variables grouped into topical structures, including equilibrium variables, electron and ion kinetic data from various sources, heating system data, data from visible spectroscopy and neutron diagnostics, as well as from various analysis codes used at JET, in particular the Monte Carlo heating code ASCOT. The list of variables is open, and new variables have been added over time in order to satisfy new analysis requirements. JETPEAK is used for purposes as varied as comparisons of theoretical predictions with experimental data, modelling and prediction of DD, TT and DT neutron rates, energy, momentum and particle confinement scaling, long-term monitoring, data consistency testing, validation, code benchmarking and code development. Two novel methods for neutron tomography which have been developed using JETPEAK will be presented, together with examples of the other applications mentioned. This broad-based approach has since been exported to the TCV and ST40 tokamaks, leading to the creation of databases similar to JETPEAK. The TCVDTB database has 65000 samples reaching back to initial TCV operation in the nineties. The older samples were obtained using an automatic program for identifying stationary discharge phases based on a set of 16 stationarity criteria. Software for combining 'JETPEAK-like' databases from different devices into a multi-machine database has been developed and may be used in future developments of international databases.

        Speaker: Dr Henri Weisen (Karazin National University)
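
        An automatic stationary-phase selection of the kind used for the older TCVDTB samples can be sketched as below: windows are flagged stationary when the fitted drift of a signal stays below a relative threshold. Real implementations apply many criteria jointly; the single criterion, window length and threshold here are illustrative.

        ```python
        import numpy as np

        t = np.linspace(0, 2.0, 2000)                  # time (s)
        ip = np.clip(t / 0.3, 0, 1) * 1e6              # ramp, then flat-top (A)
        ip += 5e3 * np.random.default_rng(3).normal(size=t.size)

        win = 200                                      # 0.2 s windows
        for start in range(0, t.size - win, win):
            seg_t, seg = t[start:start + win], ip[start:start + win]
            drift = np.polyfit(seg_t, seg, 1)[0]       # dI/dt (A/s)
            stationary = abs(drift) * 0.2 < 0.02 * abs(seg.mean())  # <2% per window
            print(f"t = {seg_t[0]:.1f}-{seg_t[-1]:.1f} s:",
                  "stationary" if stationary else "transient")
        ```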
      • 11:30
        Design Concept of Intelligent Integrated Control System for Neutral Beam Injection 30m

        Due to the specific nature of the Neutral Beam Injection (NBI) system, the control of its actual physical system is achieved through the Integrated Control System (ICS). The NBI ICS is used to coordinate the operation of the various NBI subsystems while ensuring steady-state operation of the entire system and the safety of experimenters. The early NBI ICS in our lab was designed around a centralized control structure. However, due to the poor robustness of centralized control structures, once the central computer fails, the whole control system is paralyzed. Therefore, the current NBI ICS adopts a distributed design to balance the system load, which is also the mainstream ICS design and architecture model around the world. From a practical point of view, however, the distributed system architecture is not perfect either. Most existing distributed frameworks are highly dependent on the underlying logic, and the implementation of additional functions usually requires an overall structural evaluation, which can lead to serious system problems if not properly decoupled. Therefore, exploring a new way to evolve the current ICS toward intelligence is of great significance in today's networked and intelligent era. Currently, the Internet of Things (IoT), as an important part of the new generation of information technology, can interconnect real devices with the Internet and realize control through data exchange. Intelligence is widely seen as the development direction of future fusion, and given the current development status and constraints faced by the ICS, combining Artificial Intelligence (AI), IoT and the ICS may be a good breakthrough point. On the one hand, IoT has wide compatibility and powerful scenario-based capabilities: it not only has the advantages and features of a distributed design, but can also bring the NBI subsystems into the same scenario at the same level, laying the foundation for further construction of a digital NBI. On the other hand, the introduction of AI gives IoT new characteristic features such as intelligent sensing, ubiquitous connectivity, precise control, digital modeling, real-time analysis and iterative optimization, which is enough to bring the current NBI ICS into a new era of intelligent control. Finally, it is worth mentioning that, due to its inherent design structure and functional characteristics, the ICS tends to be broadly generic; it is not used exclusively for NBI operations in nuclear fusion and can provide insights for other application areas.

        Speaker: Prof. Chundong Hu (ASIPP)
    • 12:00 13:30
      Lunch Break 1h 30m
    • 13:30 14:45
      TIV/1 Analysis of time series, images and video: detection, identification and prediction: Session 5
      Conveners: Prof. Jesús Vega (CIEMAT), Andrea Murari (Consorzio RFX)
      • 13:30
        Spectroscopic Analysis for impurities and Plasma parameters in Metallic Spherical Tokamak (MT-I) 30m

        The Metallic Spherical Tokamak (MT-I) is a modified form of GLAST-II (GLAss Spherical Tokamak), operational at PTPRI in Pakistan. It has a major radius of 15 cm and a minor radius of 9 cm, with an aspect ratio of 1.67, and is equipped with optical, electrical and magnetic diagnostics. Impurities present in the vacuum vessel disturb the performance of the device and, through plasma cooling by radiative losses, hinder the achievement of higher values of plasma parameters such as electron number density, electron temperature, plasma stability, and global energy confinement. The temporal concentration of the nitrogen impurity present in the Ar and He discharges during wall conditioning of the MT-I tokamak is determined through the emission spectrum. The optical actinometric technique exploits the change in emission intensity of selected Ar/He lines at constant partial pressure to normalize the electron energy distribution function; it remains valid under changing plasma conditions provided that both transitions have close excitation thresholds and a similar dependence of their excitation cross-sections. The selected nitrogen line intensity can be related to the ground-state concentration of the nitrogen molecules and ions involved in the optical emission. The electron temperature for both discharges has been determined separately using the Boltzmann plot method. For the measurement of the electron number density, empirical formulas have been derived from the isolated Ar–I (750.38 nm) and He–I (587.56 nm & 667.81 nm) spectral lines. Stark broadening of the well-isolated argon Ar–I (750.38 nm) and helium He–I (587.56 nm & 667.81 nm) lines has been used after de-convolution of the other broadening contributions. A newly designed optical diagnostic, consisting of three photodiode channels coupled with extremely narrow-band filters, has been developed to obtain the temporal profiles of the Hα and Hβ spectral lines during hydrogen discharges in the MT-I tokamak. The line-ratio method is used to calculate the temporal profile of the electron temperature from the emission intensities of the Hα and Hβ spectral lines with their excitation threshold energies.

        Speaker: Dr FARAH DEEBA (Pakistan Tokamak Plasma Research Institute)
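
        The Boltzmann plot method used above rests on a one-line relation: for lines of one species, ln(I*lambda/(gA)) versus the upper-level energy falls on a straight line of slope -1/(kB*Te). The sketch below synthesizes intensities from an assumed temperature and recovers it; the line constants are illustrative numbers, not actual Ar-I data.

        ```python
        import numpy as np

        k_B_eV = 8.617e-5                  # Boltzmann constant (eV/K)
        T_true = 4000.0                    # assumed excitation temperature (K)

        # Illustrative line data: wavelength (nm), g*A (arb.), E_upper (eV)
        lam = np.array([750.4, 751.5, 763.5, 772.4])
        gA = np.array([0.47, 0.12, 0.62, 0.19])
        E = np.array([13.48, 13.27, 13.17, 13.15])

        # Synthesize intensities from the Boltzmann relation I ~ gA/lam * exp(-E/kT)
        I = gA / lam * np.exp(-E / (k_B_eV * T_true))
        I *= 1e8 / I.max()                 # arbitrary detector scale

        # Boltzmann plot: ln(I*lam/gA) vs E has slope -1/(kB*T)
        slope, _ = np.polyfit(E, np.log(I * lam / gA), 1)
        T_e = -1.0 / (slope * k_B_eV)
        print(f"recovered temperature: {T_e:.0f} K (true {T_true:.0f} K)")
        ```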
      • 14:00
        Analysis of intermittent data time series from the far scrape-off layer in Alcator C-Mod at high Greenwald fractions 30m

        To be filled out in the upcoming days.

        Speaker: Sajidah Ahmed (UiT The Arctic University of Norway)
      • 14:30
        A comparative study of event detection methods in fusion devices with an application to edge-localized modes 15m

        Event detection will play an increasingly important role for operating fusion devices in a safe and efficient way. In the plasma, various events require detection and identification, such as the onset of magnetohydrodynamic instabilities, appearance of disruption precursors, impurity events, confinement mode transitions, etc. However, in a future fusion power plant, a wide variety of events and faults occurring in general plant systems outside the actual tokamak (e.g. electrical and cooling systems) will also require good, automated detection strategies. To that end, increasingly complex data are being exploited, like time series, images and video, that are obtained from sensors monitoring not only the plasma, but also other plant systems. Once detected and identified, strategies toward prediction, prevention and mitigation need to be deployed in a subsequent stage. In analyzing such data for event detection, the stochasticity of events and their signatures often poses considerable challenges to automated detection techniques. Hence, increasingly sophisticated methods based on probabilistic reasoning and machine learning are needed to detect events, or to estimate the risk of a future occurrence. In this contribution, we present a comparative study of the performance of existing and new event detection methods, applied to the detection of individual edge-localized modes (ELMs) in tokamaks. On the one hand, reliable detection of ELM events is a prerequisite for investigating their properties from a statistical point of view. This is for instance important for risk assessment in the presence of rare, but large ELMs, which can pose a threat to wall components. On the other hand, ELM properties exhibit considerable stochasticity, e.g. in their relative timing and accompanying plasma energy drop. In that sense, they pose a sufficiently challenging case to the detection methods that we have considered in our study. In particular, recent experiments at JET under optimized fueling conditions have led to an operational regime showing great variability of ELM behavior, with small and larger ELMs occurring in irregular time intervals. We compare several state-of-the-art event detection methods with existing techniques. These include robust thresholding methods, time series classifiers using support vector machines, one-dimensional neural networks and object detection methods exploiting feature invariance. Various metrics quantifying the performance of the methods are compared using a dataset of manually labeled ELMs from JET. We propose a number of recommendations towards event detection for future application.

        Speaker: Jerome Alhage (Ghent University)
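
        Of the detection families compared above, the robust-thresholding baseline is the simplest to write down: the median and the median absolute deviation (MAD) set a spike threshold that the ELM bursts themselves cannot distort. The synthetic trace and the factor 6 below are illustrative assumptions.

        ```python
        import numpy as np

        rng = np.random.default_rng(4)
        sig = rng.normal(1.0, 0.05, 20_000)            # baseline D-alpha-like emission
        for k in rng.choice(20_000 - 40, 30, replace=False):
            sig[k:k + 40] += 2.0 * np.exp(-np.arange(40) / 10.0)  # ELM-like bursts

        med = np.median(sig)
        mad = np.median(np.abs(sig - med))
        threshold = med + 6.0 * 1.4826 * mad           # 1.4826: MAD -> sigma (Gaussian)
        above = sig > threshold
        # Count rising edges as individual ELM events
        events = np.flatnonzero(above[1:] & ~above[:-1]) + 1
        print(f"detected {len(events)} ELM candidates (30 seeded)")
        ```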
    • 14:45 15:30
      Coffee Break 45m
    • 15:30 17:15
      IDA/2 Integrated data analysis and synthetic diagnostics: Session 6
      Conveners: Rainer Fischer, Geert Verdoolaege (Ghent University)
      • 15:30
        Simulation-based inference with optical diagnostics 30m

        Inferring physics parameters from experimental data is a key analysis need for physicists. Often it is desired for this inference process to be grounded in the detailed models contained in simulations. Simulation-based inference (SBI) refers to techniques that utilize simulations as the forward model to create approximate posteriors which are faithful to the simulation. Recently, neural networks have been leveraged in SBI to flexibly represent the underlying Bayesian inference process. The approximate posteriors generated can be sampled from quickly, which is attractive for fast between-shot analysis. Here we show ongoing work in applying SBI to experimental fusion diagnostics, focusing on optical diagnostics. We create a synthetic diagnostic for the Lyman-alpha diagnostic (LLAMA) at the DIII-D tokamak, using the CHERAB code for spectroscopic diagnostics and the KN1D neutral transport code for relatively fast neutral density transport. By generating many thousands of samples of synthetic plasma input profiles and obtaining the output of the forward model, we can then leverage SBI to create an approximate posterior. Here we use neural networks representing normalizing flows for accurate replication of the data distribution. This yields a neural network which can be sampled from quickly to create a posterior of the neutral density given the measured LLAMA line-integrated radiance.

        Speaker: Randy Churchill (Princeton Plasma Physics Laboratory)
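
        A hedged sketch of this workflow with the open-source `sbi` package (which trains normalizing flows under the hood) is given below. The one-parameter toy simulator standing in for the CHERAB+KN1D forward model, and the exact `sbi` API, are assumptions on our part.

        ```python
        import torch
        from sbi.inference import SNPE
        from sbi.utils import BoxUniform

        def simulator(theta):
            """Toy forward model: line-integrated radiance from a neutral
            density amplitude theta, plus measurement noise."""
            return 2.0 * theta + 0.1 * torch.randn_like(theta)

        prior = BoxUniform(low=torch.tensor([0.0]), high=torch.tensor([5.0]))
        theta = prior.sample((5000,))          # thousands of synthetic profiles
        x = simulator(theta)

        inference = SNPE(prior=prior)
        density_estimator = inference.append_simulations(theta, x).train()
        posterior = inference.build_posterior(density_estimator)

        x_obs = torch.tensor([4.2])            # a "measured" radiance
        samples = posterior.sample((1000,), x=x_obs)
        print("posterior mean amplitude:", samples.mean().item())
        ```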
      • 16:00
        Integrated Data Analysis augmented by kinetic modeling 30m

        The Integrated Data Analysis (IDA) approach employs a combination of various diagnostics within a Bayesian probability framework to determine electron density and temperature profiles of ASDEX Upgrade plasmas. These profiles frequently serve as a benchmark for validating transport simulations. However, as some areas of the plasma are not covered by the diagnostics, or measurements may be unavailable, IDA relies on non-physics-based priors to mitigate missing or uncertain data. Consequently, the resulting profiles may not align with theoretical expectations and may have steep gradients leading to unphysically high turbulent transport. To improve the estimated profiles and avoid contradicting transport expectations, additional physical prior information from transport modelling augments the measured data. Simulated profiles and their gradients, together with their uncertainties, constrain the physically reasonable parameter space. Special emphasis is given to the estimation of the uncertainty of the simulation, where methods are explored such as input error propagation and comparison to the high-fidelity turbulence solver GENE.

        Speaker: Michael Bergmann (Max Planck Institute for Plasma Physics)
      • 16:30
        On the use of Synthetic Diagnostics as Persistent Actors in Integrated Modelling workflows 30m

        Modelling the diagnostic signals for specific conditions of plasma operation is essential for an optimal and comprehensive analysis of the discharge behaviour and the preparation of the tools to design, optimize and validate scenarios on existing or future fusion devices.
        This contribution gives a brief overview of the diagnostic models available in the Integrated Modelling and Analysis Suite IMAS [1] to model the ITER instrumentation systems. Some use cases will be described in which synthetic diagnostics are applied to perform physics data analysis and develop plasma modelling tools.
        A brief description of the use of diagnostic models in workflows performing Bayesian inference analysis will be presented, where the concept of a persistent actor framework will be introduced. Emphasis will be placed on using synthetic diagnostics to help the development of the ITER Plasma Control System (PCS) [2] and its Simulation Platform (PCSSP) [3] through the design of its support functions and the application of its control algorithms inside co-simulations combining IMAS models with Matlab/Simulink controllers. This type of co-simulation is made possible via the use of the Muscle3 coupling library within the so-called Persistent Actor Framework [4]. This framework facilitates the communication between various actors (models) in an integrated simulation across languages and domains. A closed-loop prototype will be presented where the plasma density measurement is simulated by an interferometer model that provides the signals through the real-time data network (represented by the real_time_data Interface Data Structure, or IDS) to a Matlab/Simulink controller. This controller in turn sends a command, still through the real_time_data IDS, to a gas puff model that adjusts the gas injection. As such, an external source of particles is injected into a transport model, which evolves the plasma density accordingly.
        Methods to extend this prototype to more sophisticated plasma simulators will be discussed, since the persistent actor framework can be used for co-simulations between PCSSP controllers and high fidelity or pulse design simulators, e.g. in the context of free-boundary control simulations for the validation and verification of models, workflows and controllers.

        [1] F. Imbeaux, Nucl. Fusion 55 (2015) 123006
        [2] J.A. Snipes, et al., Nucl. Fusion 61 (2021) 106036
        [3] M. Walker, et al., Fus. Eng. Des. 96 (2014) 716
        [4] L.E. Veen, A.G. Hoekstra, “Easing Multiscale Model Design and Coupling with MUSCLE 3”, Comp. Science – ICCS 2020, 12142, pp 425-438, Springer, Cham.

        The views and opinions expressed herein do not necessarily reflect those of the ITER Organisation.

        Speaker: Mireille SCHNEIDER (ITER Organization)
      • 17:00
        A Bayesian approach for estimating the kinematic viscosity model in reversed-field pinch fusion plasmas 15m

        A fundamental feature of reversed-field pinch fusion plasmas is the occurrence of helical self-organized states. In the past few decades, MHD theory and numerical simulations have played a key role in describing these states. An important parameter is the dimensionless Hartmann number [1], which is determined by the resistivity and the viscosity. It can be interpreted as the electromagnetic equivalent of the Reynolds number, and it turns out to be the ruling parameter of the 3D nonlinear visco-resistive magnetohydrodynamic activity. However, there is no consensus regarding the theoretical model for the kinematic viscosity coefficient.
        There are five candidate models according to the various momentum transport theories developed for hot magnetized plasmas: three classical viscosities derived from the closure procedure leading to the Braginskii equations, the ion temperature gradient viscosity, describing a mode that damps the velocity fluctuations, and the Finn anomalous viscosity according to the Rechester-Rosenbluth model.
        We calculated the viscosities and the Hartmann number using measurements from RFX-mod. A power-law dependence was then sought between the Hartmann number and the amplitude of the m = 0, 1 secondary modes. Our approach, using Bayesian statistics, outperforms the previous analysis based on simple least squares fitting.
        First, by computing the Bayes factor [2], we inferred that a constant relative error is a better model for the uncertainty in the regression analysis. Second, errors on the plasma parameters and their role in error propagation were taken into consideration. Third, Bayes factors between the different viscosity models were used to infer the optimal viscosity model, in a more robust way compared to the earlier approach based on correlation coefficients and simulations.
        The optimal model, identified through the Bayesian procedure, agrees with physical motivation [3]. More generally, our work has demonstrated the potential of the Bayesian approach in other model selection problems in fusion, using a rigorous and robust statistical methodology.

        Acknowledgements:
        We thank Consorzio-RFX for providing the RFX-mod data.

        References:
        [1] Montgomery D 1992 Plasma Phys. Control. Fusion 34 1157
        [2] Richard D M et al 2016 J. Math. Psychol. 72 6
        [3] Vivenzi N et al 2022 J. Phys.: Conf. Series 2397 012010

        Speaker: Mr Jeffrey De Rycke (Ghent University)
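
        The Bayes-factor machinery used above can be illustrated with a one-parameter evidence integral: two candidate power laws are compared by integrating their likelihoods, under the constant-relative-error model, over a flat prior on the amplitude. The data and exponents are synthetic stand-ins for the viscosity models.

        ```python
        import numpy as np

        rng = np.random.default_rng(5)
        H = np.logspace(3, 5, 40)                       # Hartmann-like numbers
        amp = 0.5 * H ** -0.30 * (1 + 0.1 * rng.normal(size=H.size))  # mode amplitude

        def log_like(c, b, rel_err=0.1):
            model = c * H ** b
            # Constant *relative* error model, as selected in the abstract
            return -0.5 * np.sum(((amp - model) / (rel_err * model)) ** 2
                                 + np.log(2 * np.pi * (rel_err * model) ** 2))

        def log_evidence(b):
            cs = np.linspace(0.1, 1.0, 400)             # flat prior on amplitude c
            ll = np.array([log_like(c, b) for c in cs])
            m = ll.max()
            return m + np.log(np.trapz(np.exp(ll - m), cs)) - np.log(cs[-1] - cs[0])

        # Two candidate exponents play the role of two viscosity models
        lz1, lz2 = log_evidence(-0.30), log_evidence(-0.45)
        print("log Bayes factor (model 1 vs model 2):", lz1 - lz2)
        ```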
    • 09:00 10:10
      TIV/2 Analysis of time series, images and video: detection, identification and prediction: Session 7
      Conveners: Prof. Jesús Vega (CIEMAT), Andrea Murari (Consorzio RFX)
      • 09:00
        A Hybrid Physics/Data-Driven Approach to Disruption Prediction for Avoidance 40m

        Even though the understanding of the tokamak configuration has progressed significantly in recent years, these devices are all plagued by collapses of the plasma called disruptions. Moreover, devices with metallic plasma-facing components, similar to those foreseen in the next generation of reactors, are also vulnerable in this respect, particularly when operated at q95 around 3. In these machines, almost all disruptions are preceded by anomalies in the radiation patterns, which either cause or reveal the approaching collapse of the configuration. Given the influence of these radiation anomalies on the kinetic profiles and the magnetic instabilities, a series of innovative and specific elaborations of the various measurements, compatible with real-time deployment, is required. The data-driven indicators derived from these measurements can be interpreted in terms of physics-based models, which allow determining the sequence of macroscopic events leading to disruptions. The results of a systematic analysis of JET campaigns at high power in deuterium, full tritium, and D-T, for a total of almost 2000 discharges, are very encouraging and prove the potential of the approach. The computational and warning times are such that the control systems of future devices are expected to have more than sufficient notice to deploy effective prevention and avoidance measures.

        Speaker: Riccardo Rossi (Department of Industrial Engineering, University of Rome Tor Vergata)
      • 09:40
        Real-time disruption prediction in multi-dimensional spaces with privileged information not available at execution time 30m

        Focusing on disruption mitigation, the locked mode (LM) signal is typically used as a single signal to recognise incoming disruptions. However, if the LM signal is not available in real time (as will happen in the first JT-60SA discharges) or is not reliable enough, simple predictors have to use other signals. This work shows that a line-integrated density (LID) signal can be used to predict disruptions, although its amplitude is not directly related to forthcoming disruptions (as it is in the case of the LM). Moreover, the work shows that the prediction capability of the LID signal can be increased by using the LM as privileged information (V. Vapnik et al., Neural Networks 22 (2009) 544-557).
        JET data collected in C-wall discharges have been used for test purposes. In particular, 1439 discharges in the range 65988 – 73126 have been analysed (1354 non-disruptive and 85 disruptive shots). It is important to point out that only discharges with plasma current above 2 MA and disruptions whose plasma current at disruption time is greater than 1.5 MA have been taken into account.
        In this work, the prediction of disruptions starts with anomaly detection to recognise the first disruptive behaviour in the dataset of discharges. The two-dimensional space of consecutive amplitudes of the LID signal is used. The first disruption is identified after applying an anomaly criterion to 42 non-disruptive shots, without obtaining any false alarm in these non-disruptive discharges. Then, a Support Vector Machine (SVM) model with an RBF kernel is created with LID data of the first disruptive discharge. By applying this SVM model to the rest of the discharges, the success rate is 98.82% and the false alarm rate is 42.32%. Unfortunately, as mentioned, the information concerning the LM is not considered due to its unavailability in real time. However, at training time it is possible to use LM data as privileged information in order to improve the decision function that recognises disruptive behaviours. By considering the LID data together with the LM signal at training time, a new SVM model with an RBF kernel is generated for the real-time classification of disruptive/non-disruptive plasma behaviours. It should be emphasised that the prediction is carried out with the LID signal as the only input and without any LM data at prediction time. In this case, after applying the model to all the dataset discharges, the success rate is 94.12% and the false alarm rate drops to 10.27%. It is important to note that the models can be retrained in an adaptive way after missed alarms or false alarms.
        To our knowledge, this work applies privileged information in the terms described here to disruption prediction for the first time. Performance with privileged information (in this case, the LM) is better than performance with only the LID signal, and these results open an important research line in fusion, not only for disruption prediction but also for the development of any data-driven model based on machine learning.

        Speaker: Prof. Jesús Vega (CIEMAT)
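
        A hedged sketch of the privileged-information idea is given below using "generalized distillation", a simpler relative of Vapnik's SVM+ (which is what the abstract actually uses): a teacher trained on the privileged locked-mode feature produces confidences that weight the training of a student seeing only the density feature. The data are synthetic.

        ```python
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(6)
        n = 600
        lm = rng.normal(0, 1, n)                    # privileged: locked-mode amplitude
        labels = (lm > 0.8).astype(int)             # disruptive if LM is large
        lid = 0.6 * lm + 0.8 * rng.normal(0, 1, n)  # available: noisy density proxy
        X_lid, X_lm = lid.reshape(-1, 1), lm.reshape(-1, 1)

        tr, te = slice(0, 400), slice(400, None)
        teacher = SVC(kernel='rbf', probability=True).fit(X_lm[tr], labels[tr])
        soft = teacher.predict_proba(X_lm[tr])[:, 1]  # teacher confidence per shot

        # Student sees only LID; teacher confidence enters as sample weights,
        # down-weighting shots the privileged signal says are ambiguous.
        weights = np.abs(soft - 0.5) * 2.0
        student = SVC(kernel='rbf').fit(X_lid[tr], labels[tr], sample_weight=weights)
        plain = SVC(kernel='rbf').fit(X_lid[tr], labels[tr])

        print("student accuracy:", student.score(X_lid[te], labels[te]))
        print("plain accuracy:  ", plain.score(X_lid[te], labels[te]))
        ```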
    • 10:10 10:30
      Coffee Break 20m
    • 10:30 12:00
      TIV/2 Analysis of time series, images and video: detection, identification and prediction
      Conveners: Prof. Jesús Vega (CIEMAT), Andrea Murari (Consorzio RFX)
      • 10:30
        A novel method to find jumps in waveforms 30m

        Because of electromagnetic interference from the environment, vacuum chamber potential fluctuations or various other causes, plasma diagnostic signal waveforms often contain jumps, which greatly complicate data analysis. Some methods already exist to detect jumps, such as detecting the ratio of change over the RMS, or comparing short-time Fourier transform spectrograms. However, these methods are not intelligent enough and require several key parameters to be set manually.
        This poster proposes a jump detection method based on image recognition, which trains a neural network on a certain amount of labeled data and thus automatically finds jumps in diagnostic signal waveforms without manual parameters, with a fairly satisfactory level of accuracy.

        Speaker: Yi Tan (Tsinghua University)
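
        For reference, the classical change-over-RMS baseline that the poster improves on can be written in a few lines; the window length and the factor 8 are exactly the kind of manually chosen parameters the neural-network approach aims to remove.

        ```python
        import numpy as np

        rng = np.random.default_rng(7)
        sig = np.cumsum(rng.normal(0, 0.01, 5000))   # slowly drifting signal
        sig[2000:] += 0.5                            # an interference jump

        diff = np.abs(np.diff(sig))
        win = 100
        rms = np.sqrt(np.convolve(diff ** 2, np.ones(win) / win, mode='same'))
        jumps = np.flatnonzero(diff > 8.0 * np.maximum(rms, 1e-12))
        print("jump candidates at samples:", jumps)
        ```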
      • 11:00
        Phase tracking with Hilbert transform and nonlinear mode-mode coupling analysis on HL-2A and Heliotron J 30m

        Recently, in energetic-particle physics studies, nonlinear mode-mode interaction has been observed to play a key role in the production of new modes. Bispectral analysis is the standard way to identify such nonlinear interactions, but it requires a large number of statistical ensembles. In this presentation we propose to use the Hilbert transform to detect nonlinear mode-mode interaction. The Hilbert transform directly yields the phase of a coherent mode. If the phase difference between two coherent modes changes randomly, the two modes are independent of each other and there is no nonlinear interaction between them. If the phase difference remains constant, a nonlinear interaction exists.
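        A minimal sketch of this phase-tracking step on synthetic three-wave-coupled signals (all frequencies, amplitudes and the sampling rate are illustrative) is shown below: the Hilbert transform yields each instantaneous phase, and a locked residual between the sideband phase difference and the VLF phase indicates coupling.

            import numpy as np
            from scipy.signal import hilbert

            fs = 1e6                                  # sampling rate [Hz], assumed
            t = np.arange(0, 0.02, 1 / fs)
            phi2 = 2 * np.pi * 50e3 * t               # sideband 2 (50 kHz)
            phi_vlf = 2 * np.pi * 2e3 * t             # VLF mode (2 kHz)
            band2 = np.cos(phi2)
            band1 = np.cos(phi2 + phi_vlf + 0.4)      # sideband 1, coupled to the VLF
            vlf = np.cos(phi_vlf)

            # Instantaneous phases from the analytic signal
            p1 = np.unwrap(np.angle(hilbert(band1)))
            p2 = np.unwrap(np.angle(hilbert(band2)))
            pv = np.unwrap(np.angle(hilbert(vlf)))

            # A constant residual indicates nonlinear coupling; a randomly
            # drifting residual indicates independent modes.
            residual = (p1 - p2) - pv
            print("std of residual phase [rad]:", np.std(residual))
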
        Two examples of detecting nonlinear mode-mode interaction with the Hilbert transform are given. The first is the production of new low-frequency sidebands through the nonlinear interaction between a beam-driven low-frequency mode (LFM) [1] and a very-low-frequency (VLF) mode, as shown in Fig. 1. The second is the production of new TAE sidebands through the nonlinear interaction between a toroidal Alfvén eigenmode (TAE) and a tearing mode (TM) in the HL-2A tokamak [2,3].
        Fig. 1: The phase difference of the two LFM sidebands (Band1 - Band2) and the phase of the VLF mode are roughly synchronized.

        References
        [1] L.G. Zang et al., Nucl. Fusion 59, 056001 (2019).
        [2] P.W. Shi et al., Nucl. Fusion 59, 086001 (2019).
        [3] L.G. Zang et al., Nucl. Fusion 61, 026024 (2021).

        Speaker: Linge Zang (Southwestern Institute of Physics)
      • 11:30
        Predictive Maintenance in Fusion Devices With an Application to the Ohmic Heating Circuit at JET 30m

        Fusion power plants will need to run reliably in order to maximize the power output and avoid delays due to unscheduled maintenance or damage to components. Predictive maintenance is an approach that can contribute to this requirement through periodic or continuous monitoring of the condition of equipment. The goal is to predict when the equipment will require maintenance and, ultimately, to provide an estimate of the remaining useful lifetime of devices and components. This allows better maintenance scheduling and can help avoid damage due to equipment failure. In this work, we introduce a number of statistical techniques that can be employed for predictive maintenance in fusion devices. We then present an application to circuit breakers in the JET Ohmic heating circuit, which powers the central solenoid. These circuit breakers are an important cause of failed pulses at JET, because they endure physical fatigue due to the large currents required for plasma formation. Using voltage and current data in the circuit, we employ on-line change point detection algorithms to determine whether the circuit breakers operate in a healthy or an anomalous state. In general, this approach can provide an advance warning of the deteriorating condition of subsystems and components in fusion devices, allowing maintenance as needed and preventing asset damage.
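        As an illustration of this class of techniques, the following minimal sketch implements a two-sided CUSUM change-point detector on a synthetic monitoring signal; the drift and threshold values are purely illustrative, not those used in the JET analysis.

            import numpy as np

            def cusum(stream, target, drift=0.5, threshold=8.0):
                # Two-sided CUSUM: accumulate deviations from the healthy mean
                # and flag a change when either statistic crosses the threshold.
                gp = gn = 0.0
                for i, x in enumerate(stream):
                    e = x - target
                    gp = max(0.0, gp + e - drift)   # upward shifts
                    gn = max(0.0, gn - e - drift)   # downward shifts
                    if gp > threshold or gn > threshold:
                        return i                    # index of detected change
                return None

            rng = np.random.default_rng(1)
            healthy = rng.normal(0.0, 1.0, 500)     # e.g. normalized breaker current
            degraded = rng.normal(2.0, 1.0, 500)    # mean shift once fatigue sets in
            print(cusum(np.concatenate([healthy, degraded]), target=0.0))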

        Speaker: Leonardo Caputo (Ghent University)
    • 12:00 13:30
      Lunch Break 1h 30m
    • 13:30 15:10
      ADV/1 Advances in data science, probabilistic methods and machine learning: Session 8
      Conveners: Michael Churchill, Geert Verdoolaege (Ghent University)
      • 13:30
        Multi-Scale Recurrent Transformer model for Predicting KSTAR PF Superconducting Coil Temperature 40m

        Superconducting magnets play a critical role in superconducting nuclear fusion devices. As the temperature of a superconducting magnet increases with changing current, it is important to predict the temperature in order to prevent an excessive temperature rise of the coils and to operate them efficiently. We present the Multi-Scale Recurrent Transformer (MSR-Transformer), a deep learning model for forecasting the temperature of superconducting coils. Our system recurrently predicts future temperature data of the superconducting coil using previous data obtained from a multi-scale KSTAR PF coil dataset and latent data calculated at the previous time step. We apply a multi-scale temperature subsampling approach so that the model learns both the fine details and the overall structure of the temperature data effectively. We demonstrate the effectiveness of our model through experiments and comparisons with existing models.
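        The multi-scale subsampling idea can be sketched as follows on toy data (window sizes and scales are hypothetical): the same temperature history is presented at several temporal resolutions, so that a model can attend to fine detail and long-range structure simultaneously.

            import numpy as np

            def multi_scale_windows(series, window=64, scales=(1, 4, 16)):
                # For each scale s, take the last window*s samples with stride s,
                # giving fixed-length views at increasingly coarse resolution.
                return {s: series[-window * s::s] for s in scales}

            rng = np.random.default_rng(2)
            temp = 4.5 + 1e-3 * np.cumsum(rng.normal(size=4096))  # toy coil temperature [K]
            for s, w in multi_scale_windows(temp).items():
                print(f"scale {s}: {w.size} samples spanning {w.size * s} raw points")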

        Speaker: Dr Kwon Giil (Korea Institute of Fusion Energy)
      • 14:10
        Data Analysis of Quasi-Two-Dimensional Nonlinear Interactions in Avalanchelike Phenomena in HL-2A Plasmas 30m

        Extensive studies on the regulation of the plasma profile by fluctuating modes may shed light on plasma control techniques for reducing impurity content and improving plasma performance [1]. In this report, we present the processing of radially distributed BES measurements for the two-dimensional mapping of the avalanche structure, and the related impurity analysis, in HL-2A neutral-beam-heated H-mode plasmas [2].
        To gain a deeper understanding of avalanche generation, cross-correlation function (CCF) analysis has been applied to the radially distributed BES channels, revealing a radially elongated structure. In addition, we have investigated possible nonlinear interactions among the various turbulence components. Bispectral analysis of the density and magnetic fluctuation data demonstrates that the avalanche gains energy from, and modulates, the ambient turbulence via nonlinear interaction. The significant coupling around f2 ≈ 150 kHz indicates that a nonlinear three-wave interaction is responsible for the avalanche generation.
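        The core of the CCF step can be sketched on synthetic data as follows (a shared structure seen by two channels with a fixed delay); the lag of the cross-correlation peak estimates the propagation delay between the two measurement volumes.

            import numpy as np

            rng = np.random.default_rng(5)
            n, lag_true = 5000, 25
            common = rng.normal(size=n + lag_true)          # shared propagating structure
            ch1 = common[lag_true:] + 0.3 * rng.normal(size=n)
            ch2 = common[:n] + 0.3 * rng.normal(size=n)     # sees the structure later

            # Positive peak lag: ch2 observes the structure lag_true samples after ch1.
            ccf = np.correlate(ch2 - ch2.mean(), ch1 - ch1.mean(), mode="full")
            lag = np.argmax(ccf) - (n - 1)
            print("estimated delay [samples]:", lag)
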
        Furthermore, the impurity behaviour during the avalanche is investigated by numerically simulating the impurity transport process. The impurity transport is calculated with the STRAHL code and fitted to experimental measurements, including the C6+ profile derived from the multichannel CXRS data using the CHEAP code and the time evolution of Fe XVI measured by a VUV spectrometer. The impurity data analysis suggests that the avalanche provides a transport channel that helps avoid heavy-impurity accumulation.
        References
        [1] E.J. Doyle et al., Chapter 2: Plasma confinement and transport, Nucl. Fusion 47, S18 (2007).
        [2] T.F. Sun, Yi Liu et al., Nucl. Fusion 61, 036020 (2021).

        Speaker: Yi Liu
      • 14:40
        Confinement scaling with machine size in the updated ITPA global H-Mode confinement database 30m

        Empirical scaling of the thermal energy confinement time $\tau_{E_{th}}$ in tokamak H-mode plasmas, determined from multi-machine databases, remains a convenient tool for studying the dependencies of $\tau_{E_{th}}$ and for predicting confinement based on experimental data. Based on regression analysis, the approach is essentially data-driven, but this does not prevent the incorporation of physics information to constrain the parameters of the regression model or to guide model improvements beyond the simple power law. Recently, the multi-machine ITPA global H-mode confinement database was updated with additional data reflecting the ITER operational conditions, as well as measurements from devices with fully metallic walls. This has led to the new ITPA20 scalings, updating the IPB98(y,2) law that is often used as a standard for energy confinement scaling in ELMy H-mode plasmas. Several dependencies have been revealed that differ from those in the '98 scaling, as well as considerable uncertainties on some of the parameters when model uncertainty is taken into account. One of the notably different dependencies is a considerably weaker scaling with the device's major radius, with an exponent $\alpha_R$ reduced from quadratic to almost linear. The present work aims at revealing the cause of this reduced size scaling. Using optimization and clustering techniques, a subset of the database has been identified that exhibits very weak size scaling ($\alpha_R$ = 0.377), hence contributing maximally to the reduced size dependence seen in the overall data set. This subset is localized in the dimensionless space governed by normalized gyroradius, collisionality and pressure, as confirmed by random forest classification. Interestingly, in this space, the operational point of future devices like ITER is situated in a region of higher size dependence ($\alpha_R$ = 1.647). This may at least partly account for the significantly higher ITER confinement time predicted by the scaling with elevated $\alpha_R$ ($\tau_{E_{th}}$ = 2.95 s) compared to that obtained from the anomalously low $\alpha_R$ regression ($\tau_{E_{th}}$ = 1.58 s). Like ITER, the SPARC experiment also lies in the region of higher size dependence, although closer to the cluster boundary.
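        Fitting such a power law reduces to linear least squares in log space; a minimal sketch on synthetic data (toy variables and exponents) is given below.

            import numpy as np

            # Toy confinement data: tau ~ C * R^1.5 * Ip^0.9 with log-normal noise
            rng = np.random.default_rng(4)
            R = rng.uniform(0.5, 3.0, 200)          # major radius [m]
            Ip = rng.uniform(0.5, 5.0, 200)         # plasma current [MA]
            tau = 0.05 * R**1.5 * Ip**0.9 * np.exp(0.05 * rng.normal(size=200))

            # Linear regression in log space recovers the exponents
            X = np.column_stack([np.ones(R.size), np.log(R), np.log(Ip)])
            coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
            print("alpha_R =", coef[1], "alpha_Ip =", coef[2])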

        Speaker: Joseph Hall
    • 15:10 15:30
      Coffee Break 20m
    • 15:30 17:30
      DB/2 Information retrieval, dimensionality reduction and visualisation in fusion databases: Session 9
      Conveners: Joshua Stillerman (MIT Plasma Science and Fusion Center), Min Xu (Southwestern Institute of Physics)
      • 15:30
        Optimizing tokamak operations using Machine learning methods as a service 30m

        Tokamak operations and fusion plasma research generate vast and complex datasets, presenting various challenges for data analysis and interpretation.
        The development of cutting-edge tools based on Artificial Intelligence (AI) and machine learning (ML) algorithms can significantly accelerate fusion plasma research and optimize tokamak operations.
        We discuss the potential implementations, challenges, and limitations of our ongoing development of a Local Artificial Intelligence-as-a-service (L-AI-aaS) platform.
        Our platform aims to streamline the integration of data from the diverse sources generated during WEST tokamak experiments and to extract the information they contain, leveraging advanced AI algorithms for in-depth analysis and the rapid extraction of valuable insights from experimental data. The trained AI models and services give researchers increased visibility and support experimental decision-making during fusion plasma operation.
        L-AI-aaS leverages generative AI and ML techniques to provide services that optimize data preprocessing, discover structure and previously unseen insights, and report anomalies. Such automation enables a comprehensive analysis of experimental data, saves researchers' time and frees them for higher-level research tasks.
        As a scalable solution, L-AI-aaS has the potential to become a valuable tool for fusion research projects, driving further innovation and breakthroughs in the field.

        Speaker: Mrs Feda ALMUHISEN (CEA)
      • 16:00
        IMAS simulation management and remote data access for ITER 30m

        While the Integrated Modelling & Analysis Suite (IMAS [1]) is being developed further, the number of simulations available in IMAS is increasing, both at ITER and within the Members. As a result, tools are needed to manage, curate and expose the large IMAS simulation databases to potential users. Ad-hoc solutions have been developed in some cases, such as the ITER scenario database (2500+ simulations) and its associated Python scripts, which register some meta-data (stored in a yaml file associated with each simulation) and then browse or query this recorded information to identify the simulations of interest. This solution, while simple to set up, is not generic (the scripts are tightly bound to ITER, while IMAS allows simulations of any machine), is not meant to scale to very large simulation counts (queries need to go through all the yaml files) and lacks important features.
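        The ad-hoc pattern described above can be sketched as follows (the directory layout, meta-data keys and query values are hypothetical, and PyYAML is assumed); each query rescans every yaml file, which is exactly what prevents this approach from scaling.

            from pathlib import Path
            import yaml

            def find_simulations(root, key, value):
                # Scan one metadata.yaml per simulation directory and return
                # the directories whose recorded meta-data matches the query.
                hits = []
                for f in Path(root).glob("*/metadata.yaml"):
                    meta = yaml.safe_load(f.read_text())
                    if meta.get(key) == value:
                        hits.append(f.parent.name)
                return hits

            print(find_simulations("/path/to/scenario_db", "code", "DINA"))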

        In order to address this need for a general-purpose tool for managing simulations in IMAS, a simulation data management tool (SimDB) has been developed. Using SimDB, simulations are ingested along with meta-data recording the input, the output and information about the code, which captures the simulation provenance and ensures reproducibility. Each simulation is given a globally unique identifier (UUID) and can be pushed to one or more remote archives, where the data can be validated and made available to other users. The ingested simulations can be queried via a command-line interface or via a web frontend using a flexible query syntax. A SimDB catalogue has been set up at ITER, with meta-data from the Dataset Description and Summary IDSs made available and queryable, making the SimDB meta-data directly interoperable with other IMAS-based catalogues.

        In addition to SimDB, the IMAS access-layer has been extended to allow remote access to IMAS data using the simulation URIs returned by SimDB queries. Based on the UDA client-server solution, this will allow secure authentication of users, full or partial access to the IDS data, and controllable batching of requests to improve performance depending on the capabilities of the network. The public SimDB servers, the SimDB unique identifier and the remote-access URI provide a method to unambiguously refer to IMAS data in publications, improving the FAIRness of IMAS.

        This poster will present the new ITER simulation management and remote access solutions and detail how they will facilitate FAIR data access to the ITER simulation catalogue.

        [1] F. Imbeaux et al., Nucl. Fusion 55 (2015) 123006.

        Speaker: Jonathan Hollocombe (UKAEA)
      • 16:30
        IODA: a new federated web platform for collaboration and sharing of data analysis resources in Fusion Data Research 30m

        The analysis of data from fusion devices is a common and important task in Fusion Data Research (FDR), one which poses several practical challenges to scientists.
        First, experimental programs generate an enormous amount of data, which is hosted by dedicated institutions. Accessing this raw data requires authorization and either a very fast direct connection or enormous local storage capacity. Second, the analysis of this data involves applying many standard data analysis and visualization libraries, dedicated domain-specific routines, and newly programmed, experimental code. Scientists constantly need to remain fluent in a great number of applicable software routines, libraries and platforms, which are often under development, sometimes not yet well documented, and perhaps implemented in different programming languages. Finally, the complexity of the computations and the size of the data may imply long processing times unless high computational power is used. This may involve parallel computation, highly specialized software, or dedicated hardware (such as GPUs or FPGAs), the installation and use of which are far from simple.
        These three peculiarities make FDR work laborious, difficult to communicate, and even more difficult to reproduce. The FDR community needs a common platform for efficient work in the discipline: one that, ideally, establishes an open, federated way for scientists to share data, validated analysis software, and computing equipment, while respecting scientists' freedom to choose from existing tools or to add newly developed ones. Such a platform would not only facilitate daily FDR work, but would also make the communication, reproducibility and replicability of results easier.
        This paper presents IODA (an acronym for Input-Output Data Analysis), a new client-server Web platform that aims to provide a viable solution to the problems cited. IODA clients run on any Web-enabled device (PCs, laptops, and tablets), allowing scientists to interactively design a directed graph representing a given access to and analysis of remote, distributed data, and the visualisation of the results. The client then sends the graph to the main server for execution, which a cloud of federated computing servers carries out cooperatively, returning the results of the computations to the main server for the client to analyse.
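        This execution model can be sketched in a few lines (node names and operations are hypothetical): the analysis is a directed graph whose nodes are evaluated in dependency order, with results cached so that shared inputs are computed only once.

            # Toy directed analysis graph: each node lists its dependencies.
            graph = {"load": [], "filter": ["load"], "fft": ["filter"], "plot": ["fft"]}
            ops = {
                "load": lambda: list(range(8)),
                "filter": lambda x: [v for v in x if v % 2 == 0],
                "fft": lambda x: sorted(x, reverse=True),   # stand-in computation
                "plot": lambda x: print("result:", x),
            }

            done = {}
            def run(node):
                # Depth-first evaluation with memoisation of node results.
                if node not in done:
                    args = [run(dep) for dep in graph[node]]
                    done[node] = ops[node](*args)
                return done[node]

            run("plot")
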
        The joint work of this ecosystem, transparent to the user, effectively provides scientists with i) simplified, secure access to distributed data, ii) verified software routines for analysis and visualization, iii) access to network-available specialized computing hardware, and iv) the capability to introduce the user's own code into the analysis.
        We will show the user interface of the platform’s client, describe its server-side architecture (the real heart of the federated data analysis capabilities), and list current and future platform components that help address the cited FDR community needs.

        Speaker: Francisco Esquembre (Universidad de Murcia)
      • 17:00
        The Design of NBI Experimental Data Processing System 30m

        As a necessary auxiliary heating method, Neutral Beam Injection (NBI) has high heating efficiency and a clear physical mechanism, meets various needs of fusion experiments, and will therefore remain an indispensable key technology in future fusion research. NBI requires parameter tuning and optimization before it can formally be put into operation, so its operational data are of critical importance to experimenters. Better control, data processing and feedback methods are thus of great benefit to the feedback of experimental data and the analysis of experimental results. Inspired by the COntrol, Data Access and Communication (CODAC) system, this paper proposes the design and implementation of a CODAC-based NBI experimental data processing system, driven by the actual operational requirements. For the system design, a three-tier distributed architecture of "task processing", "storage processing" and "interaction processing" is constructed on top of the Experimental Physics and Industrial Control System (EPICS). For data exchange, a memory-to-memory data processing scheme is proposed based on a double-buffering algorithm and MMAP technology. In addition, considering the scalability and compatibility of the system, an EPICS-based device model is proposed to unify the device abstraction format and standardize subsequent device development. For transmission, a data transmission structure for high-speed sampling is proposed based on the TCP/IP protocol. Meanwhile, considering possible failure states of the system and the storage limitations of EPICS, the system adopts a hot-standby dual-computer service model, so that it can provide data storage and guarantee services while also offering a platform for system status monitoring and remote services. For system operation, a concurrency-handling mechanism is proposed to deal with the concurrency exceptions that may arise from simultaneous local and remote operation. In tests on the NBI testbed, the system showed a significant improvement over the old method in data processing: with a buffer size of 1 MB, the data processing efficiency on the NBI testbed is highest, about five times the previous data processing rate. The system will provide more real-time data processing for NBI experiments and will be used to cope with more complex pulse experiments in the future.
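        The memory-to-memory exchange mentioned above rests on double buffering; a minimal sketch of the pattern (simplified, without the MMAP layer) is given below: acquisition fills one buffer while the other is handed over for storage.

            import threading

            class DoubleBuffer:
                def __init__(self, size):
                    self.buffers = [[], []]   # one filling, one draining
                    self.active = 0
                    self.size = size
                    self.lock = threading.Lock()

                def write(self, sample):
                    # Append to the active buffer; on overflow, swap buffers and
                    # hand the full one to the consumer.
                    with self.lock:
                        self.buffers[self.active].append(sample)
                        if len(self.buffers[self.active]) >= self.size:
                            full, self.active = self.active, 1 - self.active
                            return self.buffers[full]
                    return None

            buf = DoubleBuffer(size=4)
            for i in range(10):
                out = buf.write(i)
                if out is not None:
                    print("flush to storage:", out)
                    out.clear()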

        Speaker: Dr Yu Gu (University of Science and Technology of China)
    • 19:00 21:00
      Gala Dinner
    • 09:00 10:10
      UNC/1 Uncertainty propagation, verification and validation in modelling codes and data fusion: Session 10
      Conveners: Simon Pinches (ITER Organization), Keisuke Fujii (Kyoto University), Masayuki YOKOYAMA (National Institute for Fusion Science)
      • 09:00
        Sensitivity-based Uncertainty Quantification for plasma edge codes: status and challenges 40m

        Plasma edge codes like SOLPS-ITER [1] are currently the main tools for interpreting exhaust scenarios in existing experiments, and for designing next-step fusion reactors like ITER and DEMO. These codes typically couple a multi-fluid model for the plasma with a kinetic model for the neutral particles. While the former is implemented in a (deterministic) Finite Volume setting, the latter is solved with (stochastic) Monte Carlo (MC) methods, making simulations computationally expensive. Moreover, several sources of uncertainty are present throughout the complex simulation chain: starting from the magnetic equilibrium reconstruction, which forms the basis for the plasma mesh; continuing with unresolved physical phenomena requiring closure terms and related parameters, such as anomalous transport coefficients; and finally, a plethora of uncertain input parameters, such as atomic physics rates or boundary conditions. Uncertainty quantification (UQ) for model validation with plasma edge codes therefore appears to be a challenging task, presently precluded by the high computational costs.

        In this contribution, we show how adjoint sensitivity analysis enables UQ for plasma edge codes, discussing the main achievements so far and the remaining issues and challenges. The adjoint sensitivity analysis is based on Algorithmic Differentiation (AD), which provides floating-point accurate sensitivities for complex codes in a semi-automatic way. These sensitivities are then fed to gradient-based optimization methods, which are employed to solve the backward UQ problem, also known as parameter estimation or model calibration. Casting this estimation in a Bayesian Maximum A Posteriori (MAP) setting, we consistently account for information and uncertainties from different diagnostics [2]. The main limitation of this framework is that currently only an approximate fluid neutral model can be employed; the more accurate MC model requires dealing with statistical noise in the sensitivity computation. We show that this can be accommodated in a discrete adjoint setting [3] and report on first results employing AD. Finally, we show how a combination of finite differences and adjoint sensitivities, also known as an in-parts adjoint technique, allows sensitivity propagation throughout the whole simulation chain, including the magnetic equilibrium reconstruction [4].
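        The essence of the adjoint approach can be illustrated on a toy steady-state model (the matrices and the single parameter below are purely illustrative): the gradient of a scalar output requires only one extra linear solve, independent of the number of parameters, and can be checked against finite differences.

            import numpy as np

            def solve_state(p):
                # Toy discretized model A(p) u = b
                A = np.array([[2.0 + p, -1.0], [-1.0, 2.0]])
                return A, np.linalg.solve(A, np.array([1.0, 0.0]))

            p = 0.3
            g = np.array([1.0, 1.0])                 # scalar output J = g . u
            A, u = solve_state(p)
            lam = np.linalg.solve(A.T, g)            # single adjoint solve
            dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])
            dJ_dp = -lam @ (dA_dp @ u)               # adjoint gradient

            # Finite-difference verification
            eps = 1e-6
            _, u_eps = solve_state(p + eps)
            print(dJ_dp, (g @ u_eps - g @ u) / eps)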

        [1] S. Wiesen et al. (2015), J. Nucl. Mater. 463, 480-484.
        [2] S. Carli et al. (2021), Contrib. Plasma Phys. 62 (5-6), e202100184.
        [3] W. Dekeyser et al. (2018), Contrib. Plasma Phys. 58, 643-651.
        [4] M. Blommaert et al. (2017), Nuclear Materials and Energy 12, 1049-1054.

        Speaker: Stefano Carli (KU Leuven)
      • 09:40
        Adoption and Validation of IMAS Data 30m

        The suite of tools being developed to support the preparations for ITER operation, including data interpretation and analysis, and the refinement of the ITER Research Plan, is underpinned by a common data representation that forms the basis around which the Integrated Modelling & Analysis Suite (IMAS) is built and which strives to make fusion data more FAIR.

        Adopting a common standard for the representation of data allows tools to be interoperable and for them to be tested and validated on present day experimental data, with the aim of accelerating the transition from initial testing to production-ready applications that would otherwise have to wait for the start of ITER operations and the production of ITER data. The Data Model itself is described by a Data Dictionary that follows a well-defined life-cycle and evolves in response to community needs, with most changes arising from its application to new Use Cases while improving data reusability.

        In this presentation the mapping of experimental data to IMAS Interface Data Structures (IDSs), both dynamic and static (so-called Machine Description metadata), as well as their accessibility, will be discussed as a prerequisite for the validation of tools both directly against experimental data and also in comparison with existing tools used on today’s research facilities.

        Efforts have already started on many devices to map their experimental data into IDSs, including ASDEX Upgrade, DIII-D, EAST, JET, KSTAR, MAST-U and TCV, whilst on WEST the plasma reconstruction chain [1] is now wholly based on the IMAS data representation.

        In addition to the validation of software tools and workflows, the populated data structures can themselves be validated using a recently developed tool that forms part of the IDStools package. This uses extensible rules to validate against generic physics and data constraints, as well as Use Case-specific rules, e.g. for a particular device such as ITER, or for databases dedicated to specific events such as disruptions.
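        The flavour of such rule-based validation is sketched below on a toy data structure; the dictionary layout and the rules are hypothetical and do not reflect the actual IDStools rule syntax.

            # Toy IDS-like structure with one deliberately invalid value
            ids = {"core_profiles": {"electrons": {"temperature_ev": [800.0, 850.0, -5.0]}}}

            rules = [
                ("electron temperature is non-negative",
                 lambda d: all(t >= 0 for t in d["core_profiles"]["electrons"]["temperature_ev"])),
                ("electron temperature below 100 keV",
                 lambda d: all(t < 1e5 for t in d["core_profiles"]["electrons"]["temperature_ev"])),
            ]

            for name, check in rules:
                print("PASS" if check(ids) else "FAIL", "-", name)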

        [1] L. Fluery et al., WEST plasma reconstruction chain and IMAS related tools, SOFT 2020, Croatia

        Speaker: Simon Pinches (ITER Organization)
    • 10:10 10:30
      Coffee Break 20m
    • 10:30 12:00
      TIV/3 Analysis of time series, images and video: detection, identification and prediction: Session 11
      Conveners: Prof. Jesús Vega (CIEMAT), Andrea Murari (Consorzio RFX)
      • 10:30
        Real-time implementation of intelligent data processing applications: gamma/neutron discrimination and hot spot identification 30m

        This contribution presents the methodology used for implementing two intelligent data processing applications using real-time heterogeneous platforms.
        • The first application discriminates between gamma and neutron pulses acquired with a scintillator, using deep learning techniques implemented with 1-D convolutional neural networks (CNNs) and high-sampling-rate analog-to-digital converters (ADCs). The selected architecture was implemented in the IntelFPGA OpenCL SDK environment and evaluated for performance and resource utilization.
        • The second application uses the Connected Components Labeling algorithm to detect hot spots in images acquired with a high-speed camera. Heterogeneous computing techniques based on the OpenCL standard were applied to achieve real-time performance on a Micro Telecommunications Computing Architecture (MTCA) platform.
        Both applications use CPU, GPU and FPGA computing capabilities to process the acquired data and have been integrated with the Nominal Device Support (NDS) model developed by ITER. The results are comparable to state-of-the-art solutions while requiring a much shorter development cycle: with high-level programming languages such as C/C++, specialized algorithms can be evaluated quickly. The proposed solutions balance the computational load between a field-programmable gate array (FPGA) and a graphical processing unit (GPU), and the algorithms are optimized to exploit the specific characteristics of each platform. The neutron discrimination algorithm achieved real-time discrimination of up to 79k events, while the hot-spot detection algorithm processed up to 3000 frames per second. Using the OpenCL programming framework has been beneficial because the same algorithm can be evaluated on different hardware platforms, so the developer may select the platform that delivers the best performance.
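        The labeling step itself is classical; a minimal CPU sketch on a synthetic frame is shown below (threshold and frame contents are illustrative), whereas the implementation described above targets GPU/FPGA execution via OpenCL.

            import numpy as np
            from scipy import ndimage

            rng = np.random.default_rng(6)
            frame = rng.random((64, 64))             # stand-in for a camera frame
            frame[20:24, 30:35] += 2.0               # synthetic hot spot

            mask = frame > 1.5                       # threshold the image
            labels, n = ndimage.label(mask)          # connected components labeling
            centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))
            print(n, centers)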

        Speaker: Prof. Mariano Ruiz (Universidad Politecnica de Madrid)
      • 11:00
        Detection of Thermal Events Using Machine Learning for the Feedback Control of Thermal Loads in Wendelstein 7-X 30m

        Wendelstein 7-X (W7-X) is the most advanced drift-optimized stellarator, designed to demonstrate the feasibility of the stellarator concept for a future fusion power plant with steady-state operation. Its primary goal is to prove quasi-steady-state operation with plasmas of up to 30 minutes in a reactor-relevant parameter regime. Achieving high-performance operation in W7-X and other fusion devices, such as ITER, necessitates an effective feedback control system for thermal loads, to prevent unnecessary plasma interruptions and ensure long-plasma operation. The feedback control system requires a high-level understanding of the thermal events, their type, cause, and risk, which is best achieved through advanced computer vision and machine learning techniques.

        The development of an effective thermal load feedback control system for W7-X is slowed by challenges related to data generation and annotation in fusion. The complex and dynamic nature of thermal events, such as strike-lines, leading edges, hot spots, fast particle losses, surface layers, and reflections, makes it difficult to generate accurate and representative data for training machine learning models. Additionally, manual annotation of these events is a time-consuming and labor-intensive process, further complicating the development of a reliable and efficient dataset.

        We propose an iterative strategy for thermal event detection using machine learning techniques. Our approach begins with the Max-tree algorithm, employed for semi-automatically annotating a small dataset, which facilitates hierarchical segmentation of thermal events while preserving the inclusion relationships among them. We then proceed to weakly supervised training of models for panoptic segmentation, utilizing the Mask R-CNN architecture with data from W7-X and WEST. Ultimately, our aim is to fine-tune large foundational models for segmentation and classification and implement transfer learning with synthetic data for accurate zero-shot thermal event detection in new devices, such as ITER. This approach ensures protection from day one, paving the way for the successful operation of future fusion power plants.

        Speaker: Aleix Puig Sitjes (Max-Planck-Institut für Plasmaphysik)
      • 11:30
        TIME SERIES BASED INDICATORS FOR FUSION PLASMA DISRUPTIONS DETECTION 30m

        A series of methods, based on time series analysis of the main plasma diagnostic signals, is used to determine when significant changes occur in the plasma dynamics of the tokamak configuration, indicating the onset of drifts towards a disruption. Dynamical indicators, such as the embedding dimension, the 0-1 chaos test and recurrence-plot measures, as well as informational criteria, such as the information impulse function quantifying information without entropy, have been tested to detect the time intervals in which the plasma dynamics drifts towards situations likely to lead to disruptions. The methods allow a good estimation of the intervals in which the anomalous behaviours manifest themselves, which is very useful for building significantly more appropriate training sets for various kinds of disruption predictors. As they are based on completely different mathematical principles, they provide robust information about these intervals. Some of the developed methods may also be deployed as stand-alone predictors for real-time use.
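        As an example, one of the dynamical indicators mentioned, the 0-1 test for chaos, is sketched below in its correlation variant on toy series (the parameter c and the series lengths are illustrative): K near 1 indicates irregular dynamics, K near 0 regular dynamics.

            import numpy as np

            def chaos01(x, c=1.7):
                # 0-1 test (correlation variant): diffusive growth of the
                # translation variables (p, q) yields K close to 1.
                n = np.arange(1, len(x) + 1)
                p = np.cumsum(x * np.cos(c * n))
                q = np.cumsum(x * np.sin(c * n))
                lags = np.arange(1, len(x) // 10)
                M = np.array([np.mean((p[j:] - p[:-j])**2 + (q[j:] - q[:-j])**2)
                              for j in lags])
                return np.corrcoef(lags, M)[0, 1]

            x = np.empty(3000)                       # chaotic logistic map
            x[0] = 0.3
            for i in range(x.size - 1):
                x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
            print("logistic map:", round(chaos01(x), 2))
            print("regular sine:", round(chaos01(np.sin(0.5 * np.arange(3000))), 2))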

        Acknowledgements: This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 - EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
        One of the authors (T.C.) acknowledges also the support of the Romanian National Core Program LAPLAS VII – no. 30N/2023.

        Speaker: Teddy CRACIUNESCU (National Institute for Lasers, Plasma and Radiation Physics INFLPR, Magurele, Romania)
    • 12:00 13:30
      Lunch Break 1h 30m
    • 13:30 15:40
      INV/1 Inverse problems: Session 12
      Conveners: Geert Verdoolaege (Ghent University), Didier Mazon (CEA Cadarache), Andrea Murari (Consorzio RFX)
      • 13:30
        AN INTRODUCTION TO INVERSE PROBLEMS IN FUSION 30m

        In science, an inverse problem can be defined in full generality as the task of inferring from a set of observations the factors that generated them. Such problems are called inverse because they derive causes from their effects; they can therefore be considered the opposite of forward problems, whose objective is to calculate the effects of given causes. Many data-centric problems in fusion are 'inverse' in nature, i.e. they involve extracting unknown parameters and causes from observations. First, many fundamental measurements, being based on the plasma's natural emission, require some form of inversion to be interpreted and to provide the required physical information: measurements of the magnetic topology, tomography, video and nuclear detectors are just some examples. A second class of activities, often performed by plasma scientists, consists of relating physical quantities to the observations forming experimental databases; linear and nonlinear fitting, for example to identify scaling laws, are cases in point. These families of tasks are often addressed separately but, being instances of inverse problems, they have a lot in common. Indeed, both activities require solving mathematically ill-posed inversions and therefore face the same types of issues: estimating the confidence intervals of the results, dealing with the consequences of noise, and minimising bias effects. Some approaches to address these difficulties, in both the measurement and the modelling settings, will be discussed.

        Speaker: Andrea Murari (Consorzio RFX)
      • 14:00
        Validating and speeding up X-ray tomographic inversions in tokamak plasmas 40m

        In tokamak plasmas, estimating the local impurity concentration is subject to many uncertainties. In particular, it requires accurate knowledge of the plasma temperature, the magnetic equilibrium, the impurity cooling factor and the spectral response of the diagnostics used. When all other plasma parameters are well known, the impurity density profile can be reconstructed in the core with the help of X-ray tomography. In this contribution, we introduce tools aimed at validating and speeding up X-ray tomographic inversions. The traditional approach based on Tikhonov regularization, including magnetic equilibrium constraints and parameter optimization, is presented. The advantages and drawbacks of substituting it with neural networks for fast inversions are investigated. Finally, perspectives for plasma profile reconstruction and validation are discussed.
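        A minimal sketch of Tikhonov-regularized inversion on a toy one-dimensional problem is given below (random geometry matrix, illustrative regularization parameter): the smoothing operator stabilizes the ill-posed inversion of line-integrated data.

            import numpy as np

            rng = np.random.default_rng(7)
            n_los, n_pix = 20, 30
            G = rng.random((n_los, n_pix))           # stand-in geometry matrix
            em_true = np.exp(-((np.arange(n_pix) - 15.0) / 5.0)**2)
            meas = G @ em_true + 0.01 * rng.normal(size=n_los)

            # Second-difference smoothing operator and regularized normal equations
            L = -2 * np.eye(n_pix) + np.eye(n_pix, k=1) + np.eye(n_pix, k=-1)
            alpha = 1e-2
            em_rec = np.linalg.solve(G.T @ G + alpha * L.T @ L, G.T @ meas)
            print(np.round(em_rec[10:20], 3))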

        Acknowledgements. This work has been partially funded by the National Science Centre, Poland (NCN) grant HARMONIA 10 no. 2018/30/M/ST2/00799. We gratefully acknowledge Poland’s high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2022/015994. This work has been published in the framework of the international project co-financed by the Polish Ministry of Education and Science, as program "PMW". This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 - EUROfusion). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.

        Speaker: Axel Jardin (Institute of Nuclear Physics Polish Academy of Sciences (IFJ PAN))
      • 14:40
        ADVANCED TOMOGRAPHY BASED ON THE MAXIMUM LIKELIHOOD PRINCIPLE FOR INTERSHOT ASSESSMENT OF THE RADIATION LOSSES 30m

        On the Joint European Torus (JET) first, and more recently on ASDEX Upgrade (AUG), an Expectation Maximization algorithm has been implemented to derive the Maximum Likelihood (ML) reconstruction consistent with the line-integrated measurements of the bolometers, yielding tomograms of specific poloidal emissivity distributions. On both devices, the line-of-sight (LOS) coverage of the foil bolometers has been used to evaluate bolometric tomographies. The main and most distinctive feature of the method is the possibility of estimating the variance of the reconstructed tomogram and, consequently, of evaluating the uncertainties in the derived quantities. Since the first implementation on JET, dedicated studies have been performed to improve the outputs of the ML tomographies and, consequently, to increase the reliability, quality and accuracy of the derived quantities. The algorithm can handle missing or unreliable LOSs due to faults that might occur during an experimental campaign, as well as systematic errors and outliers in the measurements. More recently, two upgrades have been developed and implemented: a) to minimize the risk of producing artefacts, an unavoidable and unwanted feature that can strongly influence heat transport and turbulence studies; b) to handle asymmetric brightness between LOSs, due to strong gas puffing close to one of the bolometer arrays. The developed algorithm is therefore probably one of the most complete and advanced currently available in the fusion community. Having proved the portability between devices, efforts have been made, and are ongoing, to develop a real-time version compatible with the ITER fast controller platform. These efforts have reduced by a factor of ten the time required for a reconstruction, paving the way at least for an intershot application of the ML code in future versions.
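        The multiplicative expectation-maximization update at the core of such ML tomography can be sketched as follows on a toy noiseless system (random geometry matrix, illustrative iteration count); the iterates remain non-negative and approach the true emissivity.

            import numpy as np

            rng = np.random.default_rng(8)
            G = rng.random((40, 25))                 # stand-in line-of-sight matrix
            em_true = rng.random(25)
            meas = G @ em_true

            em = np.ones(25)                         # positive initial guess
            sens = G.T @ np.ones(meas.size)          # sensitivity (column sums)
            for _ in range(200):
                # ML-EM update: reproject, compare with data, rescale
                em *= (G.T @ (meas / np.maximum(G @ em, 1e-12))) / sens
            print("max error:", np.max(np.abs(em - em_true)))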

        Speaker: Emmanuele Peluso (University of Rome Tor Vergata)
      • 15:10
        Fast tomography for the control of the emitted radiation in tokamaks 30m

        Accurate measurement and control of the radiation emitted by tokamak plasmas is crucial for the successful operation of fusion reactors. Many macroscopic plasma instabilities, which can rapidly lead to a loss of plasma confinement, are related to radiation patterns differing in localisation, shape and intensity. Current tokamaks use bolometers to measure the plasma emission, but these only provide line-integrated values and require an inversion technique to obtain local information. Tomographic inversion is a commonly used approach for high-spatial-resolution reconstructions, but it is slow and unsuitable for real-time applications. In this work, a fast inversion technique providing low spatial but high temporal resolution is presented. The reliability of the method is demonstrated by analysing numerically generated patterns, and the accuracy is evaluated for different shapes and positions of the emitting regions. Further validation is provided by comparison with a well-established tomographic reconstruction on different JET discharges with the ITER-Like Wall. Finally, an analysis of the main radiation patterns is performed with the developed method in order to understand the mechanisms that can lead to the radiation collapse of the configuration. The results suggest that the fast inversion technique is a promising tool for real-time radiation monitoring and control in fusion reactors.

        Speaker: Ivan Wyss (Università degli studi di Roma Tor Vergata)
    • 15:40 16:00
      Coffee Break 20m
    • 16:00 17:30
      S/1 Round-table Discussion
      Conveners: Min Xu (Southwestern Institute of Physics), Didier Mazon (CEA Cadarache)
    • 17:30 18:00
      Closing Session