KEY DEADLINES
15 April 2024 Deadline for submission of abstracts through IAEA-INDICO for regular contributions
30 April 2024 Deadline for submission of Participation Form (Form A), and Grant Application Form (Form C) (if applicable) through the official channels
30 April 2024 Notification of acceptance of abstracts and of assigned awards
Objectives
The event aims to provide a forum to discuss new developments in the areas of plasma control systems, data management including data acquisition and analysis, and remote experiments in fusion research.
Target Audience
The event aims to bring together junior and senior scientific fusion project leaders, plasma physicists, including theoreticians and experimentalists, and experts in the field of plasma control systems, data management including data acquisition and analysis, and remote experiments in fusion research.
The Princeton Plasma Physics Laboratory (PPPL) has demonstrated the effectiveness of a scalable real-time framework that enables plasma control system (PCS) algorithms to utilize heterogeneous inputs at both millisecond and microsecond speeds. Current developments target real-time plasma control algorithm challenges by combining new hardware technologies and proven software techniques with new methods of machine learning inference. The deployed framework integrates various hardware components, including remote I/O, wire-speed latency between computers, high-speed analog digitizers, NVIDIA A100 and V100 GPUs, an FPGA, and a camera. The primary input consists of 160 analog signals acquired at 1 MHz in buffers of 32 microseconds. Alongside providing that data directly to the PCS, the FPGA also consumes the data for further processing through machine learning inference engines tunable at runtime. Timing analysis shows 1 millisecond end-to-end inference times and the potential for 50 microsecond control loops on the raw data. Even without the FPGA, three machine learning models have been implemented and are capable of running in 560 microsecond cycle times. Key findings indicate the feasibility of real-time plasma control using a hardware-accelerated approach, demonstrating significant improvements in processing speed and efficiency compared to strictly software-based methods. This research provides new tools to enable increasingly complex plasma control systems, paving the way for enhanced stability and performance.
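As a rough, illustrative sketch of the kind of GPU inference timing analysis mentioned above (not the PPPL framework itself), single-inference latency could be measured with PyTorch CUDA events as follows; the stand-in model, input dimensions and warm-up count are assumptions.

```python
# Hypothetical latency measurement for a small GPU inference model.
# Illustrative only; not the PPPL real-time framework or its models.
import torch

model = torch.nn.Sequential(             # stand-in for a trained surrogate model
    torch.nn.Linear(160, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 8),
).cuda().eval()

x = torch.randn(32, 160, device="cuda")  # e.g. a 32-sample buffer of 160 signals

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
with torch.no_grad():
    for _ in range(100):                  # warm-up to exclude allocation overhead
        model(x)
    start.record()
    model(x)
    end.record()
torch.cuda.synchronize()
print(f"single-inference latency: {start.elapsed_time(end) * 1000:.1f} us")
```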
The Plasma Control System (PCS) serves as the central control system responsible for operating the tokamak and controlling the plasma therein.
A new PCS framework for high-performance steady-state operation has been designed.
It is built on component modularity, providing the operational environment for components, a data management model, and interfaces to external systems.
A meticulously designed data model can enhance system performance, efficiently utilize data resources, and contribute to achieving plasma control objectives.
The data management model of the new PCS is referred to as the Data Engine (DE), which provides standardized organization and communication of data. Its architecture consists of the application layer, the logical layer, and the physical layer.
The DE application layer consists of the specific application logic and business logic implemented by developers. The application layer utilizes the interfaces and functionalities provided by the DE logical layer to fulfil specific Function Block (FB) requirements.
The logical layer of DE provides a key-value collection (hash map) for storing and managing metadata. Metadata offers a description of the data, encompassing its address, type, and parameters. In DE, generic programming is applied to handle metadata, allowing for the handling of data with unspecified types without requiring specific implementations for each type.
In the logical layer, all data is divided into individual subsets using the key values. Data within the same subset is allocated in a contiguous block of memory called a "Block", which improves the efficiency of data transfer.
By default, Blocks are deployed in heap memory but can also be deployed in other forms, such as shared memory. Different Block mapping methods are provided depending on the type of memory.
At the physical layer, communication between various components is accomplished through shared memory. Expanding on this, a configurable service is provided to synchronize shared memory Blocks across multiple computer nodes, enabling inter-component data communication across nodes.
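As a hedged illustration of the Block concept (contiguous, key-indexed memory regions shared between components), the following Python sketch uses multiprocessing.shared_memory and NumPy; the key names, shapes and helper class are assumptions rather than the actual DE implementation.

```python
# Illustrative sketch of a key-indexed "Block" in shared memory (not the actual DE code).
import numpy as np
from multiprocessing import shared_memory

class Block:
    """A contiguous memory region holding one data subset, addressable by key."""
    def __init__(self, key: str, shape, dtype=np.float64, create=True):
        nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
        if create:
            self.shm = shared_memory.SharedMemory(name=key, create=True, size=nbytes)
        else:
            self.shm = shared_memory.SharedMemory(name=key)   # attach by key only
        self.data = np.ndarray(shape, dtype=dtype, buffer=self.shm.buf)
        # Metadata describing the data: its address (name), type and parameters.
        self.meta = {"key": key, "shape": shape, "dtype": np.dtype(dtype).str}

# A producer component writes measurements into a named Block.
b = Block("pcs_magnetics", shape=(64,), create=True)
b.data[:] = 0.0

# Another component (possibly in a different process) attaches using the key.
reader = Block("pcs_magnetics", shape=(64,), create=False)
print(reader.meta, reader.data[:4])

reader.shm.close(); b.shm.close(); b.shm.unlink()
```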
The memory for data is reused across different operational cycles to enable the system to effectively handle the large amounts of data generated during long-pulse operation and provide reliable and uninterrupted services.
In addition, a real-time service for data archiving is provided to complement this functionality.
DE enables transparent access to data, allowing developers to access all system data in a unified manner. This further facilitates easy system scalability, enhancing system performance.
The new PCS powered by DE was successfully applied in the 2023 EAST summer operation campaign, completing a total of 286 shots without any system failures. Additionally, more than 24 hours of continuous running has been tested, demonstrating that DE can effectively serve steady-state operation control.
For present tokamaks and future fusion reactors, the control of plasma initiation, shaping, heating, stabilization, and safe termination of discharges is required. In order to integrate various control functions and meet the requirements of safe and steady-state operation of the device, the design and implementation of the plasma control system (PCS) infrastructure have been completed.
A dual redundant cluster structure is adopted for system scalability and reliability. Non-real-time applications and real-time control processes are deployed separately on host and real-time nodes. The master and slave clusters, configured with the same hardware and operating parameters, run synchronously and can be switched within one control cycle when one of them behaves abnormally. In addition, a heartbeat network is designed between the PCS and external input/output systems, such as the distributed real-time data acquisition systems and the actuator command receiver subsystems. Any abnormal heartbeat triggers the PCS event handling to ensure system reliability.
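A minimal sketch of the heartbeat idea, assuming a simple UDP heartbeat message, a fixed 50 ms timeout and made-up peer names; it is illustrative only and not the actual EAST PCS protocol.

```python
# Illustrative UDP heartbeat receiver with timeout-based fault detection
# (an assumption-level sketch, not the EAST PCS implementation).
import socket, time

TIMEOUT_S = 0.05          # declare a peer abnormal after 50 ms of silence (assumed value)
PEERS = {"daq-node-1": 0.0, "actuator-rx-1": 0.0}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))
sock.settimeout(0.01)

while True:
    try:
        msg, _ = sock.recvfrom(64)
        name = msg.decode().strip()            # heartbeat payload = sender name
        if name in PEERS:
            PEERS[name] = time.monotonic()
    except socket.timeout:
        pass
    now = time.monotonic()
    for name, last in PEERS.items():
        if last and now - last > TIMEOUT_S:
            # In the real system this would trigger PCS event handling / cluster switching.
            print(f"heartbeat lost from {name}; raising off-normal event")
            PEERS[name] = 0.0
```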
A component-based distributed real-time control framework is designed as the software architecture. The core framework provides communication, XML configuration, data management, and related services, while various application components can be added like Lego blocks to realize plasma operation functions such as user login, parameter configuration, task deployment, execution workflow, and health management. By adopting data encryption, hierarchical user permission authentication, and parameter legality checks, the new PCS ensures the security of input data. During operation, the health component collects system hardware and software status for local interlocking, while plasma transient states are detected and handled in real time by the off-normal event handling component to ensure the safety and reliability of the system. The real-time performance of the new PCS infrastructure is guaranteed by a high-speed real-time data distribution service based on a real-time network and shared memory technology, and by multi-process parallelism that achieves distributed deployment of multiple tasks across CPUs. Statistical results indicate that the fastest control cycle is less than 50 microseconds. To support steady-state operation, real-time parameter parsing, state-machine real-time operation scheduling, and segmented data storage are adopted in the new PCS, and over 50 hours of continuous running has been verified using historical experimental data.
The main EAST control algorithms were integrated into the infrastructure using the visual software development platform PCS-SDP. The correctness of the control and the reliability of the system were then fully validated using historical data simulation, model-based simulation through the plasma control simulation verification platform PCSVP, and experimental piggyback running. Finally, the prototype system was successfully put into operation in EAST experiments in August 2023. In nearly 300 discharge shots, the system ran without any failures and achieved stable plasma current, shape and density control. The plasma current control error is less than 1%, and the shape control accuracy reaches the millimetre level.
The plasma-facing components (PFCs), especially the divertor targets, face continuous excessive heat loads, an issue that will be even more severe in future fusion reactors such as ITER. Generating a stable detached plasma is an acknowledged way to mitigate this problem and has been applied on different tokamaks. On EAST, the ITER-like tungsten divertor and long-pulse discharges (over 1000 s) can play an essential role in developing detachment feedback control methods suitable for ITER. Boronisation of the first wall gives more choices for the impurity species, and nitrogen (N$_{2}$) was applied for the first time to detachment feedback control on EAST. Using N$_{2}$ seeding, feedback control of the divertor electron temperature (T$_{e,div}$) was maintained for 70 s. During the control phase, T$_{e,div}$ near the strike point of the lower outer divertor was held stably at about 5 eV, and the divertor heat load was mitigated effectively. The increment of the radiated power is about 200 kW, most of which comes from the divertor region. The N$_{2}$ seeding has almost no influence on the line-averaged core density or the loop voltage, so the recycling effect of N$_{2}$ within one shot was not strong under these discharge conditions. In addition, integrated detachment feedback control was also achieved with N$_{2}$: the localized radiated power (P$_{rad}$) around the X-point was under feedback control, while the T$_{e,div}$ monitor simultaneously estimated the divertor detachment state. Once the divertor plasma reached the detached phase, the target value of the X-point radiated power feedback control was lowered automatically to reduce the impurity seeding amount while maintaining detachment. This integrated control was applied successfully at plasma currents (I$_{p}$) of 450, 500 and 550 kA. The radiation around the X-point was increased by about 15-20 kW/m$^{2}$, T$_{e,div}$ near the lower outer strike point was lowered to about 5 eV, and core confinement showed no degradation during the feedback control phase; the robustness of the detachment feedback control was thus effectively improved.
JET real-time plasma control has been delivered with a heterogeneous collection of control systems linked by a dedicated low-jitter, low-latency network. To provide a high degree of flexibility in tuning plasma control algorithms to experimental requirements, the Real-Time Central Controller (RTCC) has been available since 1997. RTCC provides a sandboxed execution environment where experimental algorithms can be deployed with a rapid development workflow. New control laws can be developed by operators during the course of an experimental session. The potential impact of a defect in algorithms evolved without full lifecycle quality assurance can be bounded by clipping feedback control requests at the actuator managers. The likelihood of such defects is reduced in the first place by constraining the algorithms to be composed from reusable blocks and trusted real-time signals. Although this system operated successfully for a long time, limitations in compute capacity of the legacy hardware on which the application was deployed constrained algorithm development.
Motivated by the need to provide physics operators with a more performant system, an upgrade project was carried out to port the RTCC application to a modern high performance PC platform. The architecture selected was to use the MARTe2 framework. Development was able to reuse existing MARTe2 data sources to connect the application to the JET environment using the ITER SDN protocol. RTCC blocks were converted to MARTe2 functions. Python tooling was created to automatically convert previously deployed RTCC algorithms to MARTe2 configuration form.
This paper describes the techniques used to demonstrate system correctness prior to deployment in the JET operating environment. This was particularly important given that it was deployed around the time of the DT campaigns. It explains how the system was used to demonstrate some novel control methods which delivered useful experiments in the final JET campaigns. It also outlines how the JET legacy data combined with this MARTe2 application can offer future value, even in the absence of the JET machine itself.
The vertical displacement instability is an inherent characteristic of tokamaks with elongated configurations. Uncontrolled growth of this instability leads to plasma disruption, resulting in discharge termination and damage to the device, so this instability must be controlled. Due to the shielding effect of the vacuum vessel on external coils, fully superconducting tokamaks typically use in-vessel coils to produce a horizontal magnetic field for controlling vertical displacement. The control requirements for vertical displacement are strongly related to the plasma current, the elongation, and the passive structure of the tokamak. The CFETR tokamak aims for a configuration with high elongation and high plasma current, leading to demanding vertical displacement control requirements. In the preliminary design phase [1], analyses were conducted based on the rigid model, and the positions of the passive structure and in-vessel coils were determined. Based on a further simplification of the response model [2], a method to estimate the control requirements is proposed in this work. It provides an estimate of the minimum power required to control a given vertical displacement and its corresponding voltage and current values. Based on this method, combined with optimal control algorithms and ITER-like speed control algorithms, the CFETR vertical displacement controller has been optimized. For control of the same 10% minor-radius vertical displacement, the peak power requirement of the controller has been reduced from 598 MW to 164 MW. The optimized controller designs significantly reduce the control requirements, allowing robust control at lower power levels, which can effectively lower the overall cost of the device.
References:
[1] Li B., Liu L., Guo Y., et al., "Preliminary assessment of vertical instability with blanket in CFETR", Fusion Engineering and Design 148 (2019) 111295.
[2] Humphreys D.A., et al., "Experimental vertical stability studies for ITER performance and design guidance", Nuclear Fusion 49 (2009) 115003.
An upgrade is being carried out on the TCABR tokamak (a small tokamak, with R0 = 0.62 m and a = 0.2 m) in operation at the Physics Institute of the University of São Paulo, Brazil. This upgrade consists of the installation of (i) graphite tiles to fully cover the inner surface of the vacuum vessel wall, (ii) new poloidal field (PF) coils to allow the generation of diverse plasma configurations, (iii) in-vessel HFS and non-axisymmetric control coils for ELM suppression studies, and (iv) a coaxial helicity injection system to improve plasma initiation. Among other objectives, this upgrade will enable studies of the impact of RMP fields on advanced divertor configurations such as X-point target and snowflake divertors. To create the various plasma scenarios foreseen for the new TCABR, work is underway on the implementation of a robust and flexible plasma control system, improvements to the data acquisition and analysis system, and the implementation of supervisors to monitor the various systems involved in the operation of the tokamak (EPICS, the Experimental Physics and Industrial Control System, and MARTe, the Multi-threaded Application Real-Time executor), together with improvements to the MDSplus system already widely used on TCABR and the implementation of a web system (using PHP, JavaScript and Python) for generating experiment configurations and improving the interaction between operators/scientists and the experiment. In this work we will present the details of this ongoing implementation.
In this paper, an application of real-time control with integrated learning-based models is presented. During development, the MARTe2/MDSplus framework was used for rapid prototyping of control system components, including the training of learning-based models. MARTe2, a networked real-time control framework, and MDSplus, a data management framework, are widely used in fusion experiments to increase research efficiency. The two frameworks have been interfaced to provide flexibility in component modularization and robust data management of experiment results for real-time control systems. To demonstrate deep learning system development within the MARTe2/MDSplus framework, a vision-based observer was trained from and implemented in a real-time control system for a levitated magnet. The levitated magnet control system was designed to be analogous to plasma control systems and includes key characteristics such as distributed modular deployment, multi-timescale operation, scalability, and usability. The development process for this simplified system can thereafter be abstracted and applied to more complex problems. The controller was initially designed through linear optimal control methods. The training data for the vision-based observer was then acquired from test results, and the model was developed in parallel with controller improvement. The performance of the vision-based observer shows that the proposed framework provides a robust and efficient pipeline for training deep learning models. Furthermore, the observer was implemented in real time to analyze the requirements on inference speed and accuracy imposed by the MARTe2/MDSplus framework. The framework can be naturally extended to other learning-based methods such as data-driven control and system dynamics modelling.
In past years, a great number of system identification experiments have been performed to study the dynamic responses of the plasma to deuterium puffing, impurity puffing, and heating modulations, focusing particularly on the exhaust but also on the core density. In this presentation, I will explain how and why we choose certain dynamic data acquisition methods, giving both simulation and experimental examples+. This will also make clear why dynamic measurements are crucial for time-dependent modelling of the exhaust and an essential ingredient for exhaust control.
*See author list of S. Coda et al. 2019 Nucl. Fusion 59 112023
+ Derks et al., "Development of real-time density feedback control on MAST-U in L-mode", Fusion Engineering and Design, 2024.
Research at the DIII-D National Fusion Facility in San Diego focuses on short-pulse plasma discharges covering a variety of shaping profiles. High-speed data collection is a critical component of the operation of many of DIII-D's diagnostics and is fundamental for capturing the high-resolution data used in experimental data analysis. Differing techniques enable the plasma control system (PCS) to perform complex real-time feedback control on microsecond time scales. This work presents a comprehensive overview of data acquisition, focusing on the hardware and software used for reliable data acquisition at DIII-D. The robust nature of the data acquisition system allows various techniques to coexist seamlessly. However, as modern systems capable of nanosecond resolution become more common, existing architectures will need to be modified. By addressing the key challenges of high-speed data acquisition, DIII-D is able to provide real-time data used in plasma operation and to acquire the high-fidelity data needed for future experimental fusion reactors such as ITER.
The traditional approach to building MDSplus Device drivers is rigid and lacks the ability to meet changing needs. In this paper, we introduce a novel paradigm for Device driver development that allows the tree structure to dynamically change.
This allows device drivers that can reconfigure themselves to automatically reflect the hardware they represent, or a device that implements a variable number of queries to an external database. We have created a driver using this paradigm that communicates with a digitizer, queries the attached modules, and builds a tree structure to utilize them.
Additionally, this driver can reconfigure itself to match changes in the digitizer by adding or deleting nodes using overwrite and/or delete modes. We also wrote a method for verifying both the settings provided and that the hardware matches the last known state. We have added fields to help validate settings, such as min/max limits and a list of allowed values. The definitions of the nodes which make up the device have been augmented to include help text, tool tips and validation ranges. This will facilitate automated user interface generation.
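A hedged sketch of the dynamic-structure idea is shown below: a helper that adds channel subtrees under an existing device node to match the modules discovered on the hardware. The function name, node names and usages are illustrative assumptions; the real driver is implemented as an MDSplus Device subclass.

```python
# Illustrative sketch of synchronizing an MDSplus device subtree with discovered hardware
# (node names, usages and the helper itself are assumptions, not the actual driver).
import MDSplus

def sync_device_structure(tree_name, shot, device_path, discovered_channels):
    """Add CHANNEL_xx subtrees under an existing device node to match the hardware."""
    tree = MDSplus.Tree(tree_name, shot, 'EDIT')
    dev = tree.getNode(device_path)
    for i in range(discovered_channels):
        name = 'CHANNEL_%02d' % (i + 1)
        try:
            dev.getNode(name)                 # already present in the tree
        except Exception:                     # node not found: create it
            ch = dev.addNode(name, 'STRUCTURE')
            ch.addNode(':DATA', 'SIGNAL')
            ch.addNode(':MIN', 'NUMERIC')     # validation limits for settings
            ch.addNode(':MAX', 'NUMERIC')
    tree.write()

# Example: the digitizer reports 8 attached modules for the current shot.
# sync_device_structure('mytree', -1, '\\MYTREE::TOP.DIGITIZER', 8)
```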
We foresee a variety of possibilities and applications that the MDSplus community will discover.
In order to better support development efforts on MDSplus, we needed to improve the Continuous-Integration/Continuous-Deployment (CI/CD) systems building MDSplus. To that end, we have created a new Jenkins server utilizing the new scriptable Jenkinsfile method and modern security principles. Additionally, we have moved off autotools and onto CMake for our build system. Existing tooling and scripts were rewritten in modern Python as well.
This enabled us to revisit the existing test suite and find areas that needed improvement. For example, infrastructure for running IDL and MATLAB tests has been added. Tests can now run in parallel, allowing for faster local development and automatic testing for Pull Requests (PRs). New versions will now be tested against themselves, and against older versions of the client and server.
This will allow us to focus on better packaging and improved code coverage in the near future.
DTT, Divertor Tokamak Test facility, is currently under construction at the Frascati ENEA Research Center. Its aim is to explore alternative solutions for the extraction of the heat generated by the fusion process. Its Control and Data Acquisition System (CODAS) will (1) orchestrate and synchronize all the DTT systems during Plasma operation and maintenance; (2) acquire data from the experiment diagnostics and plant systems and store it in an experimental database to be used for on-line and off-line analysis; (3) provide real-time Plasma control.
The expected duration of the plasma discharge in DTT is on the order of some tens of seconds, and DTT can therefore be considered a long-lasting experiment, requiring data streaming technologies for data communication and storage. The DTT CODAS design is based on three principles: (1) taking inspiration from other similar experiments currently under development, namely ITER CODAC, (2) relying on proven solutions already adopted in running experiments with similar constraints, and (3) taking advantage of practices widely adopted in fusion and, more generally, in industry.
Several architectural concepts have been drawn from the ITER project, in particular the definition of a set of networks as the basis for plants integration, the adoption of IEEE1588 to provide overall time synchronization and the use of UDP as the network protocol for the communication among the control components in Plasma control.
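As a hedged illustration of the UDP-based publication pattern mentioned above (not the actual ITER SDN implementation or packet format), a control component might publish a topic as follows; the multicast group, port, payload layout and cycle time are assumptions.

```python
# Illustrative UDP multicast publisher for a real-time control topic
# (a sketch of the SDN-style pattern, not the actual ITER SDN implementation).
import socket, struct, time

MCAST_GROUP, PORT = "239.0.1.1", 10001     # assumed multicast group and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

seq = 0
while True:
    # Topic payload: sequence number, timestamp, and two example measurements.
    payload = struct.pack("<Qdff", seq, time.time(), 1.25e6, 0.33)
    sock.sendto(payload, (MCAST_GROUP, PORT))
    seq += 1
    time.sleep(0.001)                       # 1 kHz publication cycle (assumed)
```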
In other aspects, the architecture of DTT CODAS differs from that of ITER CODAC, in particular in the adoption of frameworks widely used in the fusion community, namely MDSplus for data management and MARTe2 for the orchestration of the real-time plasma control components. MDSplus has been in use for decades in many fusion experiments, in particular in EAST, which is similar to DTT from the data acquisition point of view. For the supervision and orchestration of the components involved in real-time plasma control, the MARTe framework has been adopted in DTT. MARTe is not the only framework for plasma control in the fusion community, but it has been adopted in several different fusion experiments, and a new release, MARTe2, has been developed by F4E under strict quality procedures.
Besides the adoption of software frameworks, other successful practices have been drawn from the fusion community and from industry. In particular, the adoption of Simulink as a 'lingua franca' for the definition of the algorithms involved in plasma control is becoming a standard approach in fusion experiments, relying also on automated tools that minimize or completely eliminate the need for manual translation from the Simulink definition into the actual C++ component implementation.
The most important contribution of industrial experience to DTT CODAS is the adoption of the OPC-UA communication middleware for overall communication with plasma plant systems for slow control. Thanks to the wide adoption of this standard in industrial communication, many solutions, including open-source ones, are available, and at the same time the standard is well known by the industrial partners involved in the development of plant systems such as vacuum, cooling and power supplies.
Plasma shape control is a prerequisite for the stable operation of tokamak devices. As future fusion reactors are anticipated to have longer burn lengths and higher discharge performance, current shape reconstruction methods based on magnetic diagnostics will encounter challenges related to signal drift and probe maintenance.
Hence, a plasma boundary shape reconstruction system utilizing visible spectrum images was introduced and deployed on the EAST tokamak. The optics and high-speed camera system installed in the J port of EAST were employed to capture high-speed real-time images of the plasma discharge. An algorithm for boundary extraction relying on grayscale features, a camera calibration algorithm employing feature point matching, and a coordinate mapping algorithm utilizing plasma geometric features were developed. These algorithms collectively enable plasma boundary shape reconstruction via visible spectrum diagnostics. Using the visible spectrum diagnostic hardware and the software algorithms above, the plasma shape during the discharge was reconstructed in real time, and time traces of the control point positions were obtained. The reconstruction results were verified in EAST control experiments.
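A simplified, illustrative sketch of grayscale-threshold boundary extraction is given below; the threshold value, image size and synthetic frame are assumptions, and this is not the EAST algorithm itself.

```python
# Illustrative grayscale-threshold boundary extraction from a camera frame
# (a simplified sketch of the idea, not the EAST algorithm).
import numpy as np

def extract_boundary(frame: np.ndarray, threshold: float):
    """Return pixel coordinates of the plasma edge along each image row.

    frame: 2-D grayscale image; threshold: intensity level separating
    plasma emission from background (an assumed, pre-calibrated value).
    """
    mask = frame >= threshold                      # bright plasma region
    edge_points = []
    for row in range(frame.shape[0]):
        cols = np.flatnonzero(mask[row])
        if cols.size:                              # outermost bright pixel = boundary
            edge_points.append((row, cols.max()))
    return np.array(edge_points)

# Synthetic example: a bright disc on a dark background.
yy, xx = np.mgrid[0:480, 0:640]
frame = 4000.0 * ((xx - 320) ** 2 + (yy - 240) ** 2 < 150 ** 2)
boundary_px = extract_boundary(frame, threshold=2000.0)
# boundary_px would then be mapped to (R, Z) machine coordinates via camera calibration.
```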
This work validates the feasibility of reconstructing plasma shape on an ITER-like tokamak device using visible spectrum diagnosis. This method can serve as a supplementary tool for magnetic measurements to enhance the accuracy of shape reconstruction or offer a potential alternative for plasma shape reconstruction.
The JET Control and Data Acquisition System has stood the test of time and saw us through to the end of JET operations in 2023. The system architecture has remained largely unchanged over the last decade or so, although many new diagnostics and control systems have been added and the volume of data collected has grown massively. CAMAC remains at the heart of the system, particularly for continuous acquisition and control in much of the traditional, stable parts of the system, whereas most of the newer diagnostics and control systems are network attached. A significant change was made to the CAMAC interface about a decade ago to remove the Serial Highway driver from the Sub-System host, so that the Sub-System hosts could be virtualised and run on more powerful Oracle/Sun hosts, with the hardware interface and the Serial Highway driver running on a network-attached legacy sub-system host. Other significant changes have been the development of a standard, web-based interface for control and data acquisition for diagnostics (the Black Box interface), the adoption of EPICS for several diagnostics and plant control along with its integration into CODAS, and the adoption of the ITER SDN protocols over Ethernet to supplement the original ATM-based real-time control infrastructure. The SDN bridge created a natural basis from which to implement an upgrade to the plasma control system using MARTe2. Various improvements were made during the COVID-19 era to improve remote collaboration, including the introduction of web access to the traditional mimics.
These developments were primarily driven by the enhanced requirements of the 2nd and 3rd Tritium campaigns on JET, which included a significant expansion of the neutron and gamma diagnostics along with an expansion of the Tritium introduction systems and enhanced control systems. Towards the end of JET operations, the requirements of the Laser Induced Desorption diagnostic (LIDS-QMS) demanded significant development work to incorporate the associated control and data acquisition systems, while also pushing the mode of operation for JET pulses. At the very end of JET operations, the requirements for long-pulse operation also pushed the pulse operating mode. We now progress to decommissioning and repurposing JET, and CODAS continues to be adapted to support the diminishing number of systems required for the plant that is still operational and to support diagnostic calibration later this year.
The superconducting stellarator W7-X underwent a major overhaul between 2018-2022 with the installation of an actively cooled divertor and inner wall. The CoDaC System also received a significant overhaul and expansion. The central safety system was completely re-implemented with the lessons learned from previous operation phases and the new requirements of OP2.1. The protection of the new divertor required substantial enhancement of the Fast Interlock System which necessitated a new hardware infrastructure and new implementation.
The real-time system had to be ported from the existing VxWorks implementation to real-time Linux in order to accommodate the new divertor protection system.
The central configuration system had to be reworked to increase modularity and scalability and to accommodate the numerous newly added plant systems. In addition to the work on the central components of CoDaC, around 15 completely new diagnostic systems were implemented and another 20 were significantly enhanced. This included a new MicroTCA-based camera acquisition framework which now enables all cameras at W7-X (supporting CameraLink, CameraLink HS and GigE Vision) to run on the same hardware platform with minimal adaptation of the software.
As W7-X is geared towards steady-state operation, all data has to be streamed to the archive and cannot be stored locally and uploaded at a later time. The addition of the >24 high-speed cameras required a substantial upgrade of the network streaming capacity and the central storage systems. In order to reduce the requirements as much as possible, a real-time lossless compression algorithm has been implemented for camera data, which was adapted to the W7-X environment yielding a >60% compression rate.
This paper will provide an overview of the changes and upgrades done to the W7-X CoDaC system from 2018-2022 and show results from the OP 2.1 campaign and provide an outlook of the upcoming operation phase 2.2 which is set to commence in the second half of 2024.
This contribution gives an update on the progress of remote participation in the ITER conventional control system (CODAC) since the previous IAEA technical meeting in 2021. Six out of seven ITER partners have been connected 24x7, two of them using high-performance Open Systems Interconnection (OSI) level 2 virtual private networks. A new, high-bandwidth network connection node has been commissioned in Marseille (80 km from ITER). An audit of control system perimeter networks has been conducted, resulting in providing stricter network segregation. CODAC installation at remote centers has been streamlined with standardized CODAC deployment architecture, for both services and terminals. A hardware/software extension “kit” for CODAC terminals has been developed, which enables remote participation capabilities for plant operators. Third-party communication software such as Microsoft Teams has been adapted for use in control rooms. New software tools have been developed, including unidirectional “diode” software for EPICS (Experimental Physics and Industrial Control System) traffic, software for controlled file exchange inside the control system perimeter, as well as a high-performance video distribution service. CODAC client use has been demonstrated on wearables like tablets and augmented-reality smart glasses. Together with one of the ITER participants, a project has been initiated to allow integrating CODAC online data into virtual reality scenery, potentially eliminating the need for a classical computer terminal altogether.
The NSTX-U Shorted Turn Protection (STP) system is an essential safety feature of the NSTX-U tokamak, designed to safeguard its coils during experimental operations. With the upcoming upgrade of the NSTX-U facility, the implementation of the STP system becomes even more critical in ensuring the reliability and safety of research activities.
The development of the STP system was prompted by a non-detected short circuit incident on one of the upper divertor poloidal field coils during NSTX-U operations in 2016. This incident, attributed to manufacturing errors, underscored the importance of robust safety measures and proactive monitoring systems.
The STP algorithm is based on a Kalman filter that estimates the currents through the coils and their corresponding standard deviations. The state-space representation of NSTX-U that STP uses for the Kalman filter is based on a fixed plasma distribution response. STP uses as inputs the measurements of the coil voltages, currents and plasma current. The detection algorithm calculates the ratio of the current mismatches to the standard deviations (for each coil). If any of the ratios is greater than a certain limit, it means there is a significant statistical difference between the nominal model and the plant. This corresponds to a situation where one or more coils have changed their impedance or are shorted (between terminals or turns). If STP detects a fault or anomaly it responds rapidly by terminating the tokamak pulse. This real-time protection helps prevent potential damage to equipment and ensures the safety of the machine.
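A minimal sketch of the residual-ratio test described above, assuming a precomputed linear state-space model and noise covariances; it is illustrative and not the actual NSTX-U STP code.

```python
# Illustrative Kalman-filter residual-ratio check for shorted-turn detection
# (assumed model matrices and threshold; not the actual NSTX-U STP implementation).
import numpy as np

def stp_step(A, B, C, Q, R, x, P, u, y_meas, ratio_limit=5.0):
    """One predict/update cycle; returns updated state, covariance and per-coil fault flags."""
    # Predict coil-current state and covariance from the nominal model.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Innovation: mismatch between measured and predicted coil currents.
    innov = y_meas - C @ x_pred
    S = C @ P_pred @ C.T + R                 # innovation covariance
    ratio = np.abs(innov) / np.sqrt(np.diag(S))
    fault = ratio > ratio_limit              # statistically significant mismatch per coil
    # Standard Kalman update.
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, fault
```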
The STP algorithm was developed based on Matlab and Simulink; it benefits from advanced modeling techniques and automated code generation. By leveraging these tools, the development process is streamlined, thus reducing the time and effort required for implementation.
Since NSTX-U is currently not operating, it has only been possible to test the algorithm using the Autotester system; this system sends a pre-configured set of waveforms to the real-time input data stream, simulating healthy or shorted-turn shots. This testing process helps validate the performance and effectiveness of the STP software.
As the NSTX-U undergoes upgrades and enhancements, the STP system remains a cornerstone of fusion safety. By continuously monitoring and analyzing the behavior of the coils, STP contributes to the ongoing research on fusion operations.
In summary, the NSTX-U Shorted Turn Protection represents an advancement in safety, leveraging technology and sophisticated algorithms to mitigate risks and ensure the reliability of experimental operations.
The national project of the Experimental Advanced Superconducting Tokamak (EAST) is an important part of China's fusion development strategy. EAST is a fully superconducting tokamak with a non-circular vacuum vessel cross-section and actively cooled plasma-facing components. The safety and interlock system (SIS) is in charge of the supervision and control of all EAST components involved in protecting personnel and the tokamak machine from potential accidents.
At present, the interlock systems of the individual plant systems are mostly implemented with PLCs, with response times in the range of 10 ms and fixed interlock logic. However, as experimental requirements evolve, new equipment and new plant systems with new interlock subsystems need to be added.
When designing new interlock logic relationships and setting thresholds, parameters need to be changed frequently and the logic relationships need to be debugged repeatedly, and professional interlock control engineers are required throughout the project.
The high-speed interlock logic testing controller presented in this article is a solution designed specifically for sub-millisecond high-speed acquisition and control in projects that require frequent changes to the control logic. It has a 400 MHz processor, 512 MB of storage and 256 MB of RAM, and provides 16 AI channels, 4 AO channels and 28 DIO channels, making it suitable for medium-scale logic control applications (on the order of 50 basic control logic rules). Users do not need programming skills; they only need to edit the logic as text in software such as a spreadsheet. Given a statement such as "if AI0 > 2.3 or AI3 <= 0.4 then set AO1 = 2.34 set DO0 = 1", the controller performs the corresponding logical functions.
Users can change the operating logic of the signal I/O through text editing, without programming, and realize customized logic operations and control outputs, which can greatly reduce the workload of system debugging and use.
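The following hedged sketch shows how such spreadsheet-style text rules could be parsed and evaluated in principle; the rule grammar handling is simplified and this is not the controller's actual firmware.

```python
# Illustrative parser/evaluator for spreadsheet-style interlock rules
# (a simplified sketch of the text-logic idea, not the controller firmware).
import re

RULE = "if AI0 > 2.3 or AI3 <= 0.4 then set AO1 = 2.34 set DO0 = 1"

def parse_rule(rule: str):
    cond_text, action_text = re.split(r"\bthen\b", rule.replace("if", "", 1), maxsplit=1)
    actions = re.findall(r"set\s+(\w+)\s*=\s*([\d.]+)", action_text)
    return cond_text.strip(), [(name, float(val)) for name, val in actions]

def evaluate(rule: str, inputs: dict, outputs: dict):
    cond_text, actions = parse_rule(rule)
    # Replace channel names with their current values, then evaluate the condition.
    expr = re.sub(r"\b([A-Z]+\d+)\b", lambda m: repr(inputs[m.group(1)]), cond_text)
    if eval(expr, {"__builtins__": {}}, {}):       # condition uses only numbers/and/or
        for name, value in actions:
            outputs[name] = value                  # drive the AO/DO channels
    return outputs

print(evaluate(RULE, {"AI0": 2.5, "AI3": 1.0}, {"AO1": 0.0, "DO0": 0}))
# -> {'AO1': 2.34, 'DO0': 1.0}
```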
Fusion experiments, like all large physics experiments, require orchestrating a large set of various subsystems. To make things more difficult, compared with accelerators, fusion experiments are far from finished: they get upgrades and modifications all the time, so flexibility and interoperability are the main concerns. EPICS is mature in the accelerator community and has been the go-to choice for many large experiments, but its advantage in fusion experiments is less prominent, since flexibility and interoperability are not its first concerns. In this paper we present a control system framework built with standard web technology. The key to the web is HTTP and HTML, which are interoperable among the widest range of devices. The goal is to improve the interoperability of the control system, allowing different components in the control system to talk to each other effortlessly. Communication between machines is done with standard HTTP RESTful APIs, and the HMI is based on the browser, HTML and JavaScript. This enables the framework to be integrated into the already well-developed ecosystem of web technology: for example, InfluxDB can be used as the archiver, Node-RED as the scripter, and Docker for quick deployment. We also present its application in two fusion experiments, J-TEXT and HFRC, showing that a control system for a fusion experiment can be developed quickly and easily.
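As a hedged illustration of the HTTP RESTful pattern described above, a control component could expose endpoints as in the Flask-based sketch below; the endpoint paths, fields and port are assumptions and not the actual J-TEXT/HFRC framework API.

```python
# Illustrative RESTful control-component endpoint in the spirit described above
# (Flask-based sketch; paths and fields are assumptions, not the actual framework).
from flask import Flask, jsonify, request

app = Flask(__name__)
state = {"status": "idle", "setpoints": {"gas_flow": 0.0}}

@app.route("/api/v1/status", methods=["GET"])
def get_status():
    # Any HTTP-capable client (browser, Node-RED flow, another component) can poll this.
    return jsonify(state)

@app.route("/api/v1/setpoints/<name>", methods=["PUT"])
def set_setpoint(name):
    value = float(request.get_json()["value"])
    state["setpoints"][name] = value        # in a real component this would drive hardware
    return jsonify({"ok": True, "name": name, "value": value})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```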
In the final year of JET's operation, new requirements were requested that were not possible with the existing central control mechanism used in plasma control operations. The new requirements gave us the justification to replace the system entirely, expanding its operational capability and improving the user experience and processes. Given the nature of the system being replaced, it was necessary to carry out thorough testing to ensure that the replacement system behaved identically to its predecessor whilst providing new functionality.
To achieve this, we used GitLab continuous integration pipelines. The added advantage is that developers could have their work verified with each commit, prior to merging new developments. Using pytest, we defined both unit and system-level tests that feed recorded JET signal data into the system and compare the actuator output with complex pulses previously recorded at JET. Using a dockerized Yocto environment on remote hardware, we performed performance testing in parallel to ensure correct signal data and sufficient timing. The outcome was a robust methodology for testing new code and maintaining confidence in the new system prior to delivery on JET.
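A hedged sketch of such a pytest regression test is shown below; the reference file names, pulse numbers and tolerances are assumptions, and the control-system call is a placeholder.

```python
# Illustrative pytest regression test comparing new-system output against a
# previously recorded reference pulse (file names and tolerances are assumptions).
import numpy as np
import pytest

def run_control_system(inputs: np.ndarray) -> np.ndarray:
    """Placeholder for invoking the replacement system on recorded input signals."""
    return inputs * 1.0     # stand-in; the real call would drive the ported application

@pytest.mark.parametrize("pulse", [99001, 99002])   # hypothetical reference pulse numbers
def test_actuator_output_matches_reference(pulse):
    inputs = np.load(f"reference/{pulse}_inputs.npy")
    expected = np.load(f"reference/{pulse}_actuators.npy")
    produced = run_control_system(inputs)
    # The replacement must reproduce the legacy behaviour within a tight tolerance.
    np.testing.assert_allclose(produced, expected, rtol=1e-6, atol=1e-9)
```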
We present our work to improve data accessibility and performance for data-intensive tasks within the fusion research community. Our primary goal is to develop services that facilitate efficient access for data-intensive applications while ensuring compliance with FAIR principles [1], as well as adoption of interoperable tools, methods and standards.
The major outcome of our work is the successful creation and deployment of a data service for the MAST (Mega Ampere Spherical Tokamak) experiment [2], leading to substantial enhancements in data discoverability, accessibility, and overall data retrieval performance, particularly in scenarios involving large-scale data access. Our work follows the principles of Analysis-Ready, Cloud Optimised (ARCO) data [3] by using cloud optimised data formats for fusion data.
Our system consists of a queryable metadata catalogue, complemented by an object storage system for publicly serving data from the MAST experiment. We will show how our solution integrates with the Pandata stack [4] to enable data analysis and processing at scales that would previously have been intractable, paving the way for data-intensive workflows running routinely with minimal pre-processing on the part of the researcher. By using a cloud-optimised file format such as zarr [5], we can enable interactive data analysis and visualisation while avoiding large data transfers. Our solution integrates with common Python libraries for analysing large, complex scientific data, such as xarray [6] for complex data structures and dask [7] for parallel computation and for working lazily with larger-than-memory datasets.
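As a brief, hedged illustration of this workflow, a researcher might lazily open a cloud-hosted zarr store with xarray and dask as follows; the store URL and variable names are hypothetical, not the actual MAST data service.

```python
# Illustrative lazy access to a cloud-optimised dataset with xarray/dask
# (the store URL and variable names are hypothetical, not the actual MAST service).
import xarray as xr

store = "s3://example-bucket/mast/shot_30420.zarr"   # hypothetical zarr store
ds = xr.open_zarr(store)                              # reads metadata only; data stays remote

# Select a single signal and a time window; only the needed chunks are fetched.
ip = ds["plasma_current"].sel(time=slice(0.1, 0.5))
print(ip.mean().compute())                            # dask evaluates lazily, on demand
```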
The incorporation of these technologies is vital for advancing simulation, design, and enabling emerging technologies like machine learning and foundation models, all of which rely on efficient access to extensive repositories of high-quality data. Relying on the FAIR guiding principles for data stewardship not only enhances data findability, accessibility, and reusability, but also fosters international cooperation on the interoperability of data and tools, driving fusion research into new realms and ensuring its relevance in an era characterised by advanced technologies in data science.
[1] Wilkinson, M., Dumontier, M., Aalbersberg, I. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3, 160018 (2016) https://doi.org/10.1038/sdata.2016.18
[2] M Cox, The Mega Amp Spherical Tokamak, Fusion Engineering and Design, Volume 46, Issues 2–4, 1999, Pages 397-404, ISSN 0920-3796, https://doi.org/10.1016/S0920-3796(99)00031-9
[3] Stern, Charles, et al. "Pangeo forge: crowdsourcing analysis-ready, cloud optimized data production." Frontiers in Climate 3 (2022): 782909.
[4] Bednar, James A., and Martin Durant. "The Pandata Scalable Open-Source Analysis Stack." (2023).
[5] Alistair Miles (2024) ‘zarr-developers/zarr-python: v2.17.1’. Zenodo. doi: 10.5281/zenodo.10790679
[6] Hoyer, S. & Hamman, J., (2017). xarray: N-D labeled Arrays and Datasets in Python. Journal of Open Research Software. 5(1), p.10. DOI: https://doi.org/10.5334/jors.148
[7] Rocklin, M. (2015). Dask: Parallel computation with blocked algorithms and task scheduling. In Proceedings of the 14th python in science conference
Measuring Soft X-Ray (SXR) emission in a tokamak gives access to plasma information such as the impurity distribution, radiation emission, magnetic axis, etc. Current detectors used for SXR diagnostics, for instance semiconductors, will not survive the harsh environment of ITER caused by the high neutron fluence. One solution is to use Gas Electron Multipliers (GEM), which measure the X-ray emission spectrum from 1 keV to 20 keV [1] and are resilient to fast neutrons. The drawback is that this kind of detector produces a huge amount of data (100 GB for a 300 s pulse) that has to be stored and post-processed quickly.
Storing a huge amount of data is not a major issue when no backup is needed, but it does require the data to be moved to a safe, temporary location. Even though networks are becoming ever faster, no network could cope if every acquisition unit simultaneously produced such an enormous amount of data. In this paper we present the infrastructure developed at WEST to guarantee that data are moved from the detector to the post-processing area.
Given the amount of data, it is not always possible to process and analyze it in real time (even at a low frequency of a few Hz), which is why the infrastructure has to allow data to be analyzed while acquisition is ongoing, without any interference. This paper explains the automated process and architecture of the GEM post-processing as an example.
Finally, an acquisition unit is usually a collaborative effort between different institutes or laboratories that cannot reach the IRFM intranet network. The paper describes the tools developed to support collaboration between WUT and IRFM-WEST.
MDSplus version 8.0 introduces 63-byte node names (in preparation for UTF-8 support), which means that signals can now have meaningful names, including the long names used in the IMAS/OMAS dictionaries. This presentation describes the current state of the version 8.0 pre-release, possible enhancements, and how to obtain a copy of the software.
Some preliminary uses of version 8.0 will be described.
A consequence of the long node names is that the file format for version 8.0 cannot be read by prior versions of MDSplus. However, version 8.0 can read version 7.0 files. Planning is thus required to successfully deploy version 8.0, especially for customers with many MDSplus servers and clients. Advice regarding deployment will be presented.
Because version 8.0 is a major change, it is an opportunity to include other features that users have requested. Possible enhancements will be discussed. After the top priority features are added, version 8.0 will be officially released.
To facilitate the interoperability and reusability of analysis codes, and the general interoperability of data from different fusion experiments, work has been undertaken to map the data from these experiments into the IMAS (ITER Integrated Modelling and Analysis Suite) data model [1]. This mapping can be done on an ad hoc basis, with the generated IDSs then made available locally or via remote data access, or on the fly by leveraging the UDA [2] plugin mechanism to map IDS data requests, as they are received, onto the relevant experimental signals, performing any required data manipulations.
Previous efforts to map some EUROfusion experiments into IMAS were based on the EXP2ITM[3] XML mappings, and a custom plugin that was able to make use of these mapping files. Data was successfully mapped in this way but the technology stack was inefficient and brittle. With the development and release of IMAS Access Layer version 5 a new JSON mapping plugin[4] has been developed to codify the mappings for each experiment in a more maintainable and simpler format, and to provide common plugin code that can be used by most if not all experiments.
So far, the work to provide mapped data to IMAS has had a focus on EUROfusion experimental machines, driven by the EUROfusion DMP (Data Management Project), whose work is based on implementing the recommendations made by the FAIR4Fusion project[5]. For this project multiple experiments have generated IDSs or created mappings to allow for on-the-fly generation of Summary IDSs. These IDSs can then be used to populate a web-based catalogue to facilitate the exploration of the data from these machines in a standardised and comparable format.
To extend these mappings beyond Europe, work has been undertaken to provide similar mappings for the MIT Alcator C-Mod experiment [6], focusing first on the Summary IDS, but with plans to extend to other IDSs to allow analysis codes to be run using the IMAS infrastructure. As part of this work, an MDSplus [7] data access plugin has been created for reading the experimental data, and JSON mapping files have been created to codify the mapping of IMAS paths onto MDSplus expressions.
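As a hedged illustration of the mapping idea (not the actual JSON mapping plugin format), a mapping table from IMAS paths to MDSplus expressions might look like the following; the paths, tree names and scale factors are hypothetical.

```python
# Illustrative mapping table from IMAS paths to MDSplus expressions
# (entries and helper are hypothetical, not the actual JSON mapping plugin format).
MAPPINGS = {
    "summary/global_quantities/ip/value": {
        "source": "mdsplus",
        "expression": r"\MAGNETICS::TOP.PROCESSED:IP",   # assumed tree path
        "scale": 1.0,
    },
    "summary/heating_current_drive/power_nbi/value": {
        "source": "mdsplus",
        "expression": r"\NBI::TOP:P_INJ",
        "scale": 1.0e6,       # e.g. convert MW stored in the tree to W in the IDS
    },
}

def resolve(ids_path: str):
    """Return the MDSplus expression and scale factor for a requested IDS path."""
    entry = MAPPINGS[ids_path]
    return entry["expression"], entry["scale"]

print(resolve("summary/global_quantities/ip/value"))
```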
[1] IMAS, F. Imbeaux et al., 2015, Nucl. Fusion 55 123006
[2] UDA, J. Hollocombe et al., 2024, GitHub repository, https://github.com/ukaea/UDA
[3] Exp2ITM, J. Signoret, F. Imbeaux, 2010, https://www.eufus.eu/documentation/ITM/imports/edrg/public/md_and_dm/edrg_Basics_on_exp2ITM_v2.pdf
[4] JSON mapping plugin, A. Parker et al., 2024, GitHub repository, https://github.com/ukaea/JSON-mapping-plugin
[5] DMP, P. Strand et al., 2022, Plasma Phys. Control. Fusion 64 104001
[6] Alcator C-Mod, Hutchinson et al., Phys. Plasmas 1 (1994) 1511, and Marmar, Fusion Sci. Technol. 51 (2007) 261
[7] MDSplus, 2024, GitHub repository, https://github.com/MDSplus/mdsplus
The ITER Neutral Beam Test Facility (NBTF) serves as a crucial testing ground for the development and validation of the neutral beam injection systems essential for the ITER fusion reactor. Two experimental campaigns, SPIDER and MITICA, are conducted within the NBTF. SPIDER (Source for Production of Ions of Deuterium Extracted from Rf plasma) focuses on the development and optimization of the ion source, which is responsible for producing and accelerating the deuterium ions; it serves as a prototype for the ion source planned for use in ITER. MITICA (Megavolt ITER Injector and Concept Advancement) builds on the ion source technology by integrating high-energy beam acceleration. MITICA aims to demonstrate the full-scale neutral beam injection system that will be utilized in ITER for plasma heating, diagnostics and control. The SPIDER experimental campaign starting in April 2024 is an important step towards ensuring the successful operation of neutral beam injection systems in the future ITER fusion reactor, contributing to the advancement of fusion energy research.
Operation tools based on EPICS (Experimental Physics and Industrial Control System) and MDSplus play a crucial role in facilitating the operation and management of complex scientific facilities, particularly in fusion energy research. EPICS provides a robust framework for real-time monitoring and control of experimental parameters, ensuring precise and reliable operation of experimental devices. MDSplus, on the other hand, offers a comprehensive data management system, enabling efficient storage, retrieval, and analysis of experimental data.
Collaborative efforts at the NBTF, involving scientists from different institutions across Europe, India and Japan, emphasize the importance of data sharing and advanced computing infrastructures. Common computing platforms facilitate analysis of big datasets, aiding informed decision-making. Remote collaboration tools play a crucial role in fostering communication among global experts. With involvement from EUROfusion and ITER experts, strict collaboration accelerates ITER's neutral beam heating and diagnostic systems development. Moreover, remote participation, data visualization, and efficient operation tools are essential for enhancing accessibility and collaboration in scientific research. These tools enable researchers to remotely access and control experimental facilities, visualize data in real-time, and collaborate with colleagues worldwide, fostering interdisciplinary collaboration and accelerating the necessary scientific developments.
In this contribution we will present the architecture design and implementation of the operation tools based on EPICS and MDSplus, developed using Grafana, Python and Node.js, to provide the remote participation, data visualization, efficient operation and collaboration tools that are indispensable components of modern scientific research infrastructure, empowering scientists to conduct advanced fusion experiments.
The rapidly evolving field of fusion research demands sophisticated data processing and management tools to handle the immense volumes of experimental data generated by tokamak operations. In response to these needs, TokSearch, a fusion data processing framework developed by General Atomics and recently open-sourced, has undergone significant upgrades to better serve the fusion research community and is currently being integrated with the Fusion Data Platform (FDP). The FDP, currently under development by a team led by General Atomics, is a comprehensive, open-access infrastructure designed to streamline the storage, processing, and sharing of fusion energy research data, facilitating collaboration and accelerating scientific discovery in the field. We present the latest advancements in TokSearch, focusing on its integration within the Fusion Data Platform project, and the substantial improvements in data processing capabilities it offers.
Key to these advancements is the introduction of a new, more versatile pipeline abstraction that supports arbitrary data processing tasks. This enhancement is underpinned by an updated application programming interface (API) that incorporates a flexible plugin framework, enabling seamless extension and customization of data processing functions. This new architecture facilitates the integration of diverse data sources and processing methods, significantly broadening the applicability of TokSearch in fusion research workflows.
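To illustrate the pipeline-with-plugins concept in a generic way (this sketch is not the actual TokSearch API), a minimal pipeline might chain record-processing steps as follows; the class, step functions and shot numbers are assumptions.

```python
# Illustrative pipeline abstraction with pluggable processing steps
# (a generic sketch of the concept; not the actual TokSearch API).
from typing import Callable, Dict, List

class Pipeline:
    """Apply an ordered list of record-processing functions to a set of shots."""
    def __init__(self, shots: List[int]):
        self.shots = shots
        self.steps: List[Callable[[Dict], Dict]] = []

    def map(self, func: Callable[[Dict], Dict]) -> "Pipeline":
        self.steps.append(func)
        return self

    def compute(self) -> List[Dict]:
        records = [{"shot": s} for s in self.shots]
        for step in self.steps:
            records = [step(r) for r in records]   # could be parallelised per record
        return records

# Hypothetical plugin steps: fetch a signal, then derive a scalar from it.
def fetch_ip(record):
    record["ip"] = [1.0e6, 1.1e6, 0.9e6]           # stand-in for a data-source plugin
    return record

def max_ip(record):
    record["ip_max"] = max(record["ip"])
    return record

results = Pipeline([180000, 180001]).map(fetch_ip).map(max_ip).compute()
print(results)
```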
Benchmarking results highlight the efficiency gains achieved through these updates, demonstrating orders of magnitude improvements in the retrieval and processing of DIII-D experimental data. These improvements are attributed to the optimized use of parallel file systems, which enable rapid access and manipulation of large datasets, along with integration of multiple distributed computing frameworks, providing flexible deployment options in high performance computing environments.
The integration of TokSearch into the FDP exemplifies how the project will provide powerful, scalable tools for data-driven research in fusion energy. Details of the FDP project will be provided, with a particular focus on how TokSearch's capabilities are leveraged to support large-scale data processing tasks.
Work supported by General Atomics’ Internal Research and Development Funding and in part by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, using the DIII-D National Fusion Facility, a DOE Office of Science user facility, under Award No. DE-FC02-04ER54698, along with Office of Fusion Energy Sciences Award No. DE-SC0024426.
Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
The Wendelstein 7-X fusion experiment is an optimized stellarator, equipped with more than 60 different diagnostic systems. Many of those diagnostics comprise optical and/or infrared spectrum sensor arrays of various technologies (cameras). In order to enable full compliance with accepted scientific methods, all data generated by these systems (during experiments) are stored in the W7-X ArchiveDB for 10+ years.
Prior to the operational phase OP2.1 conducted in 2022/23, all video data acquired by W7-X diagnostics had been stored in the W7-X ArchiveDB as raw, uncompressed data. A growing number of cameras and camera-based systems, rising frame rates and increasing spatial resolutions induced challenges both in terms of network bandwidth and storage capacity. Increasing experiment run times to 30 minutes and above, combined with the ensuing necessity to ingest the video data into the archive in real time, amplifies these issues even further.
While other institutions chose to store only selected data, and to do so using lossy off-the-shelf video compression schemes optimized around human perception, both approaches were deemed unacceptable by scientists at Wendelstein 7-X. After a detailed analysis of open-source and commercially available compression software, no pre-existing solution meeting all requirements could be found on the market. Consequently, researchers at W7-X teamed up with compression experts at Google Research to define a customized algorithm and data format. The core requirements were:
The result is Fusion Power Video (FPV).
This talk (and subsequent paper) describes the development process as well as the algorithms and data structures used by FPV and its integration into the W7-X archive infrastructure. In arguing the benefits and challenges of the solution, it is also a call for acquiring and processing generic camera data based on a first-principles approach: to work with standardized, processing-optimized fixed-point binary representations of relative intensity, namely left-aligned 16 (or 8) bit integers.
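As a concrete illustration of this left-aligned fixed-point representation, a minimal sketch assuming a 12-bit sensor (the bit depth and frame size are assumptions for the example, not values from the abstract):

```python
import numpy as np

# Assumed example: a camera delivering 12-bit counts (0..4095). Left-aligning
# in 16 bits places the most significant sensor bit at bit 15, i.e. a shift
# left by (16 - 12) = 4 bits. Relative intensity is preserved.
bit_depth = 12
raw_frame = np.random.randint(0, 2**bit_depth, size=(480, 640), dtype=np.uint16)

left_aligned = (raw_frame << (16 - bit_depth)).astype(np.uint16)

# The inverse (for processing that expects the native range) is a right shift.
restored = left_aligned >> (16 - bit_depth)
assert np.array_equal(raw_frame, restored)
```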
A new cloud platform to realize a plasma/fusion experimental data ecosystem, named "Plasma and Fusion Cloud," has been technically verified on some fundamental issues. Enormous amounts of diagnostic data require a high-performance computing (HPC) platform not only for the LHD physics data analyses but also for the next-generation experiments, such as ITER and JT-60SA. Performance evaluations have been made at NIFS by using the HPC supercomputer "Raijin" and the LHD primary data storage system, both of which are directly connected by a 100 Gbps Ethernet optical link. The test results show that almost the full bandwidth can be used by means of multiple parallel streams.
Commercial or academic clouds are also very promising as high-performance data computing platforms not only for physics data analyses but also for real-time plasma and plant controls. A proposal to store all 2.0 petabytes of compressed LHD physics data in AWS (Amazon Web Services) S3 cloud storage for open access has been accepted under the AWS Open Data Sponsorship Program (ODP) [1]. AWS is also one of the commercial providers of computing clouds in the framework of NII’s Research Data Cloud (RDC) [2] in Japan, allowing LHD data users to increase or decrease the computing power they need on demand, simply by paying for the CPU hours used.
In order to make plasma and fusion diagnostic data "FAIR" [3], registration of all of LHD’s diagnostic data objects with digital object identifiers (DOIs) is now under way, with one DOI per acquisition node and plasma pulse, e.g. https://doi.org/10.57451/lhd.bolometer.123456.1. In 2023, more than 1.2 million DOIs were issued for the LHD diagnostic data. Since more than 20 million diagnostic and analyzed data objects remain unregistered, the registration work will continue for at least the next few years.
To provide API access methods to data users on the Internet, the APIs must be able to properly control read and write access privileges for privileged and non-privileged users and groups, even though all the LHD data are publicly open. The necessary modifications to the data server have been implemented by applying the secure "gRPC" framework [4]. The new gRPC-based data servers successfully demonstrated their reliability and operability during the most recent 25th LHD campaign in 2024.
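The abstract does not detail the server implementation; as a minimal sketch of how per-method read/write privileges can be enforced in a gRPC service in Python, assuming a hypothetical method-naming convention and a token passed in the call metadata (the real LHD servers may do this differently):

```python
# Minimal sketch only: per-method access control with a gRPC server
# interceptor. The naming convention ("Write" in the method path) and the
# token check are assumptions, not the actual LHD server design.
from concurrent import futures
import grpc

PRIVILEGED_TOKENS = {"example-secret-token"}  # hypothetical credential store

class AccessControlInterceptor(grpc.ServerInterceptor):
    def intercept_service(self, continuation, handler_call_details):
        method = handler_call_details.method
        metadata = dict(handler_call_details.invocation_metadata)
        is_write = "Write" in method  # assumed naming convention
        if is_write and metadata.get("auth-token") not in PRIVILEGED_TOKENS:
            # Reject write calls from non-privileged callers.
            def deny(request, context):
                context.abort(grpc.StatusCode.PERMISSION_DENIED,
                              "write access requires a privileged token")
            return grpc.unary_unary_rpc_method_handler(deny)
        return continuation(handler_call_details)  # reads pass through

server = grpc.server(futures.ThreadPoolExecutor(max_workers=8),
                     interceptors=(AccessControlInterceptor(),))
# The actual data servicers (generated from the .proto files) would be
# registered here before starting the server.
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```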
The demonstrations and technical verifications carried out in this study clearly suggest how the next-generation fusion data research center should be built on cloud technology.
This research was supported by MEXT as "Developing a Research Data Ecosystem for the Promotion of Data-Driven Science".
[1] Open Data on AWS, "NIFS LHD Experiment", https://registry.opendata.aws/nifs-lhd/ .
[2] NII Research Data Cloud, https://rcos.nii.ac.jp/en/service/ .
[3] FORCE11, "The FAIR Data Principles", https://force11.org/info/the-fair-data-principles/ .
[4] gRPC, https://grpc.io/ .
Interpreting diagnostic data promptly after it is recorded remains a crucial aspect of modern tokamaks, where rapid evaluation of plasma performance during a pulse is essential for operational efficiency and the implementation of the scientific programme. In this work, we present recent progress in developing a demonstrator for an in-pulse processing workflow for ITER, from simulated magnetic measurement data to the live display of equilibrium reconstruction. This demonstrator is a fundamental step in developing a complete in-pulse data analysis and processing workflow that, whilst designed for ITER, is applicable to any device.
We start with a set of measurements of the magnetic field sources. These are taken from existing ITER scenario simulations and include the description of the magnetic systems (coil position, turns, geometry, etc.). This requires knowledge of the poloidal flux from the desired ITER scenarios, together with the corresponding plasma current and the machine description of the different components that determine the behaviour (such as passive structures, wall, and toroidal field coils). We use a Bayesian forward model that adds statistical noise to the original signals, ensuring a more realistic set of magnetic signals. This is the same model that will be used for inference, thus allowing an accurate comparison of modelling versus real measurements.
These data are used as input to the real-time processes implemented by the magnetics plant systems inside the Plant Operation Zone network, with the aim of simulating a complete signal acquisition chain of the magnetics diagnostic. From there, they are handled as real plant signals: transferred to the external plant network, downsampled and used as the initial input data for a concise in-pulse analysis workflow. This transfer of data between the two networks (Plant Operation Zone and external plant network) is particularly important for testing, since they have different security requirements and performance, and all measured parameters will have to go through this communication channel in order to be analyzed and to contribute to the overall pulse performance analysis. From the external network, an equilibrium reconstruction is calculated, which is then displayed in the control room as a so-called Live Display. This reconstruction can be seen as a foundational step towards the complete plasma analysis workflow.
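As a simple illustration of the downsampling step between the plant network and the analysis side (the sampling rate and decimation factor below are arbitrary assumptions, not ITER parameters), an anti-aliased decimation can be sketched with SciPy:

```python
import numpy as np
from scipy.signal import decimate

# Assumed example: a 10 kHz magnetics-like test signal downsampled by 10x.
fs = 10_000.0                      # original sampling rate (assumption)
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 7 * t) + 0.05 * np.random.randn(t.size)

factor = 10                        # decimation factor (assumption)
downsampled = decimate(signal, factor, ftype="fir", zero_phase=True)
print(signal.size, "->", downsampled.size)  # 10000 -> 1000 samples
```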
In this work we give an analysis of performance, live downsampling efficiency, and robustness of the system, with emphasis on the Live Display use case. We also validate the process by comparing the inferred plasma current and equilibrium reconstruction with the synthetic signals used as the input for this process. Finally, we give an overview of limitations and bottlenecks where the process clearly needs improvement before it is deployed in the production environment of a running experiment.
Our ongoing efforts involve expanding to more complete analysis workflows to ultimately develop a fully validated high performance in-pulse processing infrastructure for ITER.
Analyzing physics data from experimental fusion reactors is important for the R&D of demonstration reactors. The Japanese fusion community is planning to transfer all of ITER's raw data in near-real time to the Remote Experiment Centre (REC) in Japan and provide it, along with a supercomputer, to domestic researchers so that they can freely analyze ITER data. "Near real-time" transfer means that the transfer of one shot's data is completed before the start of the next shot.
The generation rate of measured data in ITER is assumed to be 2 GB/s in the initial phase and 50 GB/s in the mature phase. For near real-time data transfer from ITER to the REC, speeds equal to or greater than these are desirable. The duration of one shot in the initial phase is expected to be 500 seconds, so the amount of data will be 2 GB/s × 500 s = 1 TB.
If the transfer target is a single large file, the file transfer tool MMCFTP [1] can be used to transfer the file between Europe and Japan at a speed of about 100 Gbps. However, the physics data of experimental fusion reactors is a collection of files of various sizes from a large number of sensors. It is difficult to read, write, and transfer many small files at high speed. This is called the Lots of Small Files (LOSF) Problem.
In this presentation, we evaluate a method using virtual disks to achieve near real-time transfer from ITER to the REC. A virtual disk is mainly used as storage for a virtual machine (VM): it appears as a file system inside the VM but as a single file on the VM host. The acquired data are stored on the virtual disk by the VM, and the virtual disk file is then transferred at high speed to the REC by the VM host. After the transfer is completed, the virtual disk is attached to a VM at the REC, and the ITER data can be used for analysis.
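A minimal sketch of the host-side steps, assuming qcow2 as the virtual disk format and scp as a stand-in for whichever high-speed transfer tool is actually compared in the study (paths, size and hostnames are placeholders):

```python
# Sketch of the VM-host side of the workflow: create a virtual disk that the
# acquisition VM fills with many small files, then ship the single disk file
# to the remote site. Paths, size and the transfer command are placeholders;
# the study compares several disk formats and transfer tools.
import subprocess

DISK = "/var/lib/libvirt/images/shot_123456.qcow2"   # hypothetical path
REMOTE = "rec-host.example.org:/data/incoming/"      # hypothetical target

# 1) Create the virtual disk on the host (it is attached to the acquisition
#    VM, which writes the per-sensor files into the filesystem inside it).
subprocess.run(["qemu-img", "create", "-f", "qcow2", DISK, "1T"], check=True)

# 2) After the shot, transfer the single large file instead of millions of
#    small ones, avoiding the Lots-of-Small-Files bottleneck.
subprocess.run(["scp", DISK, REMOTE], check=True)
```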
We installed one server each at the QST REC in Rokkasho and NII in Tokyo, and prepared a virtual machine running ITER's CODAC Core System v7 on each. There are two L2VPN connections between the REC and NII, one of which goes through Amsterdam, New York, and Los Angeles, with a round trip time (RTT) of about 374 ms. Since the RTT between ITER and the REC is about 250 ms, we conducted the experiment in a more difficult transfer environment. A portion of NIFS's LHD data, approximately 1 TB, was used as the transfer data. There are various virtual disk formats and various file transfer tools; we compared these combinations and evaluated whether near real-time transfer is possible.
In this presentation, we will report the results of this experiment and briefly introduce related topics, including the upgrade of the Japanese academic network.
References
[1] K. Yamanaka, S. Urushidani, H. Nakanishi, et al., "A TCP/IP-based constant-bit-rate file transfer protocol and its extension to multipoint data delivery", Fusion Engineering and Design, Vol. 89, No. 5, pp. 770-774 (2014).
Currently, a large amount of knowledge has been accumulated in the field of fusion research. The long-term experience of several generations of scientists resides in accumulated experimental databases, theoretical works, mathematical models, codes, literature, technical documentation, etc.
Modern information technologies make it possible to consolidate such databases on a single information platform and to organize mass processing of these data in one IT space. Examples of such processing include cross-checking of computational codes, data mining, multivariate optimization, classification of plasma discharges, construction of empirical models, modelling of discharges, etc.
In addition, such an information platform makes it possible to provide standardized access to experimental data from existing scientific installations, from a single place, for scientists, engineers and other specialists from other institutes and research centers. Such solutions are, in particular, the result of modelling the Russian Remote Participation Center for ITER (Model Russian Remote Participation Center, RPC [2]), whose tasks at the construction stage of ITER also include comprehensive testing of the remote-interaction functions between the technological and diagnostic systems of ITER during the commissioning stage.
Currently, a common IT space for fusion research, FusionSpace.ru [1], is being created in Russia. The main aim of this system is to unite the main fusion research institutes (physics, technology and materials) for performing joint research, development and construction activities in the field of controlled fusion.
The report presents the FusionSpace.ru platform, which makes it possible to unite Russian institutions for the exchange of experimental data and knowledge, the sharing of mathematical codes and computing resources, and the conduct of joint experiments within a Remote Control Room, drawing on the experience of creating a model of the Russian Remote Participation Center for ITER.
The work was done under contract with state corporation “Rosatom” №Н.4к.241.09.23.1036 and Task Agreement with International Organization ITER №IO/21/TA/4500000169.
[1] S. Portone, E. Mironova, O. Semenov, Z. Ezhova, E. Semenov, A. Mironov, A. Larionov, N. Nagorny, A. Zvonareva, L. Grigoryan, D. Guzhev, A. Nikolaev, I. Semenov, and A. Krasilnikov, "Infrastructural Hardware Platform of the Common IT Space for Fusion Research (Fusionspace.ru)", Physics of Atomic Nuclei, 2023, Vol. 86, Suppl. 1, pp. S1-S9, ISSN 1063-7788.
[2] O. Semenov, L. Abadie, A. Larionov, L. Lobes, X. Mocquard, A. Mironov, E. Mironova, N. Nagornyi, S. Portone, N. Pons, I. Semenov, and D. Stepanov, "Approach to Remote Participation in the ITER Experimental Program: Experience from the Model of the Russian Remote Participation Center", 13th Technical Meeting on Plasma Control Systems, Data Management and Remote Experiments in Fusion Research.
Currently operating fusion devices and future fusion reactors share common features such as complex systems, limited space, expensive components, and some nuclear-related aspects. Consequently, traditional component installation or maintenance is inefficient. The immersive virtual installation achieved on EAST provides researchers with functionalities such as scheme discussions and installation simulations before the actual installation. Issues related to low model accuracy, inability to provide early warnings for collision detection, and poor system comfort are addressed. The study focuses on a method for 3D scene reconstruction using binocular vision, enabling real-time reconstruction of EAST. Utilizing deep learning techniques, component recognition and distance calculations as well as collision warnings are investigated. Finally, a stereovision system is established, generating depth information through high-definition stereoscopic displays. Taking electromagnetic measurements for diagnosis component installation as an example, a remote component installation simulation system is developed, providing researchers with a virtual platform that integrates virtual and real component installation, enhancing the efficiency of component installation. Preliminary realization and design of remote component installation are described.
CODIS (Control Operation Data Intelligent System) is being developed for the control and operation of HL-3 and other nuclear fusion experimental devices. Its purpose is to integrate all people and all subsystems involved in a fusion experimental device into a unified system. The entire system is divided into three layers: the personnel function interface layer, the CODIS Core layer, and the system function integration layer. These layers are connected through enterprise standards and interfacing operations with the CODIS Core.
The personnel function interface layer is mainly based on cross-platform web technology and a series of APIs provided to third parties. It realizes the interaction between the CODIS Core and the participants in the fusion experiment, who are responsible for operation, commissioning, simulation, data analysis, management, delivery, integration, security, quality, maintenance, etc.
CODIS Core is mainly composed of a series of services, frameworks and platforms written in Java that run on a virtual machine cluster, including full-stack security, intelligent platform, IoT platform, edge cloud network infrastructure, data access analysis and display, control/operation/monitoring business, personal and equipment security, device operation quality, workflow and more.
The system function integration layer includes a universal physical fusion device operating framework that abstracts the control and operation characteristics of various types of physical fusion devices, a universal numerical fusion device operating framework that abstracts the operation characteristics of various simulation codes, and a universal digital twin operating framework based on the operating behaviour of physical and numerical devices. It also includes MINI-CODIS, which is designed to provide a commissioning platform for third parties developing contracted subsystems outside the fusion device.
CODIS is already operational on HL-3 and some other fusion devices in China.
An upgrade of the TCABR tokamak ($R_0 = 0.62$ m, $a \leq 0.18$ m, $I_p \leq 120$ kA and $B_0 \leq 1.1$ T) is being designed to enable the generation of a well controlled environment to assess the impact of resonant magnetic perturbation (RMP) fields on edge localised modes (ELMs). This impact can be investigated over a broad range of (i) plasma shapes, (ii) RMP coil geometries and (iii) perturbed magnetic field spectra. To address this issue, a unique set of in-vessel RMP coils was designed and, in this work, their conceptual design is presented. This set of coils is composed of three toroidal arrays of coils on the low field side and three toroidal arrays of coils on the high field side. Each of these six toroidal arrays is composed of 18 coils, hence enabling the application of RMP fields with toroidal mode numbers $n \leq 9$ to control/mitigate ELMs. To study dynamical effects of RMP fields of different toroidal mode numbers, all rotating simultaneously with different velocities, each of the 108 RMP coils will be powered independently by power supplies that can provide voltages of up to 4 kV and electric currents of up to 2 kA, with frequencies varying continuously from 0 Hz (DC) to 10 kHz. A set of physical criteria was used to determine the optimal coil geometry and the respective number of turns to reduce the coil currents and voltages during operation with alternating current. The conceptual design of the RMP coils was executed using the so-called vacuum approach and the linear, single-fluid plasma response model implemented in the visco-resistive MHD code M3D-C$^1$. Work supported by the Ministry of Science, Technology and Innovation: National Council for Scientific and Technological Development - CNPq.
In tokamaks, the superposition of a toroidal magnetic field, due to external coils around the entire torus, and the poloidal field, generated by the plasma current itself, is responsible for the plasma confinement. An interesting situation is when the magnetic field is time-independent, as is the case in MHD equilibrium configurations. For a symmetric plasma equilibrium configuration with an ignorable coordinate, i.e., the toroidal angle in tokamaks, the magnetic field line equations can be cast in the form of canonical equations, if the ignorable coordinate plays the role usually assigned to physical time in classical mechanics$^{[1]}$. As the magnetic field is divergence free, we can describe the field lines using an area-preserving map, with respect to a section of the torus at a fixed toroidal angle$^{[2]}$. The resulting phase space of the field lines is identical to a Hamiltonian phase space, indicating that the field lines act, at least locally, as trajectories. The analogy between the Hamiltonian formalism and the equations for the magnetic field lines is extremely useful since we can use the methods of Hamiltonian theory to interpret the results and characterize the dynamic regimes observed in experiments and computational simulations. Magnetic field lines are a non-mechanical example of a system that can be described by the Hamiltonian formalism$^{[3]}$. From the variational principle, we were able to present the description of field lines in confined plasmas for different coordinates and with the inclusion of an external perturbation.
[1] Bernardin, M. P., & Tataronis, J. A. (1985). Hamiltonian approach to the existence of magnetic surfaces. Journal of Mathematical Physics, 26(9), 2370-2380.
[2] Morrison, P. J. (2000). Magnetic field lines, Hamiltonian dynamics, and nontwist systems. Physics of Plasmas, 7(6), 2279-2289.
[3] Viana, R. L., Mugnaine, M., & Caldas, I. L. (2023). Hamiltonian description for magnetic field lines in fusion plasmas: A tutorial. Physics of Plasmas, 30(9).
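For reference, a minimal statement of the canonical form referred to above, in standard notation with the toroidal angle $\phi$ playing the role of time, the poloidal flux $\psi$ acting as the momentum and the poloidal angle $\theta$ as the coordinate:

$$\frac{d\theta}{d\phi} = \frac{\partial H}{\partial \psi}, \qquad \frac{d\psi}{d\phi} = -\frac{\partial H}{\partial \theta},$$

where $H(\psi,\theta,\phi)$ is the field-line Hamiltonian. For an axisymmetric equilibrium $H = H_0(\psi)$ and $d\theta/d\phi = dH_0/d\psi = 1/q(\psi)$, the inverse safety factor, while a non-axisymmetric perturbation adds a term $H_1(\psi,\theta,\phi)$; this is the setting for the area-preserving maps mentioned above.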
JET pulses are regularly disrupted by the iterative process of tuning the gas controllers. Current gas calibration methods cannot accommodate the time-varying parameters of the gas system, which leads to poor repeatability of experiments. Therefore, there is a need for improved gas control algorithms. Developing these algorithms requires a significant amount of testing to evaluate performance. This testing must be done during a pulse and the controller will need re-tuning after any major change to the plant’s operation. As such, creating and maintaining these gas controllers costs huge amounts of machine pulsing time and, as a result, money. It is also particularly challenging to compete for operational time during DT campaigns, so a project was undertaken to produce a model of the plant that would enable offline testing of control algorithms to save resources.
This paper outlines the framework chosen to create this model, the process of refining the input data, the approach to training, and the statistical methods for testing. It describes the use of deep artificial neural networks to perform predictive analysis on historic JET data, as well as how uncertainty quantification was applied to the model to validate the output. It also highlights how similar techniques could be used to produce models of other plant systems to build up a digital twin.
The successful operation of tokamak devices, such as ITER, depends on effectively managing disruptive events. These occurrences can abruptly terminate discharges and trigger thermal and current quenches, posing severe threats to device structural integrity. Thus, precise disruption budgeting is essential to achieve operational objectives.
Disruption damage is quantified through a disruption budget consumption (DBC) approach, evaluating the electromagnetic and thermal load released during disruptions under various plasma conditions. DBC serves as a measure of the potential "cost" incurred by disruptions, which cumulatively affect device lifetime. Accurate DBC formulation is crucial for achieving low disruption rates and high mitigation success rates.
Prediction and mitigation strategies are important for reaching operational goals. Achieving target disruption rates necessitates the development of avoidance and prediction strategies, while effective mitigation depends on reliable disruption prediction techniques and efficient mitigation measures. Accurate prediction relies on robust models that leverage DBC-informed insights and experimental data.
Disruption damage comes from thermal and electromagnetic loads. The thermal load mainly affects first-wall integrity, with pre-thermal-quench parameters identified as critical. The electromagnetic load arises from current quenches, particularly affecting the vacuum vessel and inducing eddy currents in the first wall.
For DBC quantification, key parameters include plasma current, toroidal field, radiated power, and current quench rate. Training datasets should cover a wide range of operational scenarios while preserving the device's safety lifetime. Machine learning models trained on DBC-informed datasets enhance disruption prediction capabilities while consuming only a limited disruption budget on the device. Future device operation benefits from DBC-guided discharge management, assigning a "cost" to each discharge to optimize data collection while protecting the device.
Wendelstein 7-X, the world's largest stellarator-type fusion device, has successfully gone through its first operation phases, showing reliable operation assisted by the W7-X Segment Control and experiment-planning framework. During the upgrade phase towards the full machine extent, not only the actively cooled divertor and first wall are being installed, but also a number of new diagnostics. Furthermore, many diagnostics which in previous campaigns could only participate in experiment operation via auxiliary triggers are now more closely integrated, to take advantage of the pre-checking of program parameter settings for reasonability, central event-based segment switching, continuous data streaming and monitoring, and standardized parameter logging.
While the Segment Control framework was implemented flexibly enough to cope with the growing number of integrated components to be controlled, the Experiment Program Editor Xedit had to be adapted for efficient setting of the many experiment parameters.
The core requirement remains that the experiment planner always has the possibility to get a complete overview of the involved components and to intervene if necessary. However, with an increasing number of systems, the planner is neither able to enter every parameter value nor aware of the reasonableness of every internal parameter in detail. Experiment planning must therefore become a joint but coordinated task of the involved technicians and physicists.
With these new requirements, Xedit was extended to cope with "TaskLinks": externally configured program parts (tasks) of individual components can be linked into the planned sequence of the central experiment program. The preparation of these tasks is done using a local Xedit instance, in the same way users are already familiar with from creating local programs for commissioning or calibration runs. Centrally, as with all integrated components, the linked components' tasks are then visualized and all parameters are checked for limit violations or other pre-defined rules before the complete planned program is saved, ready for execution.
All developments have been closely coordinated with the users, both the experiment leaders and those responsible for the components. The use of Xedit for experiment planning at the WEST tokamak as well, together with the practical collaboration, ensures the development of a generally applicable solution for distributed editing of experiment programs.
The experimental data generated by EAST during operation include different types, such as operation logs, control data, engineering data and diagnostic data. Combined with visual operation requirements and data structures, multiple data service systems have been established based on a single architecture. As the experiment continues to run, the scale of experimental data, the system access load, and the business complexity increase rapidly; the existing system is limited in maintainability and scalability, and the data lack a unified access mechanism.
In view of the limitations faced by the EAST data services, the EAST Integrated Data Access System (IDAS) has been designed based on the Spring Cloud framework. Firstly, Vue is used to build a system portal that provides cross-platform access to EAST long-pulse diagnostic data, giving users a unified entrance supporting cross-terminal access to the EAST data services. Secondly, a data engine was built using Kafka as the message center to manage and monitor, across multiple indicators, experimental data comprising more than 500 million signal records and a total volume of approximately 3000 TB. Thirdly, a unified identity authentication center is built on Spring Security to simplify the identity authentication process and provide single sign-on across systems. Finally, Spring Cloud components are introduced to establish a service governance mechanism realizing important functions such as microservice registration, health checks, and service forwarding.
The IDAS system has been successfully adopted in the EAST experiment and provides users with a comprehensive data service system supporting unified access to experimental data, experimental data management and monitoring, and unified identity authentication.
The HL-3 Fusion Big Data Platform is a system developed on the open-source Hadoop platform, specifically tailored for processing tokamak experimental data. Unlike traditional big data platforms that deal with business data periodically, tokamak experiments generate massive amounts of data within seconds or minutes, and these data are mostly transmitted and stored in binary format.
In this context, the HL-3 team has researched and developed a big data platform suitable for handling fusion experiment data from tokamak devices. This platform seamlessly integrates with the existing tokamak data acquisition and database systems, effectively parsing, cleaning, and converting binary data into formats readily processable by downstream applications, while meeting the time-response requirements of tokamak researchers for data processing.
The Data Source component is comprised of three parts: real-time experiment data collected during tokamak discharges (e.g., coil voltage, current), engineering-related data associated with the tokamak device (e.g., device dimensions, temperature variations of the tokamak walls during experiments), and video and audio data captured during the experiment (e.g., infrared camera data of the discharge process).
The Data Integration section primarily utilizes data acquisition tools to periodically retrieve data files from a file server or read real-time experimental data from a high-speed cache.
The Data Process stage utilizes the batch computation engine MapReduce and the stream processing engines Spark Streaming and Flink to process data according to the various service logics, subsequently storing the processed data in HDFS or Ceph as per the specified requirements.
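As an illustrative sketch of the stream-processing leg only (topic names, broker addresses and paths are placeholders, not the HL-3 configuration, and the Kafka connector package must be available on the Spark classpath), a Spark Structured Streaming job reading acquisition messages from Kafka and landing them in HDFS could look like:

```python
# Illustrative sketch: consume acquisition messages from Kafka with Spark
# Structured Streaming and persist them to HDFS as Parquet. All names are
# placeholders, not the actual HL-3 platform configuration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hl3-stream-sketch").getOrCreate()

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "kafka-broker:9092")
       .option("subscribe", "shot-raw-data")
       .load())

# Kafka delivers binary key/value columns; real parsing of the binary payload
# (according to the acquisition format) would replace this simple cast.
decoded = raw.selectExpr("CAST(key AS STRING) AS signal", "value", "timestamp")

query = (decoded.writeStream
         .format("parquet")
         .option("path", "hdfs:///fusion/processed/shot-raw-data")
         .option("checkpointLocation", "hdfs:///fusion/checkpoints/shot-raw-data")
         .start())
query.awaitTermination()
```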
The Data Service component currently serves two primary scenarios: calculating physical metrics for scientific research by physics data analysts, and deriving basic feature data for AI developers to use in AI model training.
One of the primary challenges associated with plasma confinement in tokamaks is the escape of energetic particles at the plasma edge. One approach to managing the flux of these particles is modifying the electric and magnetic fields in this region, thereby establishing conditions that reduce transport. To investigate which configurations of electric and magnetic fields create transport barriers that mitigate transport and improve confinement, this study employed an area-preserving map describing the guiding center orbits of particles in the plasma. This map, derived from a Hamiltonian description of particle dynamics in the plasma, enabled the construction of Poincaré sections, which made it possible to analyze the characteristics of the system's dynamics under different configurations.
Turbulence dominates the radial transport at the edge region of tokamak plasmas, reducing magnetic confinement in fusion experiments, and its control remains a challenge in physics and engineering. Information theory can provide useful tools to quantify the degree of order/disorder of turbulent fluids and plasmas. In this work we analyze numerical simulations of a simplified nonlinear model of turbulence induced by drift waves in tokamak plasmas. By varying a control parameter we construct a bifurcation diagram of a transition from a turbulent regime to a regime dominated by zonal flows, in which turbulence is mostly suppressed. This transition is then characterized by computing the normalized spectral entropy of the turbulent patterns observed in the numerical simulations. Our results show that the turbulent regime displays a higher degree of entropy, the regime dominated by zonal flows is characterized by lower values of entropy, and the transition from the low-to-high confinement occurs abruptly. This work demonstrates that information theory can improve our understanding of the turbulent fluctuations that arise in the edge region of tokamak plasmas.
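A minimal sketch of the normalized spectral entropy used as the order/disorder measure here, computed from the power spectrum of a fluctuating signal (the test signals below are synthetic stand-ins, not the simulation output):

```python
import numpy as np

def spectral_entropy(signal):
    """Normalized spectral entropy: close to 1 for broadband (turbulent-like)
    spectra, close to 0 when the power concentrates in a few modes."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))**2
    n_bins = spectrum.size
    p = spectrum / spectrum.sum()
    p = p[p > 0]                              # avoid log(0)
    return -np.sum(p * np.log(p)) / np.log(n_bins)

# Synthetic test signals standing in for the simulated turbulent fields.
t = np.linspace(0, 1, 4096, endpoint=False)
zonal_like = np.sin(2 * np.pi * 5 * t)        # power in a single mode
turbulent_like = np.random.randn(t.size)      # broadband fluctuations
print(spectral_entropy(zonal_like), spectral_entropy(turbulent_like))
```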
An $\mathbf{E}\times\mathbf{B}$ drift wave transport model was implemented to investigate chaotic transport at the edge of magnetised plasmas in tokamaks. We show that pronounced reversed-shear radial electric field profiles at the plasma edge can create shearless transport barriers (STBs) which confine most of the particle orbits inside the plasma. These barriers are related to the presence of extreme values of the rotation number profile, and their behaviour enables us to identify confinement regimes for the chaotic transport as a function of the amplitude of the electrostatic fluctuations and the radial electric field intensity at the plasma edge. We found that, as the radial electric field increases, the STBs become more resistant to perturbations, enabling access to an improved confinement regime that prevents chaotic transport. In a strictly qualitative way, we thus observe an L-H-like transition through the description of shearless transport barriers which, analogous to experimental results, exhibit better confinement regimes for larger radial electric fields at the plasma edge. In particular, the transition curve in the parameter plane associated with the STB has a fractal structure, due to the non-integrable nature of the associated Hamiltonian.
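A minimal numerical sketch of the rotation (winding) number profile whose extremum locates the shearless barrier, using the standard nontwist map as a stand-in for the drift-wave model (the map and parameter values are illustrative, not those of this work; the rotation number is well defined only on invariant curves):

```python
import numpy as np

def rotation_number(x0, y0, a=0.615, b=0.4, n_iter=5000):
    """Winding number of the standard nontwist map
    x_{n+1} = x_n + a (1 - y_{n+1}^2),  y_{n+1} = y_n - b sin(2*pi*x_n),
    used here only as an illustrative nontwist system."""
    x, y = x0, y0
    for _ in range(n_iter):
        y = y - b * np.sin(2 * np.pi * x)
        x = x + a * (1 - y * y)   # x left unbounded so the winding accumulates
    return (x - x0) / n_iter

# Profile along a line of initial conditions: the local extremum of omega(y0)
# marks the shearless curve, which acts as a transport barrier.
y0_values = np.linspace(-0.6, 0.6, 121)
omega = [rotation_number(0.5, y0) for y0 in y0_values]
print(y0_values[int(np.argmax(omega))])   # approximate shearless position
```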
Plasma instabilities are still a concern when thermonuclear conditions are approached, as they can impose severe constraints on the maximum achievable plasma performance. When operating in the so-called high confinement mode (H-mode), a very steep plasma pressure profile is formed at the plasma edge, which leads to repetitive instabilities known as edge localized modes (ELMs). The crash of these modes leads to high transient heat fluxes onto the divertor plates, significantly reducing the lifetime of its components. Experiments have demonstrated that externally applied resonant magnetic perturbations (RMPs) can be used to control the plasma edge stability, thus providing a way to trigger ELMs prematurely. When field lines from inside the perturbed plasma volume connect to the divertor targets, structures termed magnetic footprints appear on the plates, delimiting the spots over which most of the exhausted heat and particles are deposited. A significant upgrade of the Tokamak à Chauffage Alfvén Brésilien (TCABR) is being designed to make it capable of creating a well controlled environment where the physics basis behind the effect of RMP fields on ELMs can be addressed. The core of this upgrade corresponds to the design and construction of an innovative set of 108 in-vessel ELM control coils, installed both on the high field side (HFS) and on the low field side (LFS). Modeling the magnetic field produced by these non-axisymmetric coils is a fundamental step for their design and, of course, their use during future experiments. In this project, to estimate the vacuum magnetic field produced by this set of coils in the plasma region, the geometry of the conductors of each coil is modeled using sufficiently small rectilinear segments of current. Subsequently, the magnetic field created by each segment is calculated using the Biot-Savart law. With both the geometry and the electric current of each coil given, the perturbed magnetic field distribution inside the TCABR vacuum vessel can be calculated for various plasma scenarios, including the field spectra. Controlling the intersection of magnetic lobes with the divertor target plates is also an important issue for maintaining the integrity of the plasma-facing components, as it controls the levels of heat and particle deposition on the target plate surfaces. Therefore, the calculated perturbed magnetic field is also used to model the separatrix splitting, magnetic lobes and footprints for various plasma scenarios in TCABR.
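A minimal sketch of the segment-wise Biot-Savart evaluation described above (the coil geometry below is a simple circular test loop and the current a round number, not the TCABR coil design):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def biot_savart(points, current, obs):
    """Magnetic field at `obs` from a polyline coil discretized into straight
    segments, summing mu0*I/(4*pi) * (dl x r) / |r|^3 over segment midpoints."""
    starts, ends = points[:-1], points[1:]
    dl = ends - starts                       # segment vectors
    mid = 0.5 * (starts + ends)              # segment midpoints
    r = obs - mid                            # midpoint -> observation vectors
    r_norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current / (4 * np.pi) * np.cross(dl, r) / r_norm**3
    return dB.sum(axis=0)

# Test: a 1 kA circular loop of radius 0.1 m; the field at the center should
# be close to mu0*I/(2R), i.e. roughly 6.3 mT along the loop axis.
theta = np.linspace(0, 2 * np.pi, 201)
loop = np.column_stack([0.1 * np.cos(theta),
                        0.1 * np.sin(theta),
                        np.zeros_like(theta)])
print(biot_savart(loop, 1e3, np.array([0.0, 0.0, 0.0])))
```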
In this work, a nonlinear model is introduced for the vertical and horizontal position of the plasma column in the Damavand tokamak. Using this model as a simulator, a multi-input multi-output (MIMO) controller has been designed. This controller is also implemented on a digital signal processor (DSP) based control system.
In the first stage, a nonlinear model is identified for the plasma vertical and horizontal position, based on the multilayer perceptron (MLP) neural network (NN) structure. Estimation of the model parameters has been performed with the back-propagation error algorithm using the Levenberg-Marquardt gradient descent optimization technique. The model is verified against experimental data from the plant. In the second stage, a MIMO controller is designed for the model. Finally, the MIMO controller was implemented on the DSP to simultaneously control the plasma vertical and horizontal position. The practical results show appropriate performance of this controller.
In the control system, real-time control modules were established based on the TMS320C6717B DSP. These made it possible to implement classical linear as well as nonlinear intelligent controllers for the plasma position parameters. This new platform can improve the quality and quantity of research activities in plasma physics for the Damavand tokamak.
Keywords: Tokamak, Plasma, Neural network modeling, MIMO controller, DSP.
For certain discharge configurations in tokamaks, transport barriers reduce particle transport, thereby improving plasma confinement. In this context, a model has been applied to describe turbulent transport caused by drift waves at the plasma edge, attributing this transport to chaotic orbits originating from the $\textbf{E}\times\textbf{B}$ drift. In the present work, we use this model to investigate the influence of the magnetic safety factor profile on the onset, maintenance, and destruction of these particle transport barriers. The model yields a set of Hamiltonian differential equations that describe the motion of the guiding center of a test particle at the plasma edge, which are integrated numerically. We analyze the global behavior of trajectories using Poincaré sections. Introducing a nonmonotonic safety factor profile leads to a deep modification of the phase space structure, resulting in the appearance of shearless curves that are robust against electrostatic fluctuations; they therefore act as transport barriers in phase space, preventing chaotic orbits from escaping. The results of this work indicate that multiple shearless transport barriers can emerge in such advanced plasma discharges, with the safety factor profile acting as a triggering mechanism for such barriers.
Texas Helimak is a toroidal magnetic confinement device in a configuration known as Simple Magnetic Tori (SMT). Its simple geometry and wide radial region provide plasma density and temperature conditions analogous to those in the edge and Scrape-Off Layer (SOL) of tokamak devices. This allows for the evaluation of different mechanisms for effectively controlling turbulence. One such mechanism used to reduce turbulence and, consequently, anomalous transport is the modification of the radial electric field profile by imposing an external electric potential (bias). This can be achieved by installing an electrode or, as in the case of the Texas Helimak, using a set of plates (referred to as bias plates). In this work, we implement tools for the analysis and interpretation of data obtained through a set of Langmuir probes along the radial coordinate in experiments conducted on the Texas Helimak under different bias conditions to evaluate the effect of bias imposition on turbulence characteristics in the Helimak.
My master's dissertation, which gives this work its title, had as its main goal to reproduce existing results regarding plasma radial density profiles in the Texas Helimak device and to extend those results to data not previously analyzed.
In the Texas Helimak, a set of 96 Langmuir probes connected to a digitizer with a 500 kHz sampling frequency registers the plasma's ion saturation current for plasma discharges of typically about 10 seconds. In order to obtain the density profile, it is first necessary to treat the original ion saturation current data and extract coherent information about its mean, standard deviation, skewness and kurtosis. For the analysis in question, only the probes at the higher vertical positions are of interest, and since the data set of each discharge contains about 5 million samples per probe, it is convenient to divide the full data set into a series of consecutive smaller windows of 1000 samples each, calculate the relevant statistical estimators for each window, and take the median over all windows to avoid distortions caused by measurement fluctuations. Figure 1 shows the comparison of the statistical estimators calculated over the full time series and via the median of smaller windows for one of the analyzed discharges.
Fig.1: https://drive.google.com/file/d/1eMFZAwdyu4Peclf24L3CE19h41yfx1sX/view?usp=drive_link
With this, it is possible to introduce an estimator for the turbulence level of the plasma confinement, calculated as the ratio between the ion saturation current's standard deviation and its mean. By fitting the radial position Rmax of maximum mean ion saturation current with a second-degree polynomial, it is possible to center the radial profiles of the mean ion saturation current (proportional to plasma density), its standard deviation and the turbulence level on the radial position relative to Rmax for different discharges with different electric current (Ic) values. Figure 2 shows the comparison of these three statistical estimators for five different discharges, where a shared tendency with respect to the Rmax position, in both the growth of the turbulence level and the peak of the standard deviation, is observed for all the discharges.
Fig.2: https://drive.google.com/file/d/1a__o6KzkEiK1XZoWm0nj4j4w7dJ4kEYf/view?usp=drive_link
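A minimal sketch of the windowed estimator procedure and turbulence level described above (the synthetic series below stands in for the ion saturation current of one probe):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def windowed_stats(series, window=1000):
    """Split the series into consecutive windows, compute the statistical
    estimators per window and take the median over windows, as described."""
    n_win = series.size // window
    chunks = series[:n_win * window].reshape(n_win, window)
    return {
        "mean": np.median(chunks.mean(axis=1)),
        "std": np.median(chunks.std(axis=1)),
        "skewness": np.median(skew(chunks, axis=1)),
        "kurtosis": np.median(kurtosis(chunks, axis=1)),
    }

# Synthetic stand-in for ~10 s of I_sat sampled at 500 kHz (5e6 points/probe).
rng = np.random.default_rng(0)
isat = 1.0 + 0.2 * rng.standard_normal(5_000_000)

stats = windowed_stats(isat)
turbulence_level = stats["std"] / stats["mean"]   # fluctuation level estimator
print(stats, turbulence_level)
```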
After that, the probability density functions (PDFs) of the ion saturation current can be estimated as the contour of the histogram of the normalized ion saturation current at a given radial position. Probes at the radial positions immediately to the left and to the right of Rmax were chosen as representatives of, respectively, the high field and low field sides of the confinement, and Figure 3 shows the comparison of these representative PDFs for five different discharges.
Fig.3: https://drive.google.com/file/d/12i5L6p_EiwILTFYbt3lHx0tOHf8SuU8M/view?usp=drive_link
These analysis methods were then applied to other sets of Texas Helimak discharges, obtaining equivalent results for the turbulence and density profiles. This poster presentation will discuss the applied methods and the results obtained for the analyzed discharges.
In fusion plasmas, the stickiness effect manifests as the prolonged trapping of magnetic field lines in a specific region for many toroidal turns, significantly impacting plasma transport. We utilize a concept based on recurrence plots, which unveils the presence of a hierarchical structure of islands around islands where chaotic orbits become trapped. This analysis is performed on a Hamiltonian system describing the magnetic field lines within a tokamak. Furthermore, we can differentiate between the various levels of this structure and calculate the cumulative distribution of trapping times.
A model based on the theoretical framework of drift wave turbulence is utilized to investigate anomalous transport in plasma magnetic confinement experiments. The drift wave model incorporates a perturbation consisting of an infinite number of spatial modes and a broad spectrum of frequencies, formulated in such a way that it admits a Hamiltonian approximation leading to a nonlinear map. Investigations show that the radial profiles of the electric field, safety factor, and parallel velocity play an important role in transport, leading to topological rearrangements in phase space, among them the presence of the shearless curve. In this work, we explore the influence of many spatial modes on the shearless barrier in terms of the control parameters, using winding number profiles, recurrence times, and the transmissivity parameter space. In addition, even after the destruction of the shearless curve, the transport-blocking effect persists in its neighbourhood to some extent, a consequence of stickiness acting in the region.
An upgrade of the TCABR tokamak ($R_0=0.62$ m, $a=0.18$ m, $I_p \leq 120$ kA and $B_0\leq1.1$ T) is being carried out to allow for studies of the impact of resonant magnetic perturbation (RMP) fields on plasma instabilities known as edge localized modes (ELMs). ELMs may impinge heat fluxes on the surface of plasma-facing components that are typically well above the values supported by the existing advanced materials. It has been shown that RMP fields of relatively small amplitude may reduce heat fluxes caused by ELMs to values below the allowable thresholds supported by plasma-facing components. To study the impact of RMP fields on ELMs in TCABR plasmas, an innovative set of in-vessel RMP coils is being designed. In this work, the mechanical design of those coils is presented. For the coils to work properly, a set of engineering requirements must be met. Firstly, magnetohydrodynamic (MHD) simulations using the non-linear two-fluid resistive MHD code M3D-C$^1$ show that, to meet the physical requirements, the coils must withstand currents as high as 60 kA-turn. Also, the coils will have to operate with both direct and alternating currents, at frequencies up to 10 kHz. Due to their relatively high self-inductances, the coils will have to withstand peak voltages of up to 4 kV. Since these coils will be subject to a strong magnetic field (about 2.25 T), they will also experience strong magnetic forces (as high as 10 kN). Finally, since the coils will be installed inside the vacuum vessel, the materials and processes employed in their fabrication must be compatible with high vacuum (p $\geq 1\times 10^{-8}$ mbar) and withstand temperatures of about 200$^\circ$C during wall conditioning. All these requirements make the design and construction of this set of coils a significant challenge. Due to the complexity of the system, the design of the coils is being carried out with a transdisciplinary approach and support from multi-physics simulations in the finite-element Ansys software. The maximum equivalent von Mises stresses obtained for the proposed mechanical design satisfy both ASME and ITER criteria.
This work addresses a pending issue in developing thermonuclear fusion: the triggering of neoclassical tearing modes (NTMs) by sawteeth (ST) in tokamak plasmas. Although ST and NTMs have been intensively investigated in recent decades, a quantitative and validated theory of ST-triggered NTMs is yet to be developed. Specifically, it is not possible to reliably predict the triggering of NTMs in scenarios of ITER operation with high plasma current and sustained operation with power gain factor $Q=10$. In this work, a study is carried out that aims to improve the current understanding of the physical mechanisms behind the triggering of NTMs by ST, using a theoretical/computational approach applied to the TCABR tokamak, which is operated by the Plasma Physics Laboratory of the Institute of Physics of the University of São Paulo. The M3D-C$^{1}$ code, a state-of-the-art code developed at the Princeton Plasma Physics Laboratory, USA, is used to model the plasma evolution. It is expected that the knowledge acquired during this work will lead to the development of strategies that inhibit the coupling between ST and NTMs. This model can then be used predictively to indicate safer operating zones with higher plasma pressure in tokamaks.
\textbf{Keywords:} Thermonuclear fusion, neoclassical tearing modes, sawteeth oscillations, tokamaks, plasma physics.
The turbulence in magnetically confined plasmas exhibits certain universal properties, with the presence of coherent high-density structures generically called bursts. These structures propagate convectively and therefore have a relevant impact on plasma confinement. In the classical Stochastic Pulse Train Model (SPTM), bursts are assumed to appear randomly, so the time intervals between them should follow an exponential distribution. In a recent study, however, it was observed that when a positive bias is applied during shots in the Texas Helimak, a simple toroidal plasma device, in addition to an increase in the number of detected bursts, there is a time correlation between successive bursts, and it was proposed that the waiting times follow a gamma distribution with shape parameter $k$ and scale parameter $s$, related to the time interval between successive pulses, $\tau$, by $\overline{\tau} = ks$ with variance $V_{\tau} = k s^{2}$. Evidence of this correlation can be seen as the presence of peaks in the histograms of time intervals, in the power spectral density of the ion saturation current fluctuations obtained with a Langmuir probe, and in the conditional average of the bursts. Moreover, simulations indicate that there are two types of pulse regimes acting on the plasma: time-correlated pulses and an uncorrelated pulse background.
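A minimal sketch of checking the quoted gamma statistics on a set of waiting times (the data below are synthetic; with a real series one would use the measured intervals between detected bursts):

```python
import numpy as np
from scipy.stats import gamma

# Synthetic waiting times standing in for measured intervals between bursts.
rng = np.random.default_rng(1)
waiting_times = rng.gamma(shape=3.0, scale=2.0e-4, size=5000)  # seconds

# Fit a gamma distribution with zero location, recovering shape k and scale s.
k, loc, s = gamma.fit(waiting_times, floc=0)

# Consistency check against the relations quoted above: mean = k*s, var = k*s^2.
print(k, s)
print(waiting_times.mean(), k * s)
print(waiting_times.var(), k * s**2)
```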
At the National Institute for Fusion Science (NIFS), the "Plasma and Fusion Cloud" concept is currently under development. This concept aims to establish a data analysis environment not only for fusion plasma experiments but also for other fields, transcending disciplinary boundaries among universities in Japan and promoting the reuse of results such as experimental data obtained from existing experiments and the analysis programs that have been developed.
The Open Data Server [1], which is part of this initiative, is already in operation, and data collected from LHD experiments are now available to the public.
These data were collected and analyzed during 25 cycles of experiments and are registered in the analysis server as analytical data, totaling more than 20 million items of data from 1,000 different measurements.
The data is being made open not only to LHD experimenters but also to a wide range of related fields such as fusion research, plasma physics, and condensed matter physics, as well as to the information science field as big data for research promotion.
In addition, the Open Data Server will provide a variety of data, not limited to fusion experiments, and is currently releasing not only LHD experimental data but also data obtained from the Aurora Observation Project [2].
For data analysis in LHD experiments, AutoAna is used to automate the launch of analysis programs. These programs are executed in containers using Docker, and by increasing or decreasing the number of execution containers as necessary, quasi-real-time analysis is realized during the discharge cycle. Container execution is currently performed by more than 20 PCs, but there are problems with the proper allocation of computing resources, maintainability, scalability, etc. To solve these problems, we are planning to run the containers on large-scale clusters or cloud computing resources. This will enable flexible allocation of computational resources and facilitate the execution of analysis programs even by researchers who do not directly participate in LHD experiments.
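A minimal sketch of scaling a pool of analysis containers with the Docker SDK for Python (the image name, labels and environment variables are hypothetical placeholders, not the actual AutoAna configuration; a cluster or cloud deployment would use an orchestrator instead):

```python
# Sketch: launch N analysis worker containers and scale the pool up or down.
# Image name, labels and environment are hypothetical; the real AutoAna setup
# (and a future cluster/cloud deployment) will differ.
import docker

client = docker.from_env()
IMAGE = "nifs/analysis-worker:latest"   # hypothetical image

def scale_workers(target):
    workers = client.containers.list(filters={"label": "role=autoana-worker"})
    for container in workers[target:]:          # too many: stop the excess
        container.stop()
    for _ in range(target - len(workers)):      # too few: start more
        client.containers.run(
            IMAGE,
            detach=True,
            labels={"role": "autoana-worker"},
            environment={"DATA_SERVER": "raw-data.example.nifs.ac.jp"},
        )

scale_workers(8)   # e.g. grow the pool during the discharge cycle
```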
Specifically, we intend to use commercial cloud services and academic cloud infrastructures such as mdx. To verify the effectiveness of this approach, we established a network between the supercomputer system Raijin and the raw data management system at NIFS, and developed an environment to run analysis programs using experimental data on the supercomputer. In addition, last year we concluded an open data sponsorship agreement with Amazon [3], and we plan to copy the data obtained from the LHD experiments, including raw data, to cloud storage and make them available via the Internet.
In this presentation, we will give an overview of these plans and introduce the current status of the project.
[1] LHD experiment data repository, https://exp.lhd.nifs.ac.jp/opendata/LHD/
[2] Aurora Observation Project, https://projects.nifs.ac.jp/aurora/en/
[3] Open Data on AWS, “NIFS LHD Experiment”, https://registry.opendata.aws/nifs-lhd/ .
Robust control of the tokamak plasma is still one of the greatest challenges for a fusion reactor, due to the complicated plasma dynamics, its response to complicated structures and actuators, and the extreme control requirements. In recent years, artificial intelligence has shown great potential in the prediction of plasma states and in the control of plasma equilibrium. On EAST, all disrupted shots have been collected to establish a disruption database. AI models based on CNN, LSTM, Random Forest and XGBoost were then trained to predict disruptions caused by impurity bursts, MARFEs and other, unknown reasons. Cross-machine disruption prediction has been carried out with the Alcator C-Mod and DIII-D disruption databases in collaboration with the MIT team. In order to obtain a more accurate and faster real-time estimation of the plasma position and vertical growth rate, we trained a neural network model using off-line EFIT equilibrium data; its real-time performance and accuracy have been verified in experiments. By using a system-identification model of the plasma response to Lower Hybrid Wave power, obtained from experiments, to train a reinforcement learning (RL) controller, reliable plasma pressure control was demonstrated. By using a neural network model, trained on a rigid state-space model of the plasma vertical response, to extract the controller parameters in real time, a self-adaptive controller has been applied in experiments and more robust vertical control has been achieved. Moreover, to increase the robustness of the vertical stability control and obtain more reliable decoupling from the slow vertical motion, the Deep Deterministic Policy Gradient algorithm was used to train another vertical controller. Robust Adversarial Reinforcement Learning was also used to train the controller to suppress, as much as possible, the fast-control-coil overcurrent caused by perturbations from the slow vertical motion. Simulations showed good performance of the controller.
Plasma with an elongated configuration has the advantage of higher discharge parameters, at the cost of vertical displacement instability. Once the vertical displacement is out of control, it inevitably leads to a major disruption, causing great damage to the device, with unacceptable consequences if it occurs on ITER. Therefore, active control of the vertical displacement is necessary. The vertical displacement is affected by the passive structure, power supply delay, etc., making it a high-order system with a complex response. As the system's control capability is limited, and the perturbations are complex and diverse, the requirements on controller robustness are high. Deep learning has a strong learning capability, so we used a deep reinforcement learning approach to achieve fast control of the plasma vertical displacement.
We first verified the feasibility of reinforcement learning to control plasma vertical displacement. We trained the vertical displacement controller using the Deep Deterministic Policy Gradient (DDPG) algorithm and tested its performance. After testing, we found that the dynamic response of the controller is better than the conventional PID control, but it is less resistant to PF coil current perturbations.
In order to increase the perturbation resistance of the model, we adopted Robust Adversarial Reinforcement Learning (RARL). The strategy of RARL is to add an adversary, which is also an agent; the adversary attacks the weaknesses of the protagonist agent, so the agent needs to find the optimal strategy in the worst-case scenario. We refer to the DDPG-based RARL as DDPG-RARL. Traditional vertical displacement control cannot completely avoid overcurrent of the IC coil caused by perturbations of the PF coil current. Therefore, in our work, the adversary attacks the controller by applying perturbations to the PF coil current based on the observations in EAST.
We performed a comparative test of the models' resistance to perturbations by using an adversary to attack the DDPG-RARL-based controller, and then applying the intercepted attack pattern to the DDPG-based controller. We found that the training process yields adversaries with different characteristics, which can be categorized into two types: those performing high-amplitude attacks and those performing high-frequency attacks. We found that DDPG-RARL outperforms DDPG for both high-amplitude and high-frequency attacks.
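The abstract does not give implementation details; as a toy skeleton of the adversarial training structure only (a scalar linear plant and random-search gain updates stand in for the EAST response model and the DDPG actor-critic networks, and all parameters are illustrative), the alternating protagonist/adversary loop can be sketched as:

```python
# Toy skeleton of adversarial training: a protagonist damps an unstable scalar
# "vertical displacement" while an adversary injects a bounded disturbance
# (standing in for PF-current perturbations). The plant, the random-search
# updates and all parameters are illustrative, not the DDPG-RARL setup.
import numpy as np

rng = np.random.default_rng(0)
GROWTH, B_CTRL, B_DIST = 1.05, -0.08, 0.05   # toy unstable plant coefficients

def rollout(k_ctrl, k_adv, steps=200):
    """Return the protagonist's cost (the adversary receives the negative)."""
    z, cost = 0.1, 0.0
    for _ in range(steps):
        u = k_ctrl * z                        # protagonist feedback action
        w = np.clip(k_adv * z, -1.0, 1.0)     # bounded adversarial disturbance
        z = GROWTH * z + B_CTRL * u + B_DIST * w
        cost += z * z + 0.01 * u * u
    return cost

k_ctrl, k_adv = 0.0, 0.0
for epoch in range(200):
    # Protagonist update: keep a random perturbation of its gain if it lowers
    # the cost against the current adversary.
    trial = k_ctrl + 0.1 * rng.standard_normal()
    if rollout(trial, k_adv) < rollout(k_ctrl, k_adv):
        k_ctrl = trial
    # Adversary update: keep a perturbation of its gain if it raises the cost.
    trial = k_adv + 0.1 * rng.standard_normal()
    if rollout(k_ctrl, trial) > rollout(k_ctrl, k_adv):
        k_adv = trial

print(k_ctrl, k_adv, rollout(k_ctrl, k_adv))
```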
The US goal (March 2022) to deliver a Fusion Pilot Plant [1] has underscored the urgency of accelerating the fusion energy development timeline. This will rely heavily on validated scientific and engineering advances driven by HPC together with advanced statistical methods featuring artificial intelligence/deep learning/machine learning (AI/DL/ML) that must properly embrace verification, validation, and uncertainty quantification (VVUQ). Especially time-urgent is the need to predict and avoid large-scale “major disruptions” in tokamak systems. This presentation highlights the deployment of recurrent and convolutional neural networks in Princeton's deep learning code "FRNN", which enabled the first adaptable predictive DL model capable of efficient "transfer learning" while delivering validated predictions of disruptive events across prominent tokamak devices [2]. Moreover, the AI/DL capability can provide in real time not only a “disruption score”, as an indicator of the probability of an imminent disruption, but also a “sensitivity score” that indicates the underlying reasons for the predicted disruption [3]. Real-time prediction and control capability has recently been significantly advanced with a novel surrogate model/HPC simulator ("SGTC") [4], a first-principles-based prediction and control surrogate needed for projections to future experimental devices (e.g., ITER and FPPs) for which no "ground truth" observational data exist. Finally, an exciting and rapidly developing area that cross-cuts engineering design with advanced visualization capabilities involves AI-enabled advances in Digital Twins, with the FES domain providing stimulating exemplars. This area has also seen prominent recent illustrations of the increasingly active collaborations with leading industries such as NVIDIA, which enabled productive advances for tokamak digital twins with dynamic animations of the advanced AI-enabled surrogate model SGTC [4] rendered in NVIDIA's "Omniverse" visualization tool [5]. More generally, the scientific merits of Digital Twins are well analyzed in the recent US National Academies report on “Foundational Research Gaps and Future Directions for Digital Twins” [6].
REFERENCES:
[1] https://www.whitehouse.gov/ostp/news-updates/2022/04/19/readout-of-the-white-house-summit-on-developing-a-bold-decadal-vision-for-commercial-fusion-energy/
[2] Julian Kates-Harbeck, Alexey Svyatkovskiy, and William Tang, "Predicting Disruptive Instabilities in Controlled Fusion Plasmas Through Deep Learning," Nature 568, 526 (2019).
[3] William Tang et al., Special Issue on Machine Learning Methods in Plasma Physics, Contributions to Plasma Physics (CPP) 63, Issue 5-6 (2023).
[4] Ge Dong et al., "Deep Learning-based Surrogate Model for First-principles Global Simulations of Fusion Plasmas," Nuclear Fusion 61, 126061 (2021).
[5] William Tang et al., "AI-Machine Learning-Enabled Tokamak Digital Twin," Proceedings of the 2023 IAEA FEC, London, UK (2023).
[6] https://www.nationalacademies.org/our-work/foundational-research-gaps-and-future-directions-for-digital-twins (2023).
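As a rough illustration of the cross-device "transfer learning" idea highlighted above (and not of the FRNN code itself), the following sketch pretrains a small LSTM disruption predictor and then fine-tunes only its output head on a handful of target-device sequences; all layer sizes, signals and data are placeholders.

# Hedged sketch of cross-device transfer learning for a sequence-based
# disruption predictor: pretrain an LSTM on one tokamak's shots, then
# freeze the recurrent layers and fine-tune only the output head on a
# small number of shots from the target device. Illustration only; this
# is not the FRNN code or its actual architecture.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_signals, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_signals, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-time-step disruption score
    def forward(self, x):                  # x: (batch, time, n_signals)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h)).squeeze(-1)

model = DisruptionPredictor(n_signals=12)
# ... pretrain on source-device shots here ...

# Transfer step: freeze the shared recurrent layers, retrain only the head
for p in model.lstm.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

target_x = torch.randn(8, 200, 12)                   # placeholder target batch
target_y = torch.randint(0, 2, (8, 200)).float()     # placeholder labels
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(target_x), target_y)
    loss.backward()
    optimizer.step()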
The reliability of the plasma density measurement is crucial for plasma density control in a tokamak. Currently, the density feedback system in EAST uses the line-averaged density from either the hydrogen cyanide (HCN) laser interferometer or the polarimeter-interferometer (POINT) diagnostic system. However, insufficient laser energy or noise interference can lead to erroneous density signals, i.e., abnormally high or low values, which frequently cause the density feedback control to malfunction. To assess the reliability of the density signal used for feedback control while still obtaining density information reliable enough to meet the control requirements, this study applied the ensemble-learning algorithm LightGBM to estimate the plasma density. The input parameters for training the LightGBM model are the plasma stored energy (Wmhd), plasma internal inductance (li), normalized beta (βN), plasma elongation (κ), boundary safety factor (q95), loop voltage (Vloop), and plasma current (Ip). The training dataset was collected from EAST experiments in 2024; 80% of the samples were randomly chosen as the training set and the remaining 20% served as the test set. The final results show that the density inferred by the LightGBM model on the test set has an average error within 5% of the experimentally measured density. The LightGBM model has been implemented in the plasma control system for real-time calculation, with an inference time of around 20 µs.
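A minimal sketch of such a LightGBM density estimator is given below, assuming the seven input signals listed above are available as columns of a table; the file name, column names and hyperparameters are illustrative assumptions, not the actual EAST dataset or model settings.

# Minimal sketch of a LightGBM density estimator of the kind described
# above. Column names and the data file are illustrative assumptions.
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

df = pd.read_csv("east_2024_samples.csv")
features = ["wmhd", "li", "beta_n", "kappa", "q95", "v_loop", "ip"]
X, y = df[features], df["line_averaged_density"]

# 80% of samples for training, 20% held out for testing, as in the abstract
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = LGBMRegressor(n_estimators=500, learning_rate=0.05, num_leaves=63)
model.fit(X_train, y_train)

err = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"mean relative error on test set: {100 * err:.1f}%")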
The traditional algorithm currently used for plasma equilibrium reconstruction in tokamaks assumes a plasma current profile of a certain polynomial form (usually 2nd or 3rd order) or a tension-spline form and performs a least-squares fit to the diagnostic data under the model given by the Grad-Shafranov equation [1]. The physics-informed neural network (PINN) integrates measurement data and mathematical models governed by parameterized PDEs and implements them through neural networks, so the networks can be trained with the additional information obtained by enforcing the physical laws [2]. Inverse-problem solving with PINNs on JET has demonstrated their great potential for plasma equilibrium reconstruction from external magnetic diagnostics in a tokamak [3].
Instead of using a meshless method, a new physics-informed neural network with fixed fine grids was developed to reconstruct the EAST plasma shape from external magnetic diagnostics. The basic neural network architecture is composed of two separate hidden-layer sections. The first section, consisting of 6 hidden layers with 20 neurons each, predicts the poloidal flux at the given points of the specified grid. The second section has 2 hidden layers with 10 neurons each and calculates the plasma pressure and poloidal current flux from the poloidal flux. The loss function includes 5 parts: the predicted poloidal flux at the flux loops and poloidal magnetic field at the pick-up coils against the measured values; the residuals of the Grad-Shafranov equation at grid points inside and outside the plasma separatrix; the plasma pressure and poloidal current flux at points on the plasma separatrix; the difference between the predicted and measured total plasma current; and the poloidal flux at the flux loops and poloidal magnetic field at the pick-up coils calculated through the Green's functions of the fixed grid and the external magnetic diagnostics against the measured values. Calculations with this new physics-informed neural network verify its effectiveness in reconstructing the EAST plasma shape on a simulated equilibrium dataset with and without random noise. Reconstructions using magnetic diagnostics from EAST discharges confirm its reliability. The new PINN approach provides another viable method for reconstructing the plasma shape of the EAST tokamak.
References
[1] L.L. Lao et al., Nucl. Fusion, 25, 1611, 1985
[2] Karniadakis, G. E. et al., Nat. Rev. Phys, 3, 422, 2021
[3] Riccardo Rossi et al., Nucl. Fusion, 63, 126059, 2023
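To illustrate the grid-based PINN formulation described above, the following PyTorch sketch builds the two network sections with the stated layer sizes and evaluates a Grad-Shafranov residual on a fixed grid by finite differences; the grid extent and the remaining loss terms are assumptions and are only indicated in comments, so this is a structural sketch rather than the EAST implementation.

# Hedged sketch of the two-section PINN: one MLP maps fixed (R, Z) grid
# points to poloidal flux psi, a second MLP maps psi to plasma pressure p
# and poloidal current function F, and the Grad-Shafranov residual is
# evaluated on the fixed grid with finite differences.
import math
import torch
import torch.nn as nn

MU0 = 4e-7 * math.pi

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-2], sizes[1:-1]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers, nn.Linear(sizes[-2], sizes[-1]))

flux_net = mlp([2] + [20] * 6 + [1])       # section 1: (R, Z) -> psi
profile_net = mlp([1] + [10] * 2 + [2])    # section 2: psi -> (p, F)

# Fixed fine grid (placeholder extent, not the actual EAST grid)
nR, nZ = 65, 65
R = torch.linspace(1.2, 2.4, nR)
Z = torch.linspace(-1.2, 1.2, nZ)
RR, ZZ = torch.meshgrid(R, Z, indexing="ij")
grid = torch.stack([RR, ZZ], dim=-1).reshape(-1, 2)
dR, dZ = (R[1] - R[0]).item(), (Z[1] - Z[0]).item()

def grad_shafranov_loss():
    psi = flux_net(grid)                               # (nR*nZ, 1)
    p, F = profile_net(psi).unbind(dim=1)              # p(psi), F(psi)
    dp = torch.autograd.grad(p.sum(), psi, create_graph=True)[0].squeeze(1)
    dF = torch.autograd.grad(F.sum(), psi, create_graph=True)[0].squeeze(1)

    psi2d = psi.reshape(nR, nZ)
    # Delta* psi = d2psi/dR2 - (1/R) dpsi/dR + d2psi/dZ2 (interior points)
    d2R = (psi2d[2:, 1:-1] - 2 * psi2d[1:-1, 1:-1] + psi2d[:-2, 1:-1]) / dR**2
    d1R = (psi2d[2:, 1:-1] - psi2d[:-2, 1:-1]) / (2 * dR)
    d2Z = (psi2d[1:-1, 2:] - 2 * psi2d[1:-1, 1:-1] + psi2d[1:-1, :-2]) / dZ**2
    Rin = RR[1:-1, 1:-1]
    lhs = d2R - d1R / Rin + d2Z

    # Right-hand side of the GS equation: -mu0 R^2 p'(psi) - F F'(psi)
    rhs = (-MU0 * Rin**2 * dp.reshape(nR, nZ)[1:-1, 1:-1]
           - (F * dF).reshape(nR, nZ)[1:-1, 1:-1])
    return ((lhs - rhs) ** 2).mean()

# In the full loss this residual would be combined with the measurement,
# separatrix and total-current terms listed in the abstract.
loss = grad_shafranov_loss()
loss.backward()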
In tokamak plasmas, magnetohydrodynamic (MHD) instabilities severely limit the improvement of plasma parameters and may even lead to plasma disruption events, thereby threatening the safety of device components. The identification of MHD modes is crucial for the study and control of MHD instabilities.
Traditional MHD mode recognition methods mostly use the raw diagnostic signals for correlation calculation and analysis, and building a large MHD mode database by manually identifying mode information is inefficient. A data-driven neural network can discover correlations within the data and efficiently process large-scale datasets. Therefore, using artificial intelligence methods to identify the three common MHD modes in EAST and to build an EAST MHD database is of great significance for statistical research on MHD modes in EAST.
Three MHD modes are commonly observed in EAST experiments: the 2/1 tearing mode, the 3/2 tearing mode, and the 1/1 fishbone mode. Mirnov and soft X-ray signals are the diagnostics commonly used for MHD analysis. Based on these diagnostic signals, typical plasma discharge shots containing the three different MHD modes were selected to build an MHD mode recognition dataset, and two machine learning experiments were conducted on it. First, an MHD classification model was built based on a BP neural network, and sample shots were selected for training and testing; an average accuracy of 91.16% was achieved on the test set. The test results show that the BP network is effective for the 2/1 and 3/2 tearing modes but performs poorly for the fishbone mode. Subsequently, an optimized MHD mode recognizer was developed based on a temporal convolutional network combined with a long short-term memory network, which we name the Temporal Convolutional Hybrid network recognizer for MHD (TCHI-MHD). After training, TCHI-MHD outperforms the BP network, reaching an average accuracy of 98.38% on the test set. In addition, the recognizer was extended to analyze MHD frequency and amplitude, and a preliminary EAST MHD database was built based on TCHI-MHD.
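A schematic of a temporal-convolution plus LSTM hybrid classifier in the spirit of TCHI-MHD is sketched below; the channel count, window length and layer sizes are illustrative assumptions, not the published architecture.

# Hedged sketch of a temporal-convolution + LSTM hybrid MHD mode classifier.
# The input is a window of Mirnov / soft X-ray channels; the output is a
# class score for each mode (plus a "no mode" class, an assumption here).
import torch
import torch.nn as nn

class TemporalConvLSTMClassifier(nn.Module):
    def __init__(self, n_channels=8, n_classes=4):    # 3 MHD modes + "none"
        super().__init__()
        self.tcn = nn.Sequential(                     # local temporal features
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True) # long-range dependencies
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                             # x: (batch, channels, time)
        h = self.tcn(x).transpose(1, 2)               # -> (batch, time, 64)
        h, _ = self.lstm(h)
        return self.fc(h[:, -1])                      # classify the whole window

# Example: a batch of 16 windows, 8 channels, 512 samples per window
model = TemporalConvLSTMClassifier()
logits = model(torch.randn(16, 8, 512))               # (16, 4) class scores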
High-performance disruption prediction and instability event identification are crucial for tokamak plasma operation. Given the intrinsic correlation between plasma disruptions and their precursor instability events, this study introduces a multi-task learning-based integrated model that concurrently processes both tasks. The model identifies three key instability events—Edge Localized Modes (ELMs), Multifaceted Asymmetric Radiation from the Edge (MARFE), and H-mode to L-mode transitions (H-L Back Transition)—while offering disruption predictions. Testing on the Experimental Advanced Superconducting Tokamak (EAST) database revealed the model significantly lowers computational costs and improves performance in prediction and identification compared to single-task methods. The model can be further extended to incorporate more plasma state characterization tasks, which is significant for the steady operation of tokamak plasmas.
The database for this study consists of 12 plasma signals, such as the plasma current (Ip), current control error (Iperror), stored energy (Wmhd), input power (Pinput), and loop voltage (Vloop), from 816 randomly selected discharges of the 2016 to 2023 EAST experimental campaigns, comprising 432 non-disruptive and 384 disruptive discharges. Within these discharges there are 128 instances of H-L back transition, 57 instances of MARFE, and 220 instances of ELMs, all annotated by experts. The entire dataset is divided into training, validation, and test sets in approximately a 7:1.5:1.5 ratio.
The model employs a sequence-to-label neural network architecture with a shared feature extractor and task-specific classifiers, enhanced by a genetic algorithm for hyperparameter selection. Test results demonstrate that the model outperforms traditional single-task learning methods. Specifically, for the H-L back transition identification task the model's AUC increased from 0.69 to 0.74; for MARFE identification the AUC improved from 0.74 to 0.82; for ELM identification the AUC rose from 0.87 to 0.93; and for plasma disruption prediction the model achieved an average warning time of 0.98 seconds, better than the single-task method's 0.87 seconds, with the disruption prediction AUC also increasing from 0.82 to 0.87. Moreover, the model greatly reduces the demand for computational resources: on a single NVIDIA RTX 3090 graphics card, training our model takes approximately 139 minutes, compared to a total of about 512 minutes for training four dedicated single-task models under the same configuration.
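The shared-extractor, multi-head structure described above can be sketched as follows; the encoder type, layer sizes and equal loss weighting are assumptions for illustration, and the genetic hyperparameter search is not reproduced.

# Hedged sketch of a multi-task model: one sequence encoder feeds four
# task-specific classifiers (ELM, MARFE, H-L back transition, disruption).
import torch
import torch.nn as nn

class MultiTaskPlasmaModel(nn.Module):
    def __init__(self, n_signals=12, hidden=64,
                 tasks=("elm", "marfe", "hl_back", "disruption")):
        super().__init__()
        self.encoder = nn.LSTM(n_signals, hidden, num_layers=2,
                               batch_first=True)       # shared feature extractor
        self.heads = nn.ModuleDict(
            {t: nn.Linear(hidden, 1) for t in tasks})  # one classifier per task

    def forward(self, x):                  # x: (batch, time, n_signals)
        h, _ = self.encoder(x)
        last = h[:, -1]                    # sequence-to-label: use final state
        return {t: torch.sigmoid(head(last)).squeeze(-1)
                for t, head in self.heads.items()}

model = MultiTaskPlasmaModel()
out = model(torch.randn(4, 300, 12))       # placeholder batch of 4 sequences
# Joint loss: sum of per-task binary cross-entropies (equal weights assumed)
targets = {t: torch.randint(0, 2, (4,)).float() for t in out}
loss = sum(nn.functional.binary_cross_entropy(out[t], targets[t]) for t in out)
loss.backward()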
Future research will focus on real-time deployment of the model and its transfer across devices, aiming to enhance the model's universality and practicality. These studies not only promise to further improve the precision and response speed of plasma control but will also provide important references for tokamak plasma research and related fields.
The MDSplus [1][2] data management system is widely used in the magnetic fusion energy research community for data storage, management, and remote access. The system provides data access through a vector-based interpreter API and was developed and optimized for rapid single-shot analyses. Machine learning applications, however, require data from large numbers of shots and potentially from different experimental devices. We are developing tools to enable the rapid retrieval of limited sets of data from large numbers of shots. The system will cache the requested quantities in a data warehouse overnight and be able to quickly provide them as inputs to machine learning tasks; the cache will eventually be both transparent and extensible. At this time, various caching mechanisms are being tested and benchmarked using the queries for approximately 100 quantities typically used by disruption-warning ML workflows. The performance of the caching schemes varies greatly depending on the environment in which they are deployed. We provide comparisons of the performance of native MDSplus, an HSDS [3] cache, and a MongoDB [4] cache in various environments. The end goal is to provide fast access to commonly queried quantities regardless of the environment.
[1] Stillerman, J. A., et al. "MDSplus data acquisition system." Review of Scientific Instruments 68.1 (1997): 939-942.
[2] “MDSplus data system,” MIT Plasma Science and Fusion Center, April 2024, https://mdsplus.org/index.php/Introduction
[3] “Highly Scalable Data Service (HSDS)”, The HDF Group, https://www.hdfgroup.org/solutions/highly-scalable-data-service-hsds/
[4] “MongoDB”, MongoDB, Inc., April 2024, https://www.mongodb.com/
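A minimal read-through caching layer of the kind being benchmarked might look like the sketch below, assuming the standard MDSplus thin-client Python API and pymongo; the server name, tree and signal expression are placeholders, and warehouse pre-population and error handling are omitted.

# Hedged sketch of a read-through cache: commonly requested quantities are
# looked up in MongoDB first and fetched from the MDSplus server only on a
# miss. Server, tree and signal names are placeholders.
from MDSplus import Connection
from pymongo import MongoClient

mds = Connection("mdsplus.example.org")            # placeholder MDSplus server
cache = MongoClient()["ml_warehouse"]["signals"]   # placeholder Mongo collection

def get_quantity(tree, shot, expression):
    key = {"tree": tree, "shot": shot, "expr": expression}
    hit = cache.find_one(key)
    if hit is not None:
        return hit["data"]                         # served from the warehouse
    mds.openTree(tree, shot)
    data = mds.get(expression).data().tolist()     # fall back to native MDSplus
    cache.insert_one({**key, "data": data})        # populate cache for next time
    return data

# Example: pull one disruption-warning input for a range of shots
# (tree name, shot range and expression are hypothetical)
values = [get_quantity("efit01", shot, r"\ipmeas") for shot in range(90000, 90100)]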
Plasma disruptions present a significant challenge for tokamak fusion, especially in large devices like ITER, where they can cause severe damage and economic losses. Current disruption predictors mainly rely on data-driven methods, which require extensive discharge data for training. However, future tokamaks will require disruption prediction from the first shot, posing the challenges of data scarcity and of training and parameter selection during the initial operation period. In this period, disruption prediction aims to support safe operational exploration and to accumulate the data needed to develop advanced prediction models; predictors must therefore adapt to the evolving plasma environment during this exploration phase. To address these challenges, this study proposes a cross-tokamak adaptive deployment method based on the Enhanced Convolutional Autoencoder Anomaly Detection (E-CAAD) predictor. This method enables disruption prediction from the first discharge of a new device, addressing the challenges of cross-tokamak deployment of data-driven disruption predictors. The E-CAAD model, trained on non-disruptive samples and on disruption precursor samples when available, suits the unpredictable data environment of a new device. During inference, E-CAAD compresses and then reconstructs each input sample, using the reconstruction error (RE) to measure the similarity between the input and the reconstructed sample. A model trained on ample data returns small REs for normal samples and large REs for disruption precursor samples, so an RE threshold can be set to achieve disruption prediction. Experimental results reveal significant differences between the REs returned by an E-CAAD model trained on an existing device for disruption precursor samples and for non-disruptive samples from a new device. Therefore, the model from the existing device can predict disruptions from the first shot on the new device simply by adjusting the warning threshold. Building on this, an adaptive learn-from-scratch strategy and an adaptive warning-threshold adjustment strategy are proposed to achieve cross-device transfer of the model. The learn-from-scratch strategy enables the predictor to make full use of the scarce data available during the initial operation of the new device while rapidly adapting to changes in the operating environment. The adaptive warning-threshold strategy addresses the difficulty of selecting warning thresholds on new devices, where validation and test datasets are lacking, ensuring that the thresholds track changes in the operating environment. Finally, experiments transferring the model from J-TEXT to EAST show performance comparable to EAST models trained with ample data, achieving a TPR of 85.88% and an FPR of 6.15% with a 20 ms reserved reaction time for the MGI system.
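The reconstruction-error logic at the heart of such a predictor can be sketched as follows; the network sizes and the threshold rule are illustrative assumptions, not the E-CAAD architecture or its adaptive strategies.

# Schematic sketch of reconstruction-error based anomaly detection: an
# autoencoder trained mainly on non-disruptive samples flags a sample as a
# disruption precursor when its reconstruction error (RE) exceeds an
# adjustable warning threshold.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features=16, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)    # one RE per sample

model = Autoencoder()
# ... train on non-disruptive samples with an MSE reconstruction loss ...

# Warning threshold chosen from the RE distribution of normal samples; on a
# new device it would be adjusted adaptively as data accumulate.
normal = torch.randn(1024, 16)                      # placeholder normal samples
threshold = reconstruction_error(model, normal).quantile(0.99)

incoming = torch.randn(32, 16)                      # placeholder live samples
alarm = reconstruction_error(model, incoming) > threshold   # boolean per sample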
During long-pulse steady-state discharges, the plasma position and shape reconstructed by the EFIT code may contain significant errors due to factors such as integrator drift and local magnetic field changes, which in turn affect discharge stability. Optical boundary reconstruction signals, by contrast, are not affected by the complex electromagnetic environment inside the tokamak. High-speed cameras can capture visible-spectrum images of the plasma discharge in real time, and the ability to stably identify the plasma optical boundary in real time under various illumination conditions is of great significance for long-pulse steady-state operation.
In this study, a plasma boundary detection algorithm based on the YOLOv8 algorithm was developed, and a dataset of plasma optical boundaries was established, containing images under various illumination conditions labeled with the aid of a Retinex image enhancement algorithm. After training, the YOLOv8 detection algorithm reaches a mean pixel accuracy (MPA) of 0.872 and an mIoU of 0.74 on the test set; however, its detection of the overall contour and detailed texture of the plasma optical boundary is still insufficient. Consequently, the model first introduces a P6 feature layer to enlarge the receptive field, enabling more effective capture of the overall structure of the plasma optical boundary; the MPA on the test set rises to 0.877 and the mIoU to 0.75. Subsequently, a CBAM attention mechanism is introduced into the model. The channel attention module discerns significant features within the multi-scale feature map, such as the internal texture of the plasma optical boundary, improving segmentation accuracy. The spatial attention module concentrates on the main regions of the plasma optical boundary in the image and reduces attention to noisy and occluded parts, thereby enhancing the model's robustness to noise and occlusion. On the same test set, the resulting Yolov8n-seg-p6-CBAM recognizer demonstrates improved performance: the MPA is improved to 0.901, the mIoU to 0.79, and the single-frame detection time is 4.3 ms on a single NVIDIA 3090 GPU. Plasma optical boundaries can thus be accurately detected under the different illumination conditions of plasma discharge images. This research offers a novel approach for long-pulse steady-state control in future tokamaks.
References:
[1] Yan, H. et al. Optical plasma boundary detection and its reconstruction on EAST tokamak. Plasma Phys. Control. Fusion 65, 055010 (2023).
[2] HAN, X. et al. Development of multi-band and high-speed visible endoscope diagnostic on EAST with catadioptric optics. Plasma Sci. Technol. 25, 055602 (2023).
[3] Woo, S., Park, J., Lee, J.-Y. & Kweon, I. S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV) 3–19 (2018).
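For reference, a baseline YOLOv8 segmentation model can be trained and applied with the ultralytics package as sketched below; the dataset and image file names are placeholders, and the P6 and CBAM modifications reported above would require a customized model definition that is not reproduced here.

# Hedged sketch of a baseline YOLOv8 segmentation workflow with the
# ultralytics package; the customized yolov8n-seg-p6-CBAM variant would be
# loaded from a modified model definition instead of the stock weights.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                     # stock segmentation model

# Train on the labelled plasma-boundary dataset (placeholder file name)
model.train(data="plasma_boundary.yaml", epochs=100, imgsz=640)

# Run inference on a discharge frame and read back the predicted masks
results = model("discharge_frame.png")
masks = results[0].masks          # segmentation masks of the optical boundary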
We perform numerical simulations of a simplified nonlinear model that describes drift-wave turbulence in tokamak plasmas. By changing the value of a control parameter related to adiabaticity, the numerical solutions display a transition from a turbulent regime into a regime dominated by zonal flows, in which turbulence and radial transport are greatly reduced. This transition can be regarded as a low-to-high (L-H) confinement transition, in which the low-confinement regime corresponds to the turbulent regime and the high-confinement regime to the zonal-flow regime. The chaotic mixing properties of the flow are characterized by means of Lagrangian coherent structures (LCS). We compute the finite-time Lyapunov exponent of the velocity field derived from the electrostatic potential to better characterize the chaotic mixing of the turbulent and zonal-flow regimes. These results can contribute to the understanding of turbulent transport processes in magnetic confinement fusion plasmas.
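A finite-time Lyapunov exponent (FTLE) computation of the kind used to extract the Lagrangian coherent structures can be sketched in a few lines of NumPy; the electrostatic potential below is a simple stand-in for the drift-wave model, and the grid, integration horizon and integrator are illustrative choices.

# Hedged sketch: tracers seeded on a grid are advected in an E x B velocity
# field derived from a placeholder potential, and the FTLE is obtained from
# the numerical gradient of the resulting flow map.
import numpy as np

# Placeholder potential phi(x, y, t) = A sin(kx x) cos(ky y + omega t);
# the E x B drift is vx = -dphi/dy, vy = dphi/dx (B and units absorbed).
A, KX, KY, OMEGA = 1.0, 2 * np.pi, 2 * np.pi, 1.0

def velocity(x, y, t):
    vx = A * KY * np.sin(KX * x) * np.sin(KY * y + OMEGA * t)
    vy = A * KX * np.cos(KX * x) * np.cos(KY * y + OMEGA * t)
    return vx, vy

def flow_map(x0, y0, t0=0.0, horizon=2.0, steps=200):
    # Advect tracers with a midpoint (RK2) integrator over the time horizon
    x, y = x0.copy(), y0.copy()
    dt = horizon / steps
    for n in range(steps):
        t = t0 + n * dt
        vx, vy = velocity(x, y, t)
        vx, vy = velocity(x + 0.5 * dt * vx, y + 0.5 * dt * vy, t + 0.5 * dt)
        x, y = x + dt * vx, y + dt * vy
    return x, y

# Seed tracers on a grid and differentiate the flow map numerically
HORIZON = 2.0
xg, yg = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
xf, yf = flow_map(xg, yg, horizon=HORIZON)
dx, dy = xg[0, 1] - xg[0, 0], yg[1, 0] - yg[0, 0]
dxf_dy, dxf_dx = np.gradient(xf, dy, dx)     # axis 0 is y, axis 1 is x
dyf_dy, dyf_dx = np.gradient(yf, dy, dx)

# Largest eigenvalue of the Cauchy-Green tensor C = J^T J for each tracer
c11 = dxf_dx**2 + dyf_dx**2
c12 = dxf_dx * dxf_dy + dyf_dx * dyf_dy
c22 = dxf_dy**2 + dyf_dy**2
lam_max = 0.5 * (c11 + c22 + np.sqrt((c11 - c22)**2 + 4 * c12**2))

# FTLE field; its ridges mark the Lagrangian coherent structures
ftle = np.log(np.sqrt(lam_max)) / abs(HORIZON)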