Note: The meeting will take place virtually. Information on remote participation will be sent to all in due time.
The event aims to provide a forum to discuss new developments in plasma control systems, data management (including data acquisition and analysis), and remote experiments in fusion research. It aims to bring together junior and senior scientific fusion project leaders, plasma physicists (both theoreticians and experimentalists), and experts in these fields.
The ITER Interlock Control System (ICS) requires the application of the IEC 61508 standard for all mission-critical (known as investment protection) control functions. Such functions at nuclear fusion facilities present a unique challenge: events from integrated physics processes must be detected and distributed to actuators under hard real-time constraints on the order of single-digit milliseconds, and sometimes microseconds.
Systems that can meet these kinds of requirements are often bespoke FPGA-based solutions, which are a well-known challenge for IEC 61508 processes. However, to minimize the variety of components and simplify the procurement process for an international supplier base, ITER decided to standardize on off-the-shelf devices. This is where a third challenge arises: providing the required level of assurance that a COTS device is of good quality, fit for purpose, and can be integrated adequately into an investment protection control loop with the necessary level of systematic capability over the development process.
The COTS devices chosen by ITER for the realisation of hard real-time interlock functions require the use of a high-level language, and the associated integrated development tools, to develop the FPGA functionality. This poses a fourth challenge, as IEC 61508 processes are still oriented to Hardware Description Language-based developments rather than high-level languages such as OpenCL, HLS, MathWorks Simulink, or LabVIEW FPGA, which are increasingly used.
This paper explores the method ITER uses to meet these four challenges, with reference to a case-study system architecture with fast, hard real-time requirements. The paper also presents successes and limitations encountered in attempting to apply rigor throughout the system realization process with COTS devices and high-level languages.
High-speed sampling measurements at more than one gigasample per second have rapidly become popular in fusion plasma experiments. At these rates the phase delay of timing signals, such as triggers and clocks, becomes relatively large, so a delay compensation mechanism is indispensable for timing synchronization.
White Rabbit (WR) is a high-precision network time synchronization technology that has been developed and improved in the field of large accelerator physics. It is based on the IEEE 1588-2008 Precision Time Protocol version 2 (PTP v2), which is now widely used in many fields and industries.
While PTP v2 is capable of synchronizing to International Atomic Time (TAI) with sub-microsecond accuracy, WR can synchronize each Ethernet-connected node with sub-nanosecond accuracy.
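The basic mechanism underlying PTP, which WR refines with hardware timestamping and phase measurement, is a two-way exchange of timestamps. A minimal sketch of the standard offset/delay computation follows; it assumes a symmetric link (an assumption WR additionally corrects for):

```python
# Two-way time transfer as used by PTP (and refined by White Rabbit):
# from the four timestamps of a sync exchange the slave estimates both
# the one-way link delay and its clock offset from the master.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
       t3: slave sends Delay_Req, t4: master receives it.
       Assumes a symmetric link delay in both directions."""
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    return offset, delay
```

For example, with a true offset of 5 and a link delay of 3 (arbitrary units), the timestamps (0, 8, 10, 8) recover both quantities exactly.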
As the design specifications and related information of WR are publicly available under an open hardware project, it is more easily applicable to other experimental plants than other industrial high-precision synchronization methods.
As a result of technical surveys and functional verifications, it has been confirmed that WR technology can be applied to the measurement and control systems of fusion-related experimental devices, provided that some missing functionalities, such as divided clocks and group operations of multiple nodes, are additionally implemented.
SPIDER, the ITER full-size beam source built at the Neutral Beam Test Facility (NBTF) in Padova, Italy, has been in operation since June 2018. SPIDER's mission is to optimize the operation of the beam source so that the SPIDER experience can be reused on the full-size prototype of the ITER Neutral Beam Injector, called MITICA, under advanced construction at the NBTF, and in the ITER heating neutral beam injectors.
The exploitation of SPIDER started with short, low-performance pulses lasting up to a few seconds and developed to obtain long pulses lasting up to 3000 seconds. Furthermore, the integration of plant and diagnostic systems has grown over time. The amount of data collected and stored per pulse provides a simple measure of this evolution: it has gone from a few tens of Mbytes in the first campaign pulses to a current maximum of over 150 Gbytes, most of which is produced by infrared and visible cameras.
From the first operation onwards, the control systems have also evolved and consolidated, including components and functions that were not initially foreseen or were developed only in preliminary form. This includes the progressive integration of plant and diagnostic systems and of protection and safety functions.
The paper initially focuses on the architecture of the SPIDER control systems that include CODAS, the system delivering conventional control and data acquisition and management, the central interlock system delivering plant protection, and the central safety system delivering people and environment safety. Since all systems have been developed following the guidelines of ITER for the implementation of control systems, the integrated SPIDER control, interlock and safety systems may provide an interesting example for the ITER plant system developers.
The paper then describes how the top-down definition and implementation of operating states and operational scenarios provide the framework for the integration of control, interlock and safety systems and the basic element for successful operation.
Finally, the paper reports on the lessons learned during these nearly three years of operation, with particular attention to the progressive, continuous evolution and recommissioning of systems.
The Korea Superconducting Tokamak Advanced Research (KSTAR) Fast Interlock System (FIS) event counter assigns counter values to the various events occurring during plasma discharge. It is one of the KSTAR FIS functions that make it possible to check the order of event occurrence. It is implemented using the operating clock of the FPGA, synchronized with the timing signal received from the KSTAR Time Synchronization System (TSS). Each event includes time information at the moment of occurrence, and by analyzing this, the context of the event can be grasped. The counter has a resolution of 10 microseconds and covers almost all events related to the KSTAR plasma discharge. Collecting events occurring during plasma discharge at high speed and recording their occurrence times has proved very useful for debugging the Plasma Control System (PCS) and understanding device operation status. This paper presents the implementation and operational results of the KSTAR FIS event counter.
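The counter's role can be sketched in a few lines of Python. This is a software model for illustration only, with a 10 µs tick matching the stated resolution; the real counter runs in FPGA logic synchronized to the TSS, and all names below are ours, not KSTAR's:

```python
from dataclasses import dataclass
from typing import List

TICK_US = 10  # counter resolution: 10 microseconds, as stated in the abstract

@dataclass
class Event:
    name: str
    tick: int  # counter value at the moment of occurrence (10 us units)

    @property
    def time_us(self) -> int:
        return self.tick * TICK_US

class EventCounter:
    """Software model of an FPGA event counter: each incoming event is
    stamped with the current counter value, so the order and timing of
    events can be reconstructed offline for debugging."""

    def __init__(self) -> None:
        self._tick = 0
        self._log: List[Event] = []

    def advance(self, microseconds: int) -> None:
        # In hardware the counter increments on the synchronized clock;
        # here we advance it explicitly to simulate elapsed time.
        self._tick += microseconds // TICK_US

    def record(self, name: str) -> Event:
        ev = Event(name, self._tick)
        self._log.append(ev)
        return ev

    def ordered(self) -> List[Event]:
        return sorted(self._log, key=lambda e: e.tick)
```

Replaying a log and sorting by counter value recovers exactly the "context of the event" the abstract describes: which subsystem acted first and with what separation in time.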
The Central Interlock System (CIS) is in charge of implementing the ITER Central Investment Protection Functions. A dedicated architecture based on hardwired loops will be responsible for the protection of the superconducting magnet system. These loops act transversally, connecting all systems directly involved in the protection of the magnets at the plant level. The basis of the coordination between the hardwired loop and the different users is a common interface called the Discharge Loop Interface Box (DLIB).
The IEC 61508 standard, which defines the ‘Functional Safety’ provisions for I&C systems, has been used as the guideline to define the lifecycle of the device, starting from the specification up to operation and maintenance of the Investment Protection Function performed by the DLIB.
The overall dependability of the DLIB has been improved and demonstrated through a detailed verification and validation process, including:
- Safety Integrity Level (SIL) analysis based on the FMEDA method.
- Manufacturing tests to identify any issue related to series production of the component.
- Early-stage screening to identify latent defects.
- Qualification testing to identify any externally induced defects.
- Accelerated life testing to emulate the end-of-life behavior of the DLIB.
The paper summarizes the whole process, from design up to final validation, of the Discharge Loop Interface Boxes (DLIBs) that will coordinate the Fast Energy Discharge protection of the ITER magnets.
A data management system for plant monitoring data has been developed for JT-60SA. The plant monitoring data are acquired continuously, 24 hours a day, to monitor the condition of hardware systems such as the baking, cryogenic, and vacuum exhaust systems of the JT-60SA tokamak. In the previous device, JT-60, the plant monitoring data were not acquired into a common database platform; they were accessible only individually at each plant system and only for a short period, which was not adequate for monitoring.
For JT-60SA, a new database system has been constructed that integrates the control and management of the plant monitoring data of many hardware systems into one system. This database system provides users with stable control and safe management of the data acquisition process for the full set of plant monitoring data, without missing any data transaction. It also provides an appropriate environment in which to easily compare the plant monitoring data of several hardware systems.
Most of the hardware systems acquire plant monitoring data every second, and these data are transferred every 15 minutes from the hardware system to the database. Each data block is associated with a unique serial number, which enables the administrator to confirm whether each block has been transferred successfully and in order. If a block is missing, the corresponding data are inserted later into the appropriate location in the database according to the serial number.
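The serial-number bookkeeping can be sketched as follows. This is an illustrative model, not the JT-60SA implementation; all names are ours:

```python
class BlockStore:
    """Toy model of serial-numbered block transfer: blocks may arrive
    out of order or late; the serial number tells us where each block
    belongs in the database and which blocks are still missing."""

    def __init__(self) -> None:
        self._blocks = {}  # serial number -> block payload

    def receive(self, serial: int, payload) -> None:
        # A late retransmission simply slots into its proper place.
        self._blocks[serial] = payload

    def missing(self) -> list:
        """Serial numbers with no received block, i.e. gaps to re-request."""
        if not self._blocks:
            return []
        expected = range(min(self._blocks), max(self._blocks) + 1)
        return [n for n in expected if n not in self._blocks]

    def ordered(self) -> list:
        """All payloads in serial-number order, regardless of arrival order."""
        return [self._blocks[n] for n in sorted(self._blocks)]
```

Receiving blocks 1, 2, and 4 leaves block 3 flagged as missing; once its retransmission arrives, the ordered view of the data is complete again.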
The plant monitoring data are always associated with their time base data. For example, approximately 2000 kinds of data are transferred from the superconducting coil system, but only two time base datasets are acquired, because the plant monitoring data of one hardware system are basically aligned on the same time base. The time base data are therefore transferred separately, to prevent their duplicate storage.
The operation status and performance of the plant monitoring database system have been evaluated under actual JT-60SA operation. We have confirmed that the new database system operates effectively, integrating the plant monitoring data of many hardware systems.
As a sub-project of the Broader Approach (BA) activity between Japan and the EU, preparation of the ITER Remote Experimentation Centre (REC) is ongoing at the Rokkasho Fusion Institute of QST, Japan, toward remote participation in the ITER plasma experiments. In this study, the current proposals for the REC system, including a segment to be connected with ITER via VPN for remote participation, are reported.
Collaboration between the REC and ITER CODAC on remote participation is starting as part of the cooperation arrangement between the BA activity and the ITER project. The REC is expected to connect to the XPOZ-RP segment in the IO by a secure channel. A dedicated layer-2 VPN (L2VPN) with broad bandwidth between the IO and the REC was established in 2020. On the REC side of the L2VPN, a special isolated network segment, hereafter referred to as REC-XPOZ, will be prepared in order to secure the communication between the IO and the REC.
A host running CODAC client applications on the CODAC Core System will be securely connected to the REC-XPOZ, and CODAC server applications hosted in the XPOZ segment at ITER will be tested remotely. A server for live monitoring of the ITER experiment and plant status has also been prepared and will be connected to the REC-XPOZ. Live streaming data, delivered without time-consuming disk I/O, will be received and visualized on the REC video wall as well as on the other operator interface terminals, for consultation by remote participants at the REC.
At the REC, it is planned that all data generated at ITER will be replicated and stored at Rokkasho. A server with SSDs for this fast data transfer using MMCFTP has been prepared in the REC-XPOZ as the data receiver. Fast data transfer with 8 Gbps throughput between ITER and the REC was already demonstrated in 2016. Further demonstrations are planned as the physical network between the IO and the REC is upgraded.
In order to promote research activities based on ITER remote experimentation, access to the replicated ITER database at the REC will be provided to domestic researchers in a secure and efficient way, with sufficient computing resources for analysis. This data access by researchers must be strictly separated from the REC-XPOZ to preserve the security of the IO-REC L2VPN. A design study of the REC-SAN (storage area network) is ongoing with this security requirement in mind. A possible network structure comprising the REC-XPOZ, the REC-SAN, and data analysis resources for domestic researchers, based on data replication via the L2VPN connection, will be discussed.
The plasma control system of the Joint European Torus (JET) is distributed and heterogeneous. This modularity has advantages in separating concerns that span several engineering domains, but creates integration challenges. This paper examines these issues in relation to the JET RF real-time control system. It describes how the system software has evolved over decades to respond to project upgrades. These have varied in scale from embedded systems updates, through major RF plant changes and up to facility wide modifications such as the introduction of the ITER-like wall. We highlight lessons learned from having addressed these projects while maintaining reliable operations and conforming to ever stricter quality processes.
Currently, the National Research Center "Kurchatov Institute" is working on the modernization of the T-15 tokamak. Plasma parameters (current, position, plasma column shape, electron density, and energy content) on the T-15MD tokamak are controlled by an electromagnetic system, dynamic gas injection, and a complex of additional plasma heating systems (neutral beam injection, ion-cyclotron, lower-hybrid resonance, and microwave).
The individual components of the Power Supply Control System (PSCS) and the Plasma Control System (PCS) are distributed over distances of up to 300 m, and their interaction must be coordinated and synchronized with an accuracy of tens of microseconds.
A key feature of the developed T-15MD PCS is its ability to rapidly design, test, and deploy real-time shot scenario algorithms with the distribution of computing power between subsystems.
The electromagnetic PCS architecture consists of two levels:
1. High application-specific level: model development and linear approximation, calculation of the experiment scenario, controller design, and experiment simulation (MATLAB Simulink RT / Linux RT).
2. Process control level: real-time control of plasma parameters (National Instruments (NI) hardware running the LabVIEW RT operating system and the CS-PF regulator ported from Simulink as a DLL).
In the Hardware-in-the-Loop (HIL) simulation mode (Fig. 1), communication between levels (1) and (2) is realized by a reflective memory (RFM) network in a star topology and a middleware S-function package within the Simulink RT / Linux RT environment. The structure of the electromagnetic PCS Simulink model is shown in Fig. 2.
At present, the electromagnetic PCS shown in Fig. 3 is implemented. The total data transfer latency in the PCS control cycle does not exceed 1.1 ms, which fits within the required maximum latency of 3.3 ms.
The proposed architecture allows tests and configuration of the PCS to be performed before plasma shots, which increases the efficiency of the experiments while reducing costs. In the future, the plan is to use Simulink RT on the PCS DAQ server to perform real-time calculations of plasma equilibrium reconstruction in the magnetic control loop, implementing PCS data exchange over the RFM network. It is planned to develop infrastructure for simplified integration and testing of third-party control algorithms and plasma physics codes. The plasma equilibrium reconstruction code will be deployed on a high-performance server integrated in real time with the EMD and the PCS controllers in the operational configuration. Interoperability is provided by the Skiner PTP adapter (data transfer with exact timestamp binding) and RFM. The achievements and advantages of the PCS architecture are:
1) Development of regulators, codes, and models in the Simulink and DINA environments (currently, Linux OS is used with functions ported from Simulink).
2) Adding the HFC PS control system after the T-15MD tokamak physical start-up will be a simple upgrade, without changes to the hardware architecture or software.
3) The RFM network and the implemented decomposition of the hardware kit allow a quick and cost-effective switch from the operational configuration to the HIL test bed (Fig. 4) and expansion of PCS functionality.
One of the key requirements for large tokamak operation is reliable handling of off-normal states, such as failures in individual subsystems and combinations thereof, not all of which are necessarily known a priori. Handling these issues requires advanced and flexible decision-logic algorithms to ensure a reliable yet scalable system.
Our work focuses on the tokamak ASDEX Upgrade. Its control system, DCS, already applies diverse decision algorithms for achieving strategic, system-wide goals as well as for implementing defence in depth in individual control functions. Beyond a certain number of states and level of complexity, however, the current algorithms become hard to maintain and scale. In this contribution, we therefore propose the use of Behaviour Trees (BTs) as the backbone of the decision logic, to cope with the complexity and the experimental character of a tokamak control system.
BTs are widely and successfully established in robotics and the game industry for the design of complex behaviours in real time. They possess several advantages over traditional methods such as hierarchical finite state machines (HFSMs). As BTs operate essentially statelessly, they avoid the need to define consistent state transitions between the many nested and concurrent sub-states of a plasma control system. This characteristic gives BTs great flexibility, high modularity, and ease of maintenance and extension.
In our contribution, we show the use of BTs in two examples. First, we demonstrate how a BT can be used to define the current experimental goal by selecting the corresponding segment in the ASDEX Upgrade pulse schedule. Second, we show the real-time selection of the most suitable diagnostic sources for real-time density evaluation in the presence of multiple diagnostic failures and the diverse plasma states appearing at AUG.
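The fallback behaviour in the second example can be illustrated with a minimal behaviour-tree sketch in Python. The node types follow the standard BT literature; the diagnostic names and readings are purely hypothetical and not taken from DCS:

```python
# Minimal behaviour-tree primitives: a Selector ticks its children in
# priority order until one succeeds (fallback between alternatives),
# a Sequence until one fails. Statuses are plain strings for brevity.
SUCCESS, FAILURE = "success", "failure"

class Leaf:
    """Wraps a condition/action callback returning SUCCESS or FAILURE."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self):
        return self.fn()

class Selector:
    """Fallback node: succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

# Hypothetical density-source fallback: prefer the interferometer,
# fall back to a reflectometer when the primary diagnostic has failed.
readings = {"interferometer": None, "reflectometer": 4.2e19}  # None = failed
chosen = []

def use(name):
    def fn():
        if readings[name] is None:
            return FAILURE
        chosen.append(name)
        return SUCCESS
    return fn

density_source = Selector(Leaf(use("interferometer")),
                          Leaf(use("reflectometer")))
result = density_source.tick()
```

Because the tree is re-ticked on every cycle and holds no transition state, adding a third diagnostic is a one-line change to the Selector, which is exactly the modularity argument made above.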
MT-1, a small spherical tokamak in Pakistan, is the modified version of GLAST-II (Glass Spherical Tokamak), in which the glass vacuum vessel has been replaced by a metallic vessel. Its major and minor radii are 15 and 9 cm, respectively. Coil systems for the generation of toroidal and poloidal magnetic fields and the toroidal electric field are installed. Diagnostic systems such as Rogowski coils, magnetic probes, flux loops, Langmuir probes, and emission spectroscopy are also installed on the device. The generation of plasma current in a tokamak depends mainly on an impurity-free environment in the chamber and the optimized application of magnetic and electric fields. For conditioning of the chamber, electric tape heating is employed first, followed by microwave heating and helium glow discharge. To monitor vacuum conditions, optical emission spectroscopy and a residual gas analyzer (RGA) are used. During initial experiments on plasma current generation, it was found that, in addition to other error vertical fields, a strong field generated by eddy currents flowing in the chamber is the main obstacle to discharge initiation. One method to compensate this error vertical field is to externally apply an equal vertical field in the reverse direction. Experiments were conducted on plasma current generation with the vertical field produced by different combinations of vertical field coils installed symmetrically around the chamber. This scheme did not work, because the applied vertical field is suppressed by the strong coupling of the central solenoid and the vertical field coils. In order to apply the vertical field independently at the start of the pulse, the mutual inductance of the two systems was measured and, based on these measurements, decoupling coils were designed and installed. All these efforts resulted in the successful generation of plasma current. All signals during the experiments are recorded using an indigenously developed data acquisition system.
DCS (Discharge Control System) is the IPP C++ real-time framework for plasma control at ASDEX Upgrade. Since 2016, the 2011 version of DCS has been used routinely for WEST plasma control without any major issue. However, some errors have occurred in the interfaces with the WEST CODAC infrastructure. Although this is not a safety issue (machine integrity and operator safety are not at risk), the lack of reliability in the interfaces has had an important impact on operation time and machine availability. Moreover, technical collaboration was becoming difficult because of the growing differences between the code bases.
After analysis, it appeared that the way DCS was originally adapted to fit the needs of WEST is too complex and too different from the DCS way of working. Moreover, the specialized parts of the code used exclusively for WEST operation were not included in the evolution/maintenance process at IPP Garching, and the code bases have inevitably diverged. To fix these problems, it was decided to rework the WEST specialization of DCS so that only standard DCS services (called "Application Processes") are used. This way, it will be possible to use exactly the same version of DCS at both institutes and to specialize the code only through parameters. Consequently, WEST will benefit immediately from all improvements made to DCS, and WEST will serve as a practical test bench to further demonstrate DCS agility and reliability.
The paper describes the new architecture of the DCS integration at WEST and, in a second part, the results obtained. It also stresses the advantages of sharing code as well as a common practice of software development.
To support CFETR PCS development and discharge scenario optimization, the PCS Simulation Verification Platform (PCS-VP) has been designed and developed. The framework of PCS-VP is divided into three layers, namely the device layer, function layer, and presentation layer, following layered and modular design principles. The device layer interacts with the operating system and provides hardware driver modules to support hardware-in-the-loop simulation between the platform and the PCS or other device subsystems. The function layer provides interpreters and solvers for system simulation, as well as powerful model libraries and interfaces to third-party models. The presentation layer provides a visual modeling and simulation environment. The model library of PCS-VP greatly facilitates plasma system simulation modeling. It includes a mathematical library for mathematical modeling and basic calculations; a plasma simulation library with a variety of plasma controllers, actuators, and plasma response models; and auxiliary modules such as a signal publishing and subscription module and an event injection and exception capture module. In addition, PCS-VP supports customized modules: users can write function modules in C or construct models in Simulink. At present, a prototype of PCS-VP, including the visual simulation environment and part of the model library, has been developed in Python. The poloidal coil current control and power supply models of EAST were constructed using the platform, and the closed-loop control test between these models produced results consistent with those in MATLAB/Simulink, which verifies the feasibility of the framework design.
EAST PCS, a Linux cluster configured with real-time data acquisition and data transmission hardware, executes a series of algorithms for plasma parameter control in real time. The control performance and reliability of the PCS determine the operational safety of the device and the achievement of the physics experiment objectives. In order to test the performance of the whole control system under real working conditions, hardware-in-the-loop (HIL) simulation technology, widely used in the aerospace, automotive, and new energy fields, is applied to EAST PCS control simulation. The essence of HIL simulation is to connect the EAST PCS with digital tokamak models through configured data input/output devices, reproducing the real operating mode for the control system. This research uses the time synchronization method of aligning the hardware time with the real physical time to build the simulation framework. The HIL simulation framework is divided into two main parts: an upper computer that deploys real-time tasks, and a lower computer that executes them. The work of the upper computer falls into three parts: developing a fixed-step physical model in MATLAB, developing real-time drive services for the lower computer in LabVIEW, and using VeriStand to compile, deploy, and monitor real-time tasks. The main job of the lower computer is to run real-time tasks and exchange real-time data with the EAST PCS through a reflective memory card. To verify the framework, two models, a simple coil current model and a rigid plasma model, were built for the EAST application. The fixed-step model running on the lower computer and the DMA transmission performance of the reflective memory card were tested. Both the transmission performance of the reflective memory card and the solution performance of the model met the requirement of the control cycle, less than 100 µs.
The coil current, plasma current, and position control results were consistent with those from experiment and from the MATLAB/Simulink model-in-loop tests. The HIL simulation provides a powerful validation tool for the development of control functions in the EAST PCS.
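As an illustration of the fixed-step, closed-loop exchange that an HIL setup performs each control cycle, the following sketch couples a toy first-order coil model to a proportional controller at the 100 µs cycle time. The model, gain, and time constant are invented for illustration and are not the EAST models:

```python
# Toy fixed-step HIL loop: the "PCS side" computes a command and the
# "model side" advances one 100 us step, emulating the per-cycle data
# exchange over reflective memory. All numbers are illustrative.
DT = 100e-6   # control cycle of 100 microseconds (the stated requirement)
TAU = 0.05    # invented coil time constant [s]
GAIN = 20.0   # invented proportional gain

def controller(reference, current):
    """PCS side: proportional voltage command."""
    return GAIN * (reference - current)

def step_model(current, voltage):
    """Model side: dI/dt = (V - I)/tau, explicit Euler with fixed step."""
    return current + DT * (voltage - current) / TAU

def run_hil(steps=20000, reference=1.0):
    """Run 2 s of simulated closed-loop time and return the final current."""
    current = 0.0
    for _ in range(steps):
        voltage = controller(reference, current)   # exchange: PCS -> model
        current = step_model(current, voltage)     # exchange: model -> PCS
    return current
```

With these numbers the loop settles at GAIN/(GAIN+1) of the reference, the usual steady-state error of a pure proportional loop; a real HIL run compares such model trajectories against experimental data, as described above.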
Keywords: hardware-in-the-loop simulation, control simulation, VeriStand, PCS
The Experimental Advanced Superconducting Tokamak (EAST) was built to demonstrate high-power, long-pulse operation under fusion-relevant conditions; it has ITER-like fully superconducting coils and a water-cooled tungsten (W) mono-block structure. In order to construct distributed real-time subsystems for control and to verify the performance of the ITER Real-Time Framework (RTF), a radiation control subsystem independent of the plasma control system (PCS) was designed and implemented based on the ITER RTF. To calculate the radiation power for PCS feedback control during discharge, this subsystem needs to communicate with the Central Control System for discharge information, acquire diagnostic signals to calculate the radiation power, exchange data with the PCS during each control cycle, and store data to MDSplus for further analysis. In addition, a friendly Graphical User Interface (GUI) is necessary for setting parameters. Corresponding to these requirements, four RTF function blocks have been designed: a communication function block, a data acquisition function block, a radiation calculation function block, and a data storage function block. The communication function block realizes slow data communication with the Central Control System through sockets and fast data transmission with the PCS through a reflective memory network (RFM). The acquisition function block acquires 64 channels of absolute extreme ultraviolet (AXUV) signals synchronously at 20 kHz using a D-TACQ196 digitizer. The radiation calculation function block calculates the radiation power using the AXUV signals and plasma boundary data read from the PCS through the RFM. All data generated by the acquisition and calculation function blocks are saved segmentally in real time to an MDSplus tree by the data storage function block. The development of this subsystem in the RTF architecture has been completed, with the GUI written in Python.
A benchmark test using historical data was carried out, and the radiation calculation result is consistent with the historical data, which verifies the effectiveness of each function block and the availability of the hardware devices. In the 2021 EAST operation campaign, the subsystem will be applied to radiation power calculation, the first use of the RTF on EAST.
TCV has a flexible, digital, distributed control system for testing experimental control algorithms, acquiring data from hundreds of diagnostic channels, and controlling all magnetic, heating, and fueling actuators. We present the state of the system, focusing on the latest upgrades and the key control capabilities it enables. The control algorithm code is developed and maintained in MATLAB/Simulink, and run-time code is generated automatically using code generation. The previous practice of just-in-time code generation and compilation before every shot has been abandoned in favor of a more reliable and efficient method in which the run-time code loads parameters and waveforms from plant databases. The ability to simulate the control code is guaranteed by an object-oriented simulation framework in MATLAB/Simulink that reads parameters and waveforms from the same databases as the real-time environment. This approach still allows very rapid development and deployment cycles, with new algorithms usually deployed on TCV within a few days of the completion of their testing in simulation. The control algorithm software is managed through a DevOps methodology with extensive unit and regression tests as well as Continuous Integration / Deployment practices.
The real-time environment has been completely replaced by the F4E MARTe2 framework, greatly improving standardization, modularity, maintainability and extensibility. The intrinsic data-driven application runtime buildup of the MARTe2 framework has naturally yet rigorously allowed the integration of the inter-shot tunable parameters and waveforms in the control code. The framework has also greatly enhanced interfaces between the real-time computers and the rest of TCV IT infrastructure, notably with its databases for shot configuration and control data acquisition.
On the hardware side, the systems responsible for primary plasma control (magnetic control and density control) have been upgraded with new ADC/DAC modules connected to two real-time computers operable in parallel on the same discharge. This arrangement allows one control computer to run the primary (released) main plasma controller while the second serves as a live test stand for plasma algorithms under testing or development. In addition, a new EtherCAT real-time industrial network has been laid down to operate distributed low-I/O-count subsystems, boosting system flexibility at low additional cost and with a high speed of commissioning.
This overhaul has already yielded a number of experimental advances on the machine, foremost among them: SAMONE, a comprehensive real-time plasma supervision, off-normal event handling, and actuator management system; plasma event detectors based on neural networks; and novel linear controllers for improved vertical control for the formation and stabilization of doublets. Finally, a number of existing real-time codes have already been ported to this new approach, allowing them to run seamlessly on every TCV discharge in real time; notably, they comprise RT-LIUQE, the real-time magnetic equilibrium reconstruction of TCV, coupled with real-time transport calculations; RT-MHD, the comprehensive set of real-time MHD analysis algorithms; and real-time divertor radiation front control with the multispectral 2D imaging diagnostic MANTIS. Other applications include runaway and profile control.
HL-2M is a medium-size tokamak constructed by the Southwestern Institute of Physics (SWIP) in China. Its first plasma was successfully obtained in 2020 using a plasma control system (PCS) based on LabVIEW RT. To achieve better plasma control performance, a new PCS based on the software framework of the DIII-D PCS has been proposed.
There are two concerns in realizing the PCS. First, its real-time performance must be guaranteed. Second, many interfaces to the existing HL-2M systems must be supported. The proposed PCS is deployed on a Linux cluster with two servers: a non-real-time server for messaging and waveforms, and a real-time server equipped with a D-TACQ196 DAQ card and two reflective memory (RFM) cards. The real-time operating system has been upgraded and optimized to improve real-time performance. Millions of test cycles indicate that the system jitter is less than $6\mu s$, which satisfies the real-time requirement. In addition, EPICS has been introduced for message synchronization with the Central Control System and other existing systems, while RFM devices transfer real-time data in each control cycle. To extend the number of data acquisition channels, a D-TACQ2106 has been integrated into the new system. The initial system includes basic control algorithms (e.g., coil current control for the CS and PF coils, density control and corresponding failure detection). The new PCS has been preliminarily verified against HL-2M historical data in simulation mode, and it outputs control commands correctly in the integrated environment.
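The jitter figure quoted above comes from millions of test cycles on a tuned real-time kernel. Purely as an illustration of what such a test measures, the following Python sketch times a periodic loop and records the worst deviation from the nominal period. The function and its parameters are invented; a real-time system would use clock_nanosleep with an absolute deadline rather than a Python busy-wait, and would achieve microsecond rather than millisecond figures.

```python
# Illustrative only: measuring control-cycle jitter, in the spirit of the
# millions-of-cycles test described above. This is NOT the HL-2M test code.
import time

def measure_jitter(period_s=0.001, cycles=1000):
    """Run a periodic busy-wait loop; return the worst deviation from the period."""
    deadline = time.perf_counter()
    worst = 0.0
    for _ in range(cycles):
        deadline += period_s
        while time.perf_counter() < deadline:
            pass  # busy-wait; an RT kernel would use clock_nanosleep(TIMER_ABSTIME)
        worst = max(worst, abs(time.perf_counter() - deadline))
    return worst
```

A production test would additionally record the full histogram of deviations, since worst-case behaviour over millions of cycles is what the requirement constrains.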
In the next phase, more control algorithms will be integrated and more tests will be carried out to verify system reliability.
JET experiments include feedback control implemented via the real-time central controller (RTCC). This system is highly data driven. Controllers can be adapted and tuned during experimental sessions by expert users. The original implementation has been expanded and updated many times. For future campaigns, further growth in terms of both capacity and capability is desirable, but this is not practical within existing constraints. A new system to provide improved functionality based on use of the MARTe2 framework has been designed and prototyped. We report on the project status, including a particular focus on the quality processes that have been used to minimise deployment risks in a mature environment at a critical time. We also outline the future roadmap for developing the application and supporting ecosystem which could have benefits in other contexts.
A set of MDSplus devices that abstract MARTe2 components and their communications has been developed. As these are applied to real applications, they are being refined and expanded. Generic versions of the Simulink and Python GAMs have been developed, obviating the need to create a new MDSplus device type for each Simulink component or Python routine. These provide a mechanism to quickly integrate new Simulink and Python modules into control systems using the framework.
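The generic-GAM idea can be pictured as follows: a single wrapper imports a user routine by dotted name and maps named input and output signals, so no new device type is needed per routine. This is an illustrative Python sketch only; the class name and interface are invented and do not reflect the real MARTe2 or MDSplus APIs.

```python
# Hedged sketch of a "generic Python GAM": one wrapper, configured with a
# routine name and signal lists, instead of one device type per routine.
# All names here are illustrative assumptions.
import importlib

class GenericPyGAM:
    def __init__(self, routine, inputs, outputs):
        mod, _, func = routine.rpartition(".")
        self.func = getattr(importlib.import_module(mod), func)
        self.inputs, self.outputs = inputs, outputs

    def execute(self, signals):
        """Map named input signals to the routine and its results back to names."""
        args = [signals[name] for name in self.inputs]
        results = self.func(*args)
        if len(self.outputs) == 1:
            results = (results,)
        return dict(zip(self.outputs, results))

# Example: reuse a stock routine without writing a dedicated device.
gam = GenericPyGAM("math.hypot", inputs=["dx", "dy"], outputs=["dist"])
```

The design choice is that integration cost moves from code (a new device type) to configuration (a routine name plus signal lists), which is what makes rapid integration possible.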
The framework has been used in the ITER Neutral Beam Test Facility for two applications. The first provides the required management of acceleration grid breakdowns, which consists of switching off the grid power supplies and, after a given amount of time, driving the power supply again along a given waveform. In this case the system is driven by a 1 kHz clock and receives an asynchronous trigger whenever a breakdown occurs, using a DAC device to generate the required waveform in real time. In the second application, a set of algorithms derives online calorimetric measurements in the cooling system. The system, running at 10 Hz, is driven by the reception of around 100 input signals communicated via MDSplus events, and produces a similar number of output signals that are both stored in the MDSplus pulse file and sent out again via MDSplus events for online display.
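The breakdown-handling sequence of the first application can be sketched as follows: after the trigger, the supply reference is held at zero for a programmed time and then re-ramped along a given waveform at the 1 kHz cycle rate. The function and all numbers below are invented for illustration and are not MITICA parameters.

```python
# Illustrative sketch of the breakdown recovery sequence described above:
# hold the grid supply off for a programmed time, then re-drive it along a
# waveform (here a simple linear ramp). Values are invented, not MITICA's.
def recovery_samples(hold_off_s, ramp_s, target_v, rate_hz=1000):
    """Voltage reference samples, one per control cycle, from the breakdown instant."""
    samples = []
    n_off = int(hold_off_s * rate_hz)
    n_ramp = int(ramp_s * rate_hz)
    samples += [0.0] * n_off                 # supply switched off after breakdown
    for i in range(1, n_ramp + 1):           # linear re-ramp back to target
        samples.append(target_v * i / n_ramp)
    return samples
```

In the real system the samples would be streamed to the DAC device cycle by cycle, and the ramp shape would come from the programmed waveform rather than a fixed linear law.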
At MIT a control system demonstration platform has been constructed to test and learn about the framework and about real-time computing and networking platforms. It is a 10 cm levitated magnet that can run the complete architecture of a distributed, multi-timescale control system, including supervisory control, alternate scenarios (soft landing) and actuator sharing. This will be used both to validate the framework for use with the SPARC tokamak and to drive the development of new features.
The EAST experiment uses the MDSplus database for storing various data, which users access through the API it provides. MDSplus stores abundant experimental data but lacks a resource directory, which makes it difficult for users to get a quick overview of the experimental data.
This paper proposes a metadata database solution based on MongoDB. First, the system uses C++ to scan the whole database, extracting and integrating all the metadata. Then, based on basic documents in BSON format, it uses nested document arrays to gather all the metadata of each shot into a single document, with each individual metadata entry constructed as a sub-document, to build the metadata database. Next, it encapsulates interfaces for typical queries and cross-shot statistics and optimizes performance with indexes. Finally, a metadata front-end display service based on SpringBoot + MyBatis provides users with navigation of the database resources.
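The document-nested-document-array layout described above can be sketched with plain Python dicts standing in for BSON documents. Field names are invented for the example; a real deployment would insert these documents with a MongoDB client and build indexes on the queried fields.

```python
# Sketch of the per-shot metadata document layout, with plain dicts standing in
# for MongoDB/BSON documents. All field names are illustrative assumptions.
def make_shot_doc(shot, trees):
    """One document per shot; each tree nests an array of node metadata entries."""
    return {
        "_id": shot,
        "trees": [
            {
                "name": tree,
                "nodes": [
                    {"path": p, "dtype": d, "length": n}
                    for p, d, n in nodes
                ],
            }
            for tree, nodes in trees.items()
        ],
    }

def count_nodes(doc):
    """A typical cross-tree statistic: total signals stored for the shot."""
    return sum(len(t["nodes"]) for t in doc["trees"])

# Example shot document (shot number and node paths are invented).
doc = make_shot_doc(104820, {
    "east": [("\\IP", "float32", 100000), ("\\NE", "float32", 50000)],
    "analysis": [("\\EFIT::PSI", "float64", 2048)],
})
```

Keeping everything for one shot in one document is what makes the "quick overview" query cheap: a single document fetch answers what was stored for a shot, without walking the MDSplus tree.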
Through the design and implementation of this metadata database, users can quickly understand the overall content of the database resources and gain efficient, simplified access to the experimental data.
The key to developing integrated modeling and data analysis tools is to realize data exchange between different data sources (experimental databases, I/O for simulation codes, etc.). A data integration toolkit (SpDB) was developed to access different data sources using a global unified schema defined by the ITER Physical Data Model. Data consists of a data schema and a data format. The definition of the data schema changes frequently as requirements are updated, whereas the format of a data source remains stable. In the implementation of SpDB, data format conversion and data schema mapping are separated. The toolkit therefore maintains a relatively stable API with the data source while adapting to frequent changes in the data model.
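The separation SpDB makes can be sketched as two layers: a schema mapping (unified path to source-specific key) that may change freely, layered over a format backend whose read API stays stable. All class names, paths and values below are illustrative assumptions, not the actual SpDB API.

```python
# Hedged sketch of separating schema mapping from format conversion.
# The backend models a stable data-format driver; the mapping models the
# frequently changing data model. Names are invented for illustration.
class DictBackend:
    """Stands in for a stable data-format driver (MDSplus, HDF5, ...)."""
    def __init__(self, data):
        self.data = data

    def read(self, key):
        return self.data[key]

class MappedSource:
    def __init__(self, backend, mapping):
        self.backend = backend
        self.mapping = mapping      # swap this table when the data model evolves

    def get(self, schema_path):
        """Resolve a unified-schema path, then read via the stable backend API."""
        return self.backend.read(self.mapping[schema_path])

# Example: an IMAS-style path mapped onto a source-specific key.
src = MappedSource(
    DictBackend({"ip_raw": 1.2e6}),
    {"equilibrium/time_slice/global_quantities/ip": "ip_raw"},
)
```

When the data model changes, only the mapping table is updated; the backend driver, which is the expensive part to validate, is untouched.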
FAIR data, that is, making data findable, accessible, interoperable and reusable, is becoming increasingly adopted across a number of disciplines for many reasons. Within the fusion community, at least with regard to experimental data, while each site implements elements of the FAIR principles, there is a lack of infrastructure providing a harmonised search and access mechanism for data at multiple sites. The goal of the FAIR4Fusion project is to develop demonstrators that realise the benefit of a community-based FAIR approach to metadata integration, and to design a blueprint architecture for a fully featured service meeting all of the gathered user requirements, plus additional requirements based on the demonstrators, extending to cover not only experimental data but also modelling and simulation data.
In this talk, we introduce the FAIR concepts and how we anticipate they can be applied across the community while still maintaining each site's autonomy and existing infrastructures and processes. We also show how this can be achieved, at least within the scope of this project, by building upon existing work already performed by the community, reducing the costs of implementation, and describe efforts to generalize this to improve both scalability and performance. We also introduce the initial blueprint architecture and seek to elicit input from the audience on additional requirements.
Currently, largely for historical reasons, almost all fusion experiments use their own tools to manage and store measured and processed data, as well as their own ontologies. Thus, very similar functionalities (data storage, data access, data model documentation, cataloguing and browsing of metadata) are often provided differently depending on the experiment. The overall objective of the Fair4Fusion project is to demonstrate the impact of making experimental data from fusion devices more easily findable and accessible. The main focus towards achieving this goal is to improve the FAIRness of fusion data so as to make scientific analysis interoperable across multiple fusion experiments. Fair4Fusion is proposing a blueprint report that aims at a long-term architecture for a Fusion Open Data Framework implementation.
User stories about searching and accessing data and metadata were collected, including from the perspective of data providers. These use cases present the different perspectives of members of the general public, EUROfusion researchers and data providers, the main target users of the analyzed scenarios. The basic requirements and user stories have been transformed into a list of functionalities to be fulfilled. These functionalities have been grouped into several general categories (search, visualisation and access to outputs, reports, user annotation, metadata management, subscriptions and notifications, versioning and provenance, authentication, authorization, accounting, and licensing) and relate to different FAIR aspects. The collection of requirements and functionalities has been used as a basis for the iterative process of architecture design. We assume the use of the ITER Integrated Modelling & Analysis Suite (IMAS) Data Dictionary as a standard ontology for making data and metadata interoperable across the various EU experiments. The resulting architecture of the system consists of three main building blocks, namely Metadata Ingests, Central Fair4Fusion Services, and Search and Access Services. In the figure we present a simplified version of this high-level architecture.
Metadata Ingests are the entry point to the system for the metadata produced by experiments. In the proposed design, Metadata Ingests stay within the administration of the particular experiments, so the experiments themselves can filter or amend data before they decide to expose it to the rest of the system. From the Metadata Ingests the metadata is transferred to the next block of the system, the Central Fair4Fusion Services. The Core Metadata Services, the heart of this block and of the entire system, operate on the IMAS data format, but thanks to translation components they can accept different metadata formats as input. The Central Fair4Fusion Services also provide supplementary functionality for describing data in ways not strictly tied to the experiments, such as user-level annotations or citations. The last main block of the system is a set of Search and Access Services. It contains all the user-oriented client tools that integrate with the Central Fair4Fusion Services. At this level of the system, key importance is given to the Web Portal, which is expected to offer an extensive set of functionalities for searching, filtering and displaying the metadata and data managed within the system.
Fusion-related experiments (WEST, MAST, JET, etc.) produce large amounts of data. In the future, we can expect ITER to produce an equally large amount of raw data from each and every shot.
Even though the fusion community as a whole collects large sets of data, scientists suffer from the lack of an amalgamated shot catalogue and from differing access requirements. At the moment, finding data for a given shot requires having access to all the sites where experiments are run, being able to use different data formats (HDF5, MDSplus, raw data), and having access to the storage locations where the databases are kept. It is also not possible to search by given physical characteristics of the data, as there is no single, unified storage format or ontology.
Catalogue QT 2 and the Fair4Fusion Dashboard aim to solve some of these issues. By utilizing the IMAS data format and storing a reduced description of experimental results (meta-information stored inside the so-called Summary IDS), we provide scientists with a convenient way of browsing, searching, and (in the future) obtaining experimental data. By combining information from various sources (MAST, WEST, etc.) and different acquisition techniques (UDA (Universal Data Access), MDSplus files, text-based file formats, etc.), we are able to provide users with a consistent view of, and search functionality over, data coming from different sources. By developing and combining loosely coupled components based on web services, we can present data to users not only via a dedicated user interface (a web application created with ReactJS) but also via command-line tools and Jupyter-Notebook-based scripts. Openly available APIs allow connections from third-party applications as long as they belong to the same Federated Authentication and Authorisation Infrastructure. By moving authorization and authentication responsibilities to external Identity Providers, we were able to move user management out of the scope of the application itself. This way, we can provide multiple, independent installations of Catalogue QT 2 that benefit from the common Authentication and Authorisation Infrastructure. Catalogue QT 2 can be installed in virtually any environment, since we provide both bare-metal and Docker-based components.
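The kind of search this enables, once reduced shot descriptions from several machines share one schema, can be sketched as a filter over summary records. The records, field names and values below are invented for the example; the real service exposes such queries through its web-services APIs.

```python
# Illustrative sketch of searching a unified catalogue of reduced shot
# descriptions (Summary-IDS-like records). All records here are invented.
records = [
    {"machine": "WEST", "shot": 54321, "ip_max": 0.7e6, "heating": "LHCD"},
    {"machine": "MAST", "shot": 30420, "ip_max": 0.9e6, "heating": "NBI"},
    {"machine": "WEST", "shot": 54400, "ip_max": 0.5e6, "heating": "ICRH"},
]

def search(records, **criteria):
    """Match exact values, or (lo, hi) ranges for numeric fields."""
    def ok(rec):
        for field, want in criteria.items():
            value = rec.get(field)
            if isinstance(want, tuple):
                lo, hi = want
                if value is None or not (lo <= value <= hi):
                    return False
            elif value != want:
                return False
        return True
    return [r for r in records if ok(r)]
```

A query by physical characteristics across machines, for example all WEST shots with peak plasma current between 0.6 and 1.0 MA, then becomes a one-liner, which is exactly what is impossible today without a unified ontology.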
This paper presents the architecture of a proposed solution, ways of combining different, loosely-coupled components coming from different projects and possible future directions of Catalogue QT 2’s evolution.
Keywords: IMAS, MDSplus, data acquisition, fusion experiments, data analysis, web services, Docker, Authentication and Authorisation Infrastructure
Despite continued efforts to engineer a standardized infrastructure through regular hardware and software upgrades and refactoring, the WEST CODAC still includes several subsystems based on Motorola PowerPC VME boards running LynxOS V3.1. A major challenge for the years to come lies in maintaining and operating more than 10 such entities, some of which, such as the poloidal field control and monitoring system (DGENE), are critical for tokamak operation and plasma control.
Regarding software development and deployment on these targets, a 30-year-old native VME compiler running on LynxOS was used until very recently. It was never upgraded, to avoid instabilities and system incompatibilities on critical equipment. To suppress the looming risk of hardware malfunction, a complete replacement toolchain was designed. The proposed cross-compiler is based on QEMU, which provides a virtualized, emulated PowerPC environment running Debian 7 "Wheezy", a 32-bit Linux distribution with PowerPC support. The gcc 2.94 compiler, the last version to support LynxOS V3.1, was customized within this virtual PowerPC environment in order to cross-compile functional binaries targeting the Motorola PowerPC VME boards.
The custom toolchain was qualified during the WEST C4 experimental campaign following a request to modify the code of DGENE. Subsequently, the toolchain was included within the WEST framework allowing for automatic deployments using the WEST continuous integration workflow and automatic software quality control for all legacy VME subsystems, with a clear impact on reliability and maintainability of some of the oldest systems on WEST.
MITICA is one of the two ongoing experiments at the ITER Neutral Beam Test Facility (NBTF) located in Padova (Italy). MITICA aims to develop the full-size neutral beam injector of ITER and, as such, its Control and Data Acquisition System will adhere to ITER CODAC directives. In particular, its timing system will be based on the IEEE1588 PTPv2 protocol and will use the ITER Time Communication Network (TCN).
Following the ITER device catalog, National Instruments PXI-6683(H) PTP timing modules will be used to generate triggers and clocks synchronized with a PTP grandmaster clock. Data acquisition techniques such as lazy triggers will also be used to implement event-driven data acquisition without the need for any hardware link in addition to the Ethernet connections used to transfer data and timing synchronization.
In order to evaluate the accuracy that can be achieved over time with different network topologies and configurations, a test system has been set up consisting of a grandmaster clock, two PXI-6683(H) devices and two PTP-aware network switches. In particular, the impact on accuracy of transparent versus boundary clock configurations has been investigated. In addition, a detailed simulation of the network and the devices involved has been performed using the OMNET++ discrete event simulator. The simulation parameters include not only the network and switch configurations, but also the PID parameters used in the clock servo controllers. A comparison between simulated and measured statistics is reported, together with a discussion of possible optimal configuration strategies.
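The clock-servo behaviour that those controller parameters govern can be illustrated with a toy simulation: a PI servo trims the slave clock frequency so that the measured offset converges to zero despite a constant frequency error. The gains, units and noise-free model below are invented for illustration and are far simpler than the OMNET++ model.

```python
# Toy model of a PTP clock servo: each sync interval the slave measures its
# offset and commands a frequency trim via a PI law. Gains are invented.
def simulate_servo(freq_error=50.0, kp=0.7, ki=0.3, steps=200):
    """Return the residual offset after `steps` sync intervals.

    freq_error: inherent slave frequency error, as offset growth per interval.
    """
    offset, integral = 0.0, 0.0
    for _ in range(steps):
        integral += offset
        servo = -(kp * offset + ki * integral)   # commanded frequency trim
        offset += freq_error + servo             # drift accumulated this interval
    return offset
```

At steady state the integral term alone cancels the constant frequency error, which is why a pure P servo would leave a residual offset; tuning kp and ki trades convergence speed against overshoot, the trade-off the simulation study explores.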
The model of the Russian Remote Participation Center (RPC) was created under a contract between the Russian Federation Domestic Agency (RF DA) and ROSATOM as the prototype of a full-scale Remote Participation Center for ITER experiments and for coordinating activities in the field of Russian thermonuclear research.
In the presented report, the data transfer characteristics (latency, speed, stability, single- and multi-stream, etc.) and security issues were investigated over two separate L3 connections to the IO, via a public internet exchange point and via GÉANT. In addition, we have tested various ITER tools for direct remote participation, such as screen sharing and data browsing, at the distance from the RF RPC to the ITER IO (about 3000 kilometers).
Experiments have shown that the most stable and flexible option for live data demonstration, and for creating a control-room effect, is the EPICS gateway. Together with the ITER dashboard, these tools make it possible to emulate almost any functional part of the MCR on the remote participant's side. This approach allows us to create our own mimics and customize the CSS Studio HMIs for ourselves. Today, using these tools, we can integrate various systems remotely without any major restrictions.
For data mirroring tasks, UDA server replication is an option. It may improve the performance of the data browsing tools and of some other tasks involving archive data. To obtain the best performance, it is important to find a multithreaded (multi-stream) data replication solution between UDA servers.
The network connection setup strategy is still under development with the IO.
Work done under contract Н.4а.241.19.18.1027 with ROSATOM and Task Agreement C45TD15FR with ITER Organization
We introduce a novel client-server Web platform for data access, processing, analysis, and visualization. The platform was designed to simultaneously meet a set of capabilities not jointly available in similar software systems. The platform: (a) provides secure access to large amounts of data hosted in institutional data servers; (b) allows users to operate on any modern device (computers, tablets, even smartphones); (c) is intuitive to use and provides online help, making it easy to use even for sophisticated data analysis; (d) runs in a distributed environment, profiting from remote hosting and computing power; and (e) allows integration of heterogeneous data and user-provided data analysis codes in different programming languages. These requirements were inspired by the needs of users analysing the massive databases of nuclear fusion devices, in particular the ITER database.
The client runs on any HTML-enabled device, under virtually any operating system, and allows the user to interactively specify a desired data flow through a series of analysis or visualization routines. This flow takes the form of a graph of nodes (or modules), each one represented by an icon with customizable properties. Each icon encapsulates a given data processing algorithm, and the whole set of icons forms a ready-to-use library of standard data handling, analysis, and visualization routines. Users conducting an analysis draw on their field of expertise to design the desired combination of routines, appropriately customize their properties and, optionally, add new icons with their own code. Each icon can encapsulate routines written in one of several accepted programming languages, including Fortran, C, Matlab, Python and R.
For the execution of graphs, a dedicated server receives a request to traverse the specified graph. Each graph node is then executed in turn, with its output connected to the inputs of subsequent nodes (an output can feed several modules), as indicated by the graph. The system transparently takes care of accessing data on remote servers, checking access permissions, transferring data if necessary, running code written in different programming languages on possibly different computing facilities (potentially in parallel), passing data to and from the inputs and outputs of the nodes, and creating the required final output data or visualization plots. The result, also in HTML form, is returned to the client for user inspection. It should be noted that the platform supports not only the interactive execution of codes but also batch processing.
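The execution model described above can be sketched in a few lines: nodes wrap a routine, inputs name the upstream results they consume, and the server executes nodes as their inputs become available. This is a minimal illustrative sketch, not the platform's actual implementation; node names and routines are invented.

```python
# Minimal sketch of graph execution: nodes declare which upstream results they
# consume; the runner executes each node once its inputs are available.
class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, inputs

def run_graph(nodes, sources):
    """Execute every node in dependency order; return all named results."""
    results = dict(sources)
    pending = list(nodes)
    while pending:
        ready = [n for n in pending if all(i in results for i in n.inputs)]
        if not ready:
            raise ValueError("cycle or missing input in graph")
        for node in ready:
            results[node.name] = node.func(*(results[i] for i in node.inputs))
            pending.remove(node)
    return results

# Example: one source feeds a smoothing node whose output fans out to two
# downstream analyses, mirroring "an output can feed several modules".
out = run_graph(
    [Node("smooth", lambda xs: [x / 2 for x in xs], ["smooth_in"]),
     Node("peak", max, ["smooth"]),
     Node("mean", lambda xs: sum(xs) / len(xs), ["smooth"])],
    sources={"smooth_in": [2.0, 8.0, 4.0]},
)
```

In the real platform each `func` would dispatch a routine in its own language (Fortran, C, Matlab, Python or R) on a possibly remote facility, but the traversal logic is the same.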
The presentation describes the platform, its architectural design and the implementation decisions. We provide an example of use that implements the development of an adaptive disruption predictor from scratch.
The DIII-D National Fusion Facility is a large international user facility with over 800 active participants. Experiments are routinely conducted throughout the year, with the control room being the focus of activity. Although experiments on DIII-D have involved remote participation for decades, and have even been led by remote scientists, the physical control room always remained filled with ~40 scientists and engineers working in close coordination. The severe limitations on control room occupancy required in response to the COVID-19 pandemic drastically reduced the number of physical occupants to the point where DIII-D operations would not have been possible without a significantly enhanced remote participation capability. Leveraging experience gained from General Atomics operating EAST remotely from San Diego, the DIII-D team was able to deploy a variety of novel computer software solutions that made the information typically displayed on large control room displays available to remote participants. New audio/video solutions were implemented to mimic the dynamic and ad-hoc scientific conversations that are critical to successfully operating an experimental campaign on DIII-D. Secure methodologies were put into place that allowed hardware to be controlled by remote participants, including DIII-D's digital plasma control software (PCS). Enhanced software monitoring of critical infrastructure allowed the DIII-D team to be rapidly alerted to issues that might affect operations. Existing tools were expanded and their functionality increased to satisfy new requirements imposed by the pandemic.
Finally, given the mechanical and electrical complexity involved in operating DIII-D, no amount of software could replace the need for "hands on hardware." A dedicated subset of the DIII-D team remained on site and closely coordinated their work with remote team members, aided by extensions to the wireless network and the use of tablet computers for audio/video/screen sharing. Taken all together, the DIII-D team was able to conduct very successful experimental campaigns in 2020 and 2021. This presentation will review the novel computer science solutions that allowed remote operations, examine the efficiency gains and losses, and examine the lessons learned informing which changes implemented as a result of the pandemic should remain in place post-pandemic.
 D.P. Schissel, et al., Nucl. Fusion 57 (2017) 056032.
*This work was supported by the US Department of Energy under DE-FC02-04ER54698.
The operation of ITER is expected to happen not only directly in the ITER control room but also to benefit from human capital from around the globe. Each ITER participating country could create a remote participation room and follow the progress of experiments in near real time. Scientists from all over the world can collaborate on experiments at the same time as they are performed. This is what ITER calls "remote participation".
The ITER control system is based on EPICS, so it is natural to try to extend the use of EPICS to remote participation sites. The authors designed tests to find out how EPICS performance depends on network performance, with the goal of understanding whether an EPICS-based application can be used directly on the remote side. A special test suite has been developed to see how many process variables (PVs) remote participants can use if they run their local operator screens or independent applications. Remote participants in the test were connected via a dedicated VPN channel. The test exercised reading a large number of PVs (up to 10,000) with an update frequency of up to 10 Hz. The performance was compared with equivalent execution in a local network.
With a large number of PVs and frequent updates, the latency of updates on the remote participant's side, adjusted for the static network delay due to distance, was demonstrated to be comparable to the latency of local execution. This suggests that EPICS over long distances is quite usable for ITER remote participation tasks.
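The adjustment described above can be made concrete with a small sketch: subtract the fixed propagation delay from each measured remote latency before comparing with local figures. The function names, tolerance and sample values below are invented for illustration; the real test suite measures these latencies over Channel Access monitors.

```python
# Illustrative sketch of comparing remote PV update latencies with local ones,
# after removing the static network delay due to distance. Values are invented.
def adjusted_latencies(remote_ms, static_delay_ms):
    """Subtract the fixed propagation delay from each measured remote latency."""
    return [t - static_delay_ms for t in remote_ms]

def comparable(remote_ms, local_ms, static_delay_ms, tolerance_ms=5.0):
    """True if the mean adjusted remote latency is within tolerance of the local mean."""
    adj = adjusted_latencies(remote_ms, static_delay_ms)
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(adj) - mean(local_ms)) <= tolerance_ms
```

The static delay itself is set by distance and routing and cannot be engineered away; the test's conclusion is that, once it is accounted for, EPICS adds little extra latency of its own.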
ITER is now in full construction phase, with many plant systems being installed and commissioned. Plant service systems (electricity, liquid and gas supplies, water cooling, building monitoring) are being gradually commissioned and handed over for operation. Other systems, such as plasma diagnostics, are being developed and tested on site or at the ITER parties, to be installed later during machine assembly. The remote participation function of the ITER control system has always been oriented towards the plasma operation phase, not specifically addressing the commissioning phase. As the systems are often procured by the ITER parties, there has recently been significant interest from the suppliers in following up the commissioning activities as well. This interest was further multiplied by recent limitations on workforce travel. Consequently, some remote participation elements under development had to be put in place or adapted ahead of time.
This contribution gives a summary of the status of the remote participation design at ITER and illustrates several particular use cases. From an architecture point of view, the implications of remote participation on the control system network and services design are discussed, and different ways of remote connection are explored. From the plant systems point of view, the remote follow-up of a "slow" electricity supply system producing repetitive data readings is illustrated, as well as the follow-up of a "fast" diagnostic system producing scientific data in test mode. From the operational point of view, mostly read-only follow-up is discussed, but the approach to interactive participation in system tuning is also explained.
With the outbreak of the Covid-19 pandemic in spring 2020, experimental operations at ASDEX Upgrade came to a temporary halt, as for health reasons and due to regulations, many of the people necessary for operations and the scientific programme could not enter the institute premises, and only a very small number of people were allowed to enter the control room itself.
However, thanks to the use and optimisation of various existing and new tools, experimental operations at ASDEX Upgrade could be resumed after a short time with almost no restrictions. The programme planned for 2020, both by internal proponents and with the participation of partners from EUROfusion and other international associations, was almost fully completed. Likewise, the planning and implementation of the current 2021 campaign could continue as usual.
This contribution presents the tools provided by CODAC to enable and facilitate the smooth running of ASDEX Upgrade with a minimum number of people present on site, especially with regard to the planning, preparation and execution of the experiments as well as their evaluation and the scientific review of the results. Further possibilities for improvement are also discussed.
Several of the applied tools not only enable experiments to be conducted with remote participation, but also have the capacity to improve the efficiency of experiment operation. It is therefore likely that they will remain in use after the end of the pandemic. Furthermore, the experience gained here can contribute to the discussion of what efficient remote participation might look like at other experiments such as JT-60SA or ITER.
During March 2020 it became obvious that Covid-19 infection rates were accelerating in the UK and that we were heading for a national lockdown. It was decided to put JET into a safe state. The site was then shut down, except for a skeleton staff retained to ensure essential safety and security, with everyone else working from home. Over the next couple of months arrangements were made to bring maintenance teams back on site to ensure the integrity of the JET plant so that it could be restarted when conditions permitted. Plans were prepared to limit operational staff in the JET Control Room and surrounding areas to allow a return to work while ensuring Covid-19 distancing. A major refurbishment of the Control Room HVAC system had already been planned, but pending its completion the maximum number of people in the area was limited to 10, all required to wear face coverings. In order to reduce the number of people in the Control Room from the usual 20-30, workstations had to be relocated to a meeting room in the same building, involving considerable re-cabling and extending the JET operational networks beyond their usual areas. In addition, arrangements were made for many roles to be carried out from offices or even off-site using our existing remote access system. The number of video conference channels (Zoom rooms) dedicated to operations was increased from 1 to 3 and then 4 to enable communication between the Control Room staff and remote operators. This was supplemented with MS Teams for more ad-hoc communications. Our operations and plant mimics, an in-house development based on Oracle/Solaris, were web-enabled and made accessible from the office network, and our real-time plasma operations camera system was augmented to provide web-based streaming video (including the live Torus Hall audio).
Work is also ongoing to convert many of our paper-based forms for the approval of operational exceptions and work in controlled plant areas into integrated computer-based workflows. All these measures have proved very successful and enabled us to restart JET operations. In several cases this has made operations more effective, as it gives remote experts easier access to operational information and control room staff easier access to remote experts. Initially operations commenced in deuterium (2H) plasmas and have now moved on to 100% tritium plasmas. We are now preparing for an increased return to site, together with allowing more people back into the Control Room following completion of the HVAC refurbishment, ready for DT plasmas later in the year.
The ASDEX Upgrade diagnostics have provided scientists with the experimental data required to advance the fusion field for 30 years. In this time, the systems and diagnostics of the machine have evolved; many solutions combined commercial products with in-house, state-of-the-art data acquisition hardware. However, with the ever-increasing advances of the computing industry and the long operating lives of fusion machines, it is not uncommon to find dated systems working alongside more modern ones. At some point a line has to be drawn and systems updated to support newer architectures, which provide access to more modern tools. Nevertheless, simply rewriting otherwise functional programs is not always feasible or effective. For the ASDEX Upgrade diagnostics the time has come to undo this entanglement and draw up a clear strategy for modern data acquisition systems. The Discharge Control System (DCS) team at ASDEX Upgrade has already advanced this work by providing clear development and integration pipelines for some of the ASDEX Upgrade diagnostics, but supplying the current diagnostics (on the order of hundreds) with modern DAQ systems required more robust tools. We introduce a new data acquisition plan using standardization layers based on the ITER Nominal Device Support (NDSv3). Such frameworks are often used to plan highly modular and maintainable systems with an eye to the future, but here these same traits help modularize and integrate existing systems. New diagnostics access modernized systems integrated using NDSv3, while old diagnostics, which may be replaced in a more staggered manner, benefit from the new systems by adopting a simple communication layer or a C++ wrapper around the otherwise perfectly functional C driver of the old diagnostic.
The new communication interfaces are standardized and re-used by any NDS driver, saving time and allowing future driver developments to connect to both the real-time and standard ASDEX Upgrade diagnostic networks. This work presents the status of, and plans for, the fully deployed new diagnostics, together with the development, test and deployment strategies, bringing the ASDEX Upgrade diagnostics up to the most modern standards. A prototype system was built with the mentioned technologies on CentOS, and preliminary conclusions are presented.
The SPIDER experiment (Source for the Production of Ions of Deuterium Extracted from a Radio frequency plasma) is a prototype devoted to heating and diagnostic neutral beam studies, in operation at the ITER Neutral Beam Test Facility (NBTF) at Consorzio RFX, Padova. SPIDER is the full-size ITER ion source prototype and the largest negative ion source in operation in the world. In view of the ITER heating requirements to realize plasma burning conditions and instability control, SPIDER aims at achieving long-pulse operation (3600 s) with beam energy up to 100 keV and high extracted current density (above 355 A·m⁻² for H⁻ and above 285 A·m⁻² for D⁻) at a maximum beam source pressure of 0.3 Pa. Moreover, the maximum deviation from beam uniformity must be kept under 10%.
The SPIDER pulse preparation follows a strict procedure for the approval of the operation parameters, in view of safety, machine protection and efficiency. In a simplified description, the session leader (SL) defines the parameters according to the best implementation of the science program. The technical responsible (RT) verifies all parameters for approval and, only after agreeing with the setup, sends the configuration to the technical operator (OT) to load into the SPIDER instrumentation.
The current tools used in the SPIDER integrated commissioning and initial SPIDER campaign permit the SL to design a new pulse and program the set of parameters directly into a temporary MDSplus pulse file. This information is then passed to the RT for approval. The same pulse file can be visualised by the RT, but there is no indication of which parameters were changed since the previous pulse, or with respect to a pre-set pulse file taken from a previous run. In consequence, the RT must go through a tedious and error-prone procedure of checking all the parameters, even if the set of parameters has already been approved for a previous pulse. Moreover, the current set of tools does not provide automated loading of previously set configurations, except for the possibility of using a command line to load previous setups from an executed or reference pulse.
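The parameter-diff step that the current procedure lacks can be sketched in a few lines. This is a hypothetical illustration (the parameter names and the `diff_params` helper are invented, not part of the actual SPIDER tools): only parameters that actually changed would need review by the RT.

```python
# Hypothetical sketch: highlight parameter changes between two pulse
# configurations so only the modified parameters need review.
# Parameter names and values are illustrative, not real SPIDER settings.

def diff_params(previous, proposed):
    """Return {name: (old, new)} for every parameter that differs."""
    changes = {}
    for name in set(previous) | set(proposed):
        old = previous.get(name)   # None -> parameter was added
        new = proposed.get(name)   # None -> parameter was removed
        if old != new:
            changes[name] = (old, new)
    return changes

prev = {"beam_energy_keV": 50, "source_pressure_Pa": 0.3, "pulse_len_s": 10}
new = {"beam_energy_keV": 55, "source_pressure_Pa": 0.3, "pulse_len_s": 10}
print(diff_params(prev, new))   # only beam_energy_keV differs
```

In a real tool the two dictionaries would be read from the temporary MDSplus pulse file and the reference pulse, and the non-empty diff would be what the RT inspects.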
Aiming at the automated implementation of rules and procedures, as well as improving the usability, safety and interoperability between the SL and RT, a new configuration tool for setting the SPIDER pulse parameters is under development, based on the integration of two tools widely used in the fusion I&C community: MDSplus and EPICS.
This contribution will focus on (i) the present solution that has been used during the initial SPIDER campaigns; (ii) the requirements of the configuration tool according to the set of procedures to be implemented in the SPIDER pulse preparation; (iii) the set of development tools available for implementing the necessary application(s); (iv) the design options and application architecture; and (v) the implementation details and preliminary tests of the alpha release of the application.
V. Toigo et al., 2019, Nucl. Fusion 59, 086058
A. Luchetta et al., 2019, Fusion Engineering and Design 146 (Part A), 500-504
Tokamak scenarios are governed by actuator actions that can be either pre-programmed in feedforward or requested by feedback controllers based on the actual plasma state. Actions requested by feedback controllers have the advantage that they can react to unpredictable events that happen in the system. On the other hand, the reaction always comes with a delay. For that reason, the feedforward trajectories must be prepared such that they bring the system as close as possible to the desired state, while the feedback controllers provide correction of disturbances.
In tokamak research, the feedforward trajectories are typically found by a trial-and-error approach, which is not very effective in terms of convergence and can lead to violations of operational and actuator limits. This approach is not applicable to future devices such as ITER or DEMO. In our contribution, we propose to use Iterative Learning Control (ILC), a common technique for the optimization of repetitive processes in control engineering, and demonstrate its use for the optimization of the central ion temperature (Ti) and central electron temperature (Te) using NBI and central ECRH as actuators on ASDEX Upgrade.
The ILC is based on a linearized actuator response model along the initial system trajectory, which in our case was obtained from a RAPTOR simulation matched to existing experimental data. After the tokamak discharge is executed, the quantities of interest are evaluated and new actuator trajectories are computed to minimize the error between the desired and actual behavior while avoiding operational limits and penalizing overly large deviations from both the initial actuator trajectory and the trajectory of the previous trial. This method is also applicable to quantities that cannot be measured in real time (Ti in our case).
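The trial-to-trial update described above can be illustrated on a toy linear model. The response matrix, targets and penalty weight below are invented stand-ins, not the actual RAPTOR-based model; the sketch only shows the regularized trajectory update between trials.

```python
import numpy as np

# Minimal ILC sketch on a toy linear response model y = G @ u.
# G, y_des and the penalty weight lam are illustrative stand-ins for
# the linearized actuator-response model described in the text.
G = np.array([[1.0, 0.2], [0.1, 0.8]])   # toy actuator -> temperature map
y_des = np.array([2.0, 1.0])             # desired (Ti, Te) targets
u = np.zeros(2)                          # initial feedforward trajectory
lam = 0.1                                # penalize large trajectory updates

for trial in range(20):
    y = G @ u                            # "execute the discharge" (simulated)
    e = y_des - y                        # post-shot error evaluation
    # Regularized update: minimize |e - G du|^2 + lam |du|^2 over du
    du = np.linalg.solve(G.T @ G + lam * np.eye(2), G.T @ e)
    u = u + du

print(np.round(G @ u - y_des, 4))        # residual error after 20 trials
```

The penalty weight plays the role of the deviation penalties mentioned in the text: a larger value makes each trial's correction more conservative at the cost of slower convergence.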
We have successfully applied this method to the optimization of central Ti,e as well as of the Te/Ti ratio at constant WMHD. In the first case, we increased Ti while keeping Te constant, and in the second case we ramped the Te/Ti ratio while keeping WMHD constant. These quantities can be quickly evaluated after every discharge using the IDA and IDI integrated analyses. We propose a method for effectively transferring the improved actuator trajectories into the pulse schedule. We also summarize our experience and give several recommendations for the future usage of ILC: the method can be effectively used only with good shot-to-shot reproducibility, which can be greatly improved, for example, by effective actuator management.
F. Felici, T. Oomen, 2015, 54th IEEE Conference on Decision and Control (CDC), 5370-5377
F. Felici et al., 2011, Nucl. Fusion 51, 083052
R. Fischer et al., 2010, Integrated Data Analysis of Profile Diagnostics at ASDEX Upgrade, Fusion Sci. Technol. 58, 675
R. Fischer et al., 2020, Estimation and Uncertainties of Profiles and Equilibria for Fusion Modeling Codes, Fusion Sci. Technol. 76, 879-893
After the 2020 campaign, the EAST (Experimental Advanced Superconducting Tokamak) facility carried out maintenance. The EAST data acquisition system (EAST DAQ), which is responsible for the unified diagnostic data acquisition and long-term data storage, has also been upgraded. This article presents an overview of the upgraded data acquisition system. About 20 diagnostic systems and some other systems realize pulse data acquisition through EAST DAQ. EAST DAQ consists of a DAQ console, DAQ nodes, and a data storage server cluster. The DAQ console is used to manage the data acquisition configuration information and control the data acquisition workflow. The old DAQ console was developed many years ago and, with the increasing number of data acquisition nodes, has become inadequate. After the upgrade, the new DAQ console provides a data acquisition management website based on the Spring Boot framework for administrators and diagnostic users to manage the DAQ nodes, data storage servers and signal conditioning equipment, while the data acquisition workflow control runs in the background. Up to 65 DAQ nodes with about 3500 channels are distributed in different physical positions for the different diagnostic systems. The raw diagnostic data acquired by the DAQ nodes are transferred in quasi-real time to the data server cluster, which saves these data with MDSplus for long-term storage. The upgraded data acquisition system will be used in the 2021 EAST campaign.
The driving, implosion and burn processes of laser fusion take place on very short time and very small spatial scales, resulting in a transient, extremely high temperature and high density plasma environment. Plasma diagnosis is an important means to observe the physical phenomena in the laser fusion process and obtain the parameters of extreme physical states. The objects of laser inertial confinement fusion diagnosis include a wide range of radiation, from the infrared, visible and ultraviolet to the X-ray region, as well as high-energy particles such as hot electrons, neutrons, protons and gamma rays. The purpose of diagnostic measurement is to obtain the temporal and spatial behavior of the physical processes, as well as flux and spectral information.
In the large laser facility, we have developed a variety of diagnostic methods, covering optics, X-rays and particles, and dozens of diagnostic systems with different principles and structures. A large number of diagnostic systems and instruments must be used in a single-shot experiment to measure and record the plasma time, space, spectrum and flux information in a timely and accurate manner. With the growing scale of the device and the development of diagnostic technology, it is necessary to provide a dedicated integrated management and control system combining experimental preparation, safe operation, automatic data acquisition and real-time computer processing for each diagnostic system, so as to realize stable, safe and reliable measurement and improve the operating efficiency of the experiment.
Laser fusion experiments are characterized by many kinds of diagnostic systems with different principles, few copies of each diagnostic system, frequently changing arrangements and single-shot measurements. To address these characteristics, a design scheme for a process-driven diagnostic integrated management and control system, based on the physical experiment process, has been developed. The system adopts a microservice architecture to ensure reliability and scalability at the software level, greatly improves the operating efficiency and reliability of the whole experiment through experimental task management and responsive process control, and provides an integrated display of real-time data and status for centralized monitoring while the experiment is running. Operations at the process control nodes, such as parameter configuration of the diagnostic measurement systems, aiming of the diagnostic systems, and checking the spatial interference of multiple diagnostic systems, have so far depended on the operator's experience and manual interpretation. For these, intelligent control methods based on artificial intelligence have been explored, such as intelligent setting of the oscilloscope range based on model training and deep learning, automatic aiming via image recognition, and spatial interference warning.
With the development and experimental application of the integrated management and control system, intelligent operation control technology for the diagnostic systems based on artificial intelligence will play an increasingly important role in their operation management and health management.
Software integration of multiple data acquisition and timing hardware devices in instrumentation and control and diagnostics applications for fusion environments is very challenging. This is especially relevant for ITER, where the instrumentation is mostly composed of COTS hardware. While the implementation must manage multiple hardware devices from different manufacturers providing different application programming interfaces (APIs), scientists want to use the implementation in different environments such as EPICS, the ITER Real-Time Framework or the MARTe2 middleware as seamlessly as possible.
The Nominal Device Support (NDS) C++ framework under development at ITER for use in its diagnostic applications uses two layers: The NDS-Core layer provides the infrastructure to develop the interfaces with the specific hardware device APIs. Above, the interface layer abstracts and standardizes the specific low-level interfaces of NDS device drivers (developed with NDS-Core) for use with control systems (e.g., EPICS) or real-time applications.
ITER CODAC and its partners have developed NDS device drivers using both PXIe and MTCA platforms for multifunction DAQ devices, timing cards and FPGA-based solutions that are part of the ITER fast controller hardware catalogue. Additional NDS device drivers support communication and archiving through ITER’s high performance networks as well as access to EPICS based systems using the pvAccess protocol.
To support the integration of complex devices and simplify design and maintenance, the NDS approach has been extended with the concept of NDS-Systems. An NDS-System encapsulates a complex structure of multiple NDS device drivers. It implements system-level functions combining functions of the different low-level devices, thereby reducing the number of process variables exposed to the user. It also collects all system-specific logic, keeping it out of the device driver code. The NDS-System implementation provides multiple C++ helpers to aid in driver configuration: High-level methods solve common tasks (data acquisition, time stamping, signal generation, use of digital I/O, etc.) and allow communication on ITER specific and EPICS networks.
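The NDS-System idea of encapsulating several device drivers behind system-level functions can be illustrated with a minimal sketch. NDS itself is a C++ framework; the Python classes and method names below are purely illustrative stand-ins, showing how one user-facing call can coordinate multiple low-level drivers while keeping system-specific logic out of the driver code.

```python
# Hypothetical sketch of the NDS-System concept: one object wraps several
# low-level device drivers and exposes a single system-level operation
# instead of every per-device process variable.
# Class and method names are illustrative, not the actual NDS C++ API.

class DigitizerDriver:
    """Stand-in for a low-level NDS device driver."""
    def arm(self):
        self.armed = True
    def read(self):
        return [0.1, 0.2, 0.3]          # stand-in samples

class TimingDriver:
    """Stand-in for a timing-card driver."""
    def configure_trigger(self, delay_s):
        self.delay_s = delay_s

class AcquisitionSystem:
    """System-level logic kept out of the device-driver code."""
    def __init__(self):
        self.digitizer = DigitizerDriver()
        self.timing = TimingDriver()

    def start_acquisition(self, trigger_delay_s):
        # One user-facing call coordinates both low-level devices.
        self.timing.configure_trigger(trigger_delay_s)
        self.digitizer.arm()

    def fetch(self):
        return self.digitizer.read()

sys_ = AcquisitionSystem()
sys_.start_acquisition(trigger_delay_s=0.005)
print(sys_.fetch())
```

The reduction in exposed process variables follows directly from this shape: the user sees one `start_acquisition` entry point instead of the separate trigger and arming controls of each underlying device.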
In recent years, a new tomographic inversion method, based on the Maximum Likelihood (ML) approach, has been adapted to JET bolometry. In addition to its accuracy and reliability, the key advantage of this approach is its ability to provide reliable estimates of the uncertainties in the reconstructions. The original algorithm was implemented and validated using the MATLAB software tool. This work presents the development aimed at implementing an accelerated version of the algorithm on an ITER fast controller platform. The algorithm has been implemented in C++ using the open-source libraries ArrayFire, Armadillo, ALGLIB and matio. The use of these libraries simplifies the management of specific hardware accelerators such as GPUs and increases performance. The final paper will present the methodology followed, the results obtained, and the advantages and drawbacks of the implementation on an ITER fast controller platform with the ITER CODAC Core System software distribution.
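The core of a Maximum-Likelihood tomographic inversion is an iterative multiplicative update (ML-EM). The sketch below shows that update on a toy problem in Python rather than the C++/ArrayFire implementation described above; the geometry matrix and measurements are invented stand-ins for the bolometer lines of sight.

```python
import numpy as np

# Sketch of the Maximum-Likelihood (ML-EM) tomographic update, the
# iterative scheme commonly used for this kind of inversion. The toy
# geometry matrix (chords x pixels) and measurements are illustrative.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(8, 4))     # nonnegative geometry matrix
x_true = np.array([1.0, 2.0, 0.5, 1.5])    # "true" emissivity
b = A @ x_true                             # noiseless chord measurements

x = np.ones(4)                             # positive initial guess
sens = A.sum(axis=0)                       # sensitivity normalization A^T 1
for _ in range(2000):
    proj = A @ x                           # forward projection
    x = x / sens * (A.T @ (b / proj))      # multiplicative ML-EM update

print(np.round(x, 3))
```

The multiplicative form keeps the emissivity nonnegative at every iteration, one of the properties that makes the ML approach attractive for bolometric reconstructions; the per-iteration work is dominated by the two matrix-vector products, which is what the GPU libraries accelerate.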
The operation of fusion devices produces huge amounts of high-dimensional data, which allows the development of sophisticated machine learning models to tackle specific problems. In such a high-dimensional data space, feature selection plays an important role in extracting useful information.
A high number of features in the data requires dimensionality-reduction techniques before the successful application of data-driven methods. Usually, feature selection techniques are used to select the input variables and to identify irrelevant and redundant attributes in the data. The right choice of input variables is an essential step before developing a model. It helps in two ways. On the one hand, it reduces the computational cost of modeling. On the other hand, it improves the model's performance, efficiency and interpretability.
In this paper, a new automatic method to extract the main features in a very high dimensional input space is proposed. Our method is based on correlation measures. It reduces the number of features by finding the most relevant ones. We have simulated series data with 10000 points. The points correspond to random samples of different Gaussian distributions. A 30-dimensional space is simulated: the first 10 components are 10 time series N(µ,σ), with values of µ in the range [-1000, 1000] and values of σ in the range [1, 1000]; the second 10 components are linear combinations of the previous 10; and the last 10 components are non-linear combinations of the first 10. In this way, signals of 10000 samples within a feature space of dimension 30 are generated. A total of 500 resamples of these signals have been created to test the method.
The objective of this work is to develop a method that sorts, in an automatic and unsupervised way, the most relevant features from the original set of features. To this end, the method computes the correlations among the 30 components of the signals in order to sort them from the least correlated features to the most correlated ones. Once the features are ordered according to their correlations, it is possible to filter out the most correlated dimensions while keeping just a few of the least correlated features.
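A minimal version of such correlation-based ordering can be sketched as follows. This is a reduced toy example (four features instead of 30, and one near-duplicate feature standing in for the redundant linear combinations), not the authors' actual implementation.

```python
import numpy as np

# Toy sketch of unsupervised correlation-based feature ordering.
# Feature 3 is nearly a duplicate of feature 0; features 1 and 2 are
# independent, so they should come first in the "least correlated" order.
rng = np.random.default_rng(2)
n = 300
f0, f1, f2 = rng.normal(size=(3, n))
f3 = f0 + 0.05 * rng.normal(size=n)        # redundant near-copy of f0
X = np.column_stack([f0, f1, f2, f3])

C = np.abs(np.corrcoef(X, rowvar=False))   # pairwise |correlation| matrix
np.fill_diagonal(C, 0.0)                   # ignore self-correlation
score = C.max(axis=1)                      # strongest correlation per feature
order = np.argsort(score)                  # least-correlated features first
print(order)
```

Filtering then amounts to keeping the first few entries of `order` and dropping the rest, which removes the redundant dimensions while retaining the independent ones.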
Control, Data Acquisition and Communication (CODAC) real-time software is essential for fusion device operation, machine protection and the optimization of plasma experiments. In 2019, following the WEST project (W -for tungsten- Environment Steady-state Tokamak) upgrade, a major migration of the inter-process message transport infrastructure was initiated.
Originally, on Tore Supra, the proprietary RTWorks™ middleware was used for all communications between the various sub-systems in the real-time acquisition network. With an increasing number of processes and growing traffic, a number of malfunctions were observed during WEST operation, especially in the streaming of real-time data from measurement instruments, possibly due to an apparent overload of the low-level infrastructure. Given the high maintenance costs of RTWorks™ and the limited monitoring functionality it offers to investigate the observed errors, a decision was made to migrate to a less expensive solution with a higher quality of service.
Keywords: WEST; Tore Supra; CODAC; RTWorks; MOM; MQTT; LynxOS; Linux; Windows; Legacy Message Transport
The paper describes all the steps of the migration of the inter-process messaging middleware to an open-source replacement: the selection criteria, the qualification tests, the integration into the WEST CODAC framework, the progressive release, and performance checks at the limits.
The message-oriented middleware MQTT was partially deployed on WEST during the C5 experimental campaign. The results clearly demonstrate enhanced performance and maintainability compared with the former message transport infrastructure.
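The decoupling that a message-oriented middleware such as MQTT provides can be illustrated with a minimal in-process publish/subscribe sketch. The topic names and payloads are invented, and a real deployment would of course use an MQTT client library talking to a broker; the sketch only shows the pattern that lets producers stream data without knowing their consumers.

```python
# Minimal in-process sketch of the publish/subscribe pattern underlying
# message-oriented middleware: producers publish to topics without
# knowing the consumers. Topic names and payloads are illustrative.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver to every subscriber of this topic, if any.
        for cb in self.subscribers[topic]:
            cb(payload)

broker = Broker()
received = []
broker.subscribe("west/diag/interferometer", received.append)
broker.publish("west/diag/interferometer", {"t": 1.0, "ne": 3.2e19})
broker.publish("west/diag/bolometer", {"t": 1.0})   # no subscriber: dropped
print(received)
```

Because producers and consumers meet only at the broker, sub-systems can be added, removed or monitored independently, which is one of the maintainability gains reported above.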
An upgrade is being conducted on the TCABR tokamak, a small tokamak (R0 = 0.62 m and a = 0.2 m) operated at the University of São Paulo, Brazil. This upgrade mainly consists of the installation of (i) graphite tiles fully covering the inner surface of the vacuum vessel wall, (ii) new poloidal field (PF) coils to allow the generation of various divertor configurations, such as single-null, double-null, snowflake and X-point target divertors, (iii) in-vessel HFS and non-axisymmetric control coils for ELM suppression studies, and (iv) a coaxial helicity injection system to improve plasma initialization. Among other objectives, this upgrade will allow studies of the impact of RMP fields on advanced divertor configurations, such as the X-point target and snowflake divertors. The creation of the various plasma scenarios foreseen for TCABR will require a new robust and flexible plasma control system, improvements in the data acquisition and data analysis systems, and in the supervisory systems that monitor the various subsystems involved in the operation of the tokamak. To this end, several studies are being conducted on the implementation of EPICS (Experimental Physics and Industrial Control System) and MARTe (Multi-threaded Application Real-Time executor), and on improvements to the MDSplus system already widely used on the TCABR tokamak. In this work, we will present the studies of this new implementation.
With the development of the EAST physics experiment, more and more diagnostic signals need to be acquired in real time for advanced plasma control. Due to the distributed nature of the diagnostic systems, different types of data acquisition devices with suitable sampling rates need to be deployed near the diagnostic signals. In order to obtain these distributed signals effectively and provide transparent access to multi-type data sources for the control algorithms in the plasma control system (PCS), a deployment specification for standard acquisition cabinets and a transparent-access middle layer based on device virtualization technology have been designed and implemented. Each standard acquisition cabinet is configured with signal conditioning devices, an Ethernet network, external clock and trigger devices, a data acquisition server, a low-latency digital transmission network, etc. The reflective memory (RFM) high-speed network was chosen to realize data communication between the distributed data acquisition cabinets and the PCS. For the transparent-access middle layer, a mapping file linking the diagnostic signal names to the channels of specific data acquisition devices is defined, as well as a set of data transmission specifications between the PCS and the distributed data acquisition cabinets. The mapping between signal names and channels allows real-time control algorithms to obtain data through signal names without caring about the specific data source. The design of the RFM message header implements the data transmission specifications; it specifies the available memory range for distributed data acquisition. Before transmitting the effective data, the PCS defines the RFM message header information according to the mapping file of the transparent-access middle layer; the header information includes the name of the data acquisition device, the offset address and the number of signal channels.
After receiving this message, the data acquisition cabinets can write data to the memory space according to the address and the number of channels. This transparent access method provides a specification for flexibly expanding multiple types of data acquisition devices. Using this method, a new data acquisition device, the DTACQ2106, has been successfully added; it has higher acquisition performance and lower latency, which plays a great role in fast control.
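The name-to-channel mapping and the fixed header layout can be sketched as follows. The field names, sizes and example signal names are assumptions for illustration, not the actual EAST specification; the point is that both sides of the RFM link agree on one packed layout derived from the mapping file.

```python
import struct

# Illustrative sketch of the transparent-access idea: a mapping ties
# diagnostic signal names to a device, an RFM offset address and a
# channel count, and a fixed-layout header carries that information.
# Field names, sizes and the header layout are assumptions.

mapping = {
    "ne_core": {"device": "DTACQ2106", "offset": 0x1000, "channels": 4},
    "te_edge": {"device": "DTACQ2106", "offset": 0x2000, "channels": 8},
}

HEADER = struct.Struct("<16sII")   # device name (16 bytes), offset, channels

def build_header(signal_name):
    """PCS side: look up a signal by name and pack the RFM header."""
    entry = mapping[signal_name]
    return HEADER.pack(entry["device"].encode().ljust(16, b"\0"),
                       entry["offset"], entry["channels"])

def parse_header(raw):
    """DAQ-cabinet side: recover where to write the acquired data."""
    device, offset, channels = HEADER.unpack(raw)
    return device.rstrip(b"\0").decode(), offset, channels

raw = build_header("ne_core")
print(parse_header(raw))   # ('DTACQ2106', 4096, 4)
```

Because the control algorithms only ever use the signal name, swapping the underlying device (as done for the DTACQ2106) only requires updating the mapping entry, not the algorithm code.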
The SPIDER experiment is the first of two experiments hosted at the ITER Neutral Beam Test Facility in Padova (Italy). SPIDER has been operating since 2018, initially with pulse durations of a few seconds and currently with pulses lasting up to 3000 s. SPIDER CODAS uses the MDSplus data acquisition and management system, which is well suited to streamed data acquisition thanks to the powerful concept of 'Data Segments'. Currently, SPIDER data acquisition involves 768 signals continuously acquired via PXI ADCs with sampling rates ranging from 100 Hz to 100 kHz, 8634 signals derived from EPICS Process Variables (PVs) and stored in MDSplus pulse files with sampling rates between 0.1 and 10 Hz, 10 signals acquired at high speed (up to 250 MHz) upon event occurrence, and the frames from 24 camera devices with frame rates ranging from 0.1 to 10 Hz. The pulse database is organized into 24 different databases logically linked to form a unique pulse file, all hosted by a single data server.
The paper reports the CODAS experience gained after three years of operation. In particular, the adopted Data Storage and Data Access strategies will be discussed, which proved to have a high impact on overall system performance and maintainability.
Regarding the Data Storage strategy, a tradeoff must be defined between continuous and event-driven data acquisition. Continuous data acquisition, i.e. sampling data at a constant frequency, represents the normal operation in short experiments, but can easily lead to an unmanageable amount of data in long-lasting experiments. In any case, for a large set of signals, such as those derived from PVs that are acquired at a slow rate, it makes no sense to complicate the design in order to save a negligible amount of space in the pulse database. On the other hand, data acquisition at a varying rate, increased upon the occurrence of given events to improve the signal dynamics, is required for a subset of signals that describe physical phenomena with fast dynamics. Several strategies have been adopted in SPIDER to handle varying-rate data acquisition and are discussed here.
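The tradeoff can be made concrete with a small pure-Python sketch (all numbers are illustrative and no MDSplus is involved): the continuous record is kept at a decimated rate, while full-rate segments are stored only around a detected event, mimicking varying-rate acquisition in segmented pulse files.

```python
import numpy as np

# Toy sketch of the storage tradeoff: a decimated continuous record plus
# a full-rate segment around one fast event. Rates and thresholds are
# illustrative; real SPIDER data is stored as MDSplus Data Segments.
fs = 1000                                  # toy sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)               # 10 s of data
signal = 0.1 * np.sin(2 * np.pi * t)
signal[5000:5100] += 5.0                   # fast event at t = 5 s

decimated = signal[::100]                  # continuous low-rate record
events = np.flatnonzero(np.abs(signal) > 2.0)
start, stop = events[0] - 50, events[-1] + 50
segment = signal[start:stop]               # full-rate data around the event

full = signal.nbytes
stored = decimated.nbytes + segment.nbytes
print(f"stored {stored} of {full} bytes")
```

For long pulses the saving grows linearly with pulse length, which is exactly why constant-rate storage of every signal becomes unmanageable while the event-driven subset stays cheap.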
Considering the Data Access strategy, an important use case, especially when the pulse duration is long, is the concurrent access to the pulse file for online analysis and visualization. Concurrent data reads and writes are supported by MDSplus, but performance can be affected by the required locks in file access. For this reason it is important to limit useless data access as far as possible. This has been achieved in different ways, such as setting a Region of Interest (ROI) in data access and by the extended usage of on-the-fly resampling in conjunction with the availability in the pulse file of different versions of the same data item, acquired at different sampling speeds. In addition, extensive usage of low-speed data streaming decoupled from data access proved useful to cover a variety of data visualization use cases, such as wall displays of important waveforms.
The RedPitaya board represents an alternative to many expensive laboratory measurement and control systems. It hosts a Zynq system composed of an ARM processor tightly integrated with a configurable FPGA, two 125 MHz RF inputs and outputs, and 14-bit analog-to-digital and digital-to-analog converters.
Due to its flexibility, RedPitaya has been considered for a variety of advanced diagnostic measurements at SPIDER, one of the two experiments hosted at the ITER Neutral Beam Test Facility located in Padova (Italy). In particular, high-speed, event-driven data acquisition, i.e. data acquisition during a time window centered on the occurrence time of a given event, possibly repeated during the experiment, represents a common use case at SPIDER. Event-driven data acquisition was previously carried out by a much more expensive commercial device, and the RedPitaya solution was considered not only for its price, but also because not all the requirements could be satisfied with the former solution. For this reason, a project was started aiming at developing a flexible FPGA configuration capable of satisfying all the requirements, in particular the required flexibility in event definition. Events triggering acquisition can indeed be represented by external triggers, but can also be derived from input signal characteristics such as level and steepness. Moreover, external triggers can be either directly provided or derived from the Manchester encoding of real-time events in the 10 MHz timing highway signal, a signal generated by the central timing system and distributed to all systems to provide an in-phase clock and asynchronous triggers.
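Event detection from level and steepness can be sketched in software as follows. The sampling rate, thresholds and test signal are all illustrative, and the real implementation runs in the FPGA fabric rather than in Python; the sketch only shows the two trigger criteria.

```python
import numpy as np

# Toy sketch: a trigger fires when the signal crosses a level threshold
# or when its slope (steepness) exceeds a limit. Values are illustrative.
fs = 125_000.0                   # toy sampling rate [Hz]
t = np.arange(2000) / fs
x = 0.05 * np.sin(2 * np.pi * 1e3 * t)
x[1200] = 0.9                    # a fast spike to be caught

level_hits = np.flatnonzero(np.abs(x) > 0.5)   # level criterion
slope = np.abs(np.diff(x)) * fs                # approximate |dx/dt|
slope_hits = np.flatnonzero(slope > 1e4)       # steepness criterion

trigger_index = int(min(level_hits.min(), slope_hits.min()))
print(trigger_index)
```

The steepness criterion fires one sample earlier than the level criterion here, which illustrates why combining both gives the flexibility in event definition mentioned above.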
RedPitaya event-driven data acquisition has also been used to provide streamed spectral measurements of RF sources. In this case, the boards receive a sequence of triggers (up to 200 Hz) and, at the occurrence of every trigger, acquire a burst of samples at high frequency (up to 125 MHz) that is then streamed to the network via the embedded Zynq CPU. FFT analysis is then performed either inline or offline to derive spectral information at every trigger time, thus implementing a continuous spectral measurement that would otherwise have required very expensive instrumentation.
Thanks to this flexibility, RedPitaya-based data acquisition is becoming more and more widely used at SPIDER, and the developed solutions, including flexible DAC devices, shall also be used at RFX-mod2, the upgrade of the RFX-mod fusion experiment currently under construction in the same laboratory.
In this paper, we propose a recurrent transformer model, a method for forecasting the temperature of the KSTAR PF coils in order to protect them during operation. In this work, we developed a transformer model that can recurrently forecast the output using the current time input data and the hidden state of the previous time step. The computation and memory overheads of running the recurrent transformer model are lower than those of a standard transformer model, because the model computes the forecasting output using only one input sample, without computing the entire sequence of the time window. The recurrent transformer model has been trained using the KSTAR PF coil temperature dataset acquired from the PF coil monitoring system. The performance of the proposed recurrent transformer model was compared with an LSTM and a standard transformer model in terms of R² score, mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), and quantile loss. The experimental results show that the error of the proposed recurrent transformer model is significantly lower than that of the other deep learning methods.
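The recurrence idea, computing each step from the current input plus cached state instead of re-processing the whole time window, can be illustrated with a toy single-head causal attention in numpy. The random weights and tiny dimensions are stand-ins, not the actual KSTAR model; the sketch verifies that the incremental evaluation matches full-sequence causal attention.

```python
import numpy as np

# Toy numpy sketch of recurrent transformer inference: causal attention
# evaluated one time step at a time, reusing cached keys/values from
# earlier steps. Weights are random stand-ins, not the KSTAR model.
rng = np.random.default_rng(3)
d = 4
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def step(x_t, cache):
    """Process one input vector using only the cached past state."""
    q, k, v = x_t @ Wq, x_t @ Wk, x_t @ Wv
    cache["k"].append(k)
    cache["v"].append(v)
    K, V = np.array(cache["k"]), np.array(cache["v"])
    w = np.exp(K @ q / np.sqrt(d))
    w /= w.sum()                        # softmax over the cached past
    return w @ V

cache = {"k": [], "v": []}
xs = rng.normal(size=(6, d))            # a short input stream
stream_out = np.array([step(x, cache) for x in xs])

# Reference: full causal attention over the entire sequence at once.
Q, K, V = xs @ Wq, xs @ Wk, xs @ Wv
S = Q @ K.T / np.sqrt(d)
S[np.triu_indices(6, 1)] = -np.inf      # causal mask
W = np.exp(S)
W /= W.sum(axis=1, keepdims=True)
full_out = W @ V
print(np.allclose(stream_out, full_out))
```

The per-step cost grows only with the cache length instead of recomputing the entire window, which is the source of the compute and memory savings claimed above.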
In the LHD experiment, physical data are managed by the Kaiseki Server, or Analyzed Data Server. In the past, the registration of physical data was done by each researcher in charge of a measurement, but since many analysis programs require other physical data as input, calculations could not be performed until the necessary data were registered. As a result, the registration of physical data was sometimes delayed.
To facilitate the registration of physical data after the experiment, the automatic Integrated Data Analysis (aIDA) system was developed. It enables data analysis without delay by starting any analysis program that uses a given piece of physical data as input as soon as that data is registered. As a result, aIDA has now registered about 8 million data items, or 45% of the total of 18 million.
On the other hand, with the increase in the number of analysis programs managed by aIDA, the following problems have arisen: 1) each program uses different versions of the libraries and data analysis tools, and it is difficult to provide the run-time environment for each program; 2) temporary files created during program execution remain and waste disk space.
In order to solve these problems, we decided to run each program in a container using Docker. Each container can have a different execution environment, with its own libraries and analysis tools. In addition, the files created during execution are deleted at the end of execution, so the number of files does not grow.
Furthermore, by running in containers, the execution environment can easily be moved to other PCs, and when a large amount of computation is required for batch processing, the processing speed can easily be improved by increasing the number of PCs used to run the computation. Container technology is also useful for porting the technology developed on the LHD to other experimental devices, and we are currently working on the containerization of the analysis server. In this presentation, the current status of the LHD analysis tools using container technology will be presented.
In order to generate high-performance plasma, it is desirable to keep a high-quality vacuum during the experiment. A mass spectrometer is commonly used to monitor the vacuum quality and to record the amount of atoms and molecules in the vacuum vessel. A leak is the most serious accident to avoid and must be indicated by recorded events such as a degradation of the vacuum and a change in the composition ratio of the particle types in the vacuum vessel. Therefore, we study an effective way to identify leaks in the vacuum vessel by analyzing mass spectrometer data. Our results indicate that clustering the composition ratio is useful. Fig. 1 shows the data of the mass spectrometer at 8:00 AM during a certain experimental period on the Q-shu University experiment with steady-state spherical tokamak (QUEST). Fig. 1 (a) shows the degree of vacuum in the vacuum vessel, and Fig. 1 (b) shows the signal strength of each mass number. The QUEST device uses turbo-molecular pumps and cryopumps for vacuum pumping, but when there were no experiments, the cryopumps were regenerated and only the turbo-molecular pump was working. As the pumping capability of the cryopump for the various gas species differs from that of the turbo-molecular pump, the trends of both the degree of vacuum and the signal strengths of the mass numbers changed. The air leak started on the 10th day. It can be seen that the signal intensities of mass number 18 (m18) and m28 are inverted after the leak took place. These two signals are mainly attributed to water and to nitrogen molecules or carbon monoxide, respectively. Since this was an air leak, m28 after the leak is mainly derived from nitrogen molecules. Fig. 2 shows the results of cluster classification using the data of each mass number normalized by the degree of vacuum as input data. Hierarchical clustering is adopted in this cluster classification [1], using the Euclidean metric and Ward's method [2].
The dendrogram shows that the data for the first 9 days without leaks form one cluster and the remaining data form another. Thus the data can be properly classified according to the change in the composition ratio of the components.
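The clustering step described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the actual QUEST measurements: each row represents a hypothetical daily composition ratio (signal per mass number, normalized by the degree of vacuum), with "no-leak" days dominated by m18 (water) and "leak" days dominated by m28 (nitrogen).

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)

# Synthetic daily composition ratios (columns: m18, m28, m44).
# 9 "no-leak" days dominated by water, then 5 "leak" days dominated by nitrogen.
no_leak = rng.normal(loc=[0.6, 0.2, 0.1], scale=0.02, size=(9, 3))
leak = rng.normal(loc=[0.2, 0.6, 0.1], scale=0.02, size=(5, 3))
X = np.vstack([no_leak, leak])

# Agglomerative (hierarchical) clustering with the Euclidean metric and
# Ward's method, then cut the resulting tree into two clusters.
Z = linkage(X, method="ward", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")

print(labels)  # the first 9 days share one label, the leak days the other
```

With well-separated composition ratios like these, the two-cluster cut of the Ward dendrogram reproduces the no-leak/leak split directly.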
We are developing a leak-identification system based on this classification method, with the mass spectrometer data as the main input. The system also collects information in real time from other devices, such as the status of the cryopumps and the status of the plasma discharge sequence. By adding this information as input data, we aim to perform a more accurate classification.
![Dendrogram of mass spectrometer data]
[1] Rokach, Lior, and Oded Maimon. "Clustering methods." Data Mining and Knowledge Discovery Handbook. Springer US, 2005. 321-352.
[2] Ward, Joe H. (1963). "Hierarchical Grouping to Optimize an Objective Function". Journal of the American Statistical Association. 58 (301): 236–244.
Data-driven models aim to find complex relations among different quantities without taking into account the physical mechanisms responsible for the underlying behaviour. From an engineering point of view, this is not an issue as long as the detection is correct. However, such models require large databases for training. The need for a high number of observations to build these models is an important problem in view of ITER and the next-generation tokamak DEMO. For instance, ITER and DEMO cannot wait for thousands of discharges (i.e., a whole campaign) to have a reliable disruption prediction system. Therefore, an important challenge is to develop data-driven models from scratch.
Developing pattern classifiers under data-scarce conditions (from scratch) is related to the study of training data-driven models with unbalanced data. In the unbalanced-data classification problem, the number of observations differs greatly between the classes. This issue can be addressed with techniques described in the literature that increase the number of samples of the minority class (such as SMOTE), but unfortunately such techniques rapidly increase the number of false positives, which dissuades their use given the ITER requirements on false alarm rates. That is why we need to study new approaches to build models from scratch. In this work we propose to build data-driven models from scarce nuclear fusion databases by using reinforcement learning.
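To make the minority-oversampling idea mentioned above concrete, here is a simplified SMOTE-style sketch (our own minimal re-implementation of the interpolation principle, not the reference SMOTE algorithm): synthetic minority samples are generated by linear interpolation between a minority sample and one of its nearest minority-class neighbours.

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic samples from minority-class data X_min
    by interpolating towards one of the k nearest minority neighbours."""
    rng = rng if rng is not None else np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to all minority samples (0 to itself)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1 : k + 1]  # k nearest, excluding itself
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Hypothetical example: 5 minority-class shots inflated by 20 synthetic samples.
X_min = np.random.default_rng(1).normal(size=(5, 4))
X_new = smote_like(X_min, n_new=20)
print(X_new.shape)  # (20, 4)
```

Because every synthetic sample lies on a segment between two real minority samples, the oversampled region can spill into the majority class near the decision boundary, which is one intuition for the false-positive inflation noted above.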
Reinforcement learning (RL) is an approach through which intelligent systems, called agents, are trained to constantly adapt to their environment. This training is done by giving positive (negative) feedback when the agent performs correctly (incorrectly). Unlike traditional machine learning algorithms, RL training is based on rewards or punishments to learn the best actions to take. This interaction does not happen with the training data directly, but with the environment through the optimization of an objective function (the reward function).
In our approach, the RL training of the model considers a cost function that treats correct classifications (i.e., hits on positive and negative samples) as positive feedback. Conversely, the pattern classification system (the agent) receives negative feedback (punishment) when it misclassifies. To validate the model, we selected the image dataset from the Thomson Scattering (TS) diagnostic of the TJ-II stellarator. The TS diagnostic provides temperature and density profiles by using five different classes of images, which capture spectra of laser light scattered by the plasma under different conditions. The results show that RL is a simple way to build pattern classifiers with scarce data in nuclear fusion.
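The reward/punishment scheme can be illustrated with a deliberately minimal toy setup (our own assumption, not the TJ-II implementation): a linear agent classifies samples, observes only a scalar reward (+1 for a hit, -1 for a miss), and when punished nudges its decision boundary towards the correct side.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class "environment": separable 2-D samples standing in for
# features extracted from diagnostic images.
X = np.vstack([rng.normal(-1.0, 0.3, (20, 2)), rng.normal(+1.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):
        action = int(w @ X[i] + b > 0)        # agent's classification of sample i
        reward = 1 if action == y[i] else -1  # environment feedback only
        if reward < 0:                        # punished: shift the boundary
            direction = 1 if y[i] == 1 else -1
            w += lr * direction * X[i]
            b += lr * direction

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(accuracy)
```

The agent never sees a gradient of a supervised loss, only the sign of the reward; with more realistic, high-dimensional data the same reward-driven loop would be wrapped around a deeper model.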
Acquiring new data in nuclear fusion devices is expensive. Access to new discharges is an important issue for many reasons, particularly because every shot is a physical experiment that requires a considerable amount of resources (economic, human, material, instrumentation, and time, among others). That is why it is interesting to study the possibility of building probabilistic models that learn to generate new fusion data, considering the existing experimental data and the relationships between the signals to be generated.
In this work, we propose to build a generator of nuclear fusion data by applying generative deep learning models. A generative model describes, in terms of a probabilistic model, how a dataset is generated. By sampling from such a model, new and realistic data can be generated.
It is important to note that generative models are probabilistic rather than deterministic. If the model simply computed the average of the existing data, it should not be considered generative, because it would always output the same data. Thus, the model must include stochastic or random mechanisms to generate new samples. This means that building or training a generative model aims to learn (mimic) the unknown probability distribution that explains every observation of the dataset. After the model is trained, it can be used to generate new examples that follow the underlying distribution but are suitably different from the existing observations of the dataset.
A key idea behind generative modelling is the use of representation learning. This approach aims to describe each observation of the existing dataset in a low-dimensional space (called the latent space) instead of trying to model the high-dimensional sample space directly. The next step is then to learn a function that maps a point from the latent space to the original domain. Both tasks, the representation and the mapping function, can be carried out successfully using deep learning models.
A variational autoencoder (VAE) is one of the most fundamental and well-known deep learning architectures for generative modelling. An autoencoder is a network trained to compress (encode) data to a latent space, with the ability to reconstruct (decode) the original input from that low-dimensional domain. In theory, any point in the latent space can be used to generate new data.
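The encode-sample-decode mechanics can be sketched in a few lines. This is a purely illustrative numpy forward pass with random, untrained weights and assumed toy dimensions, shown only to make the reparameterization step explicit; a real VAE would be trained end-to-end in a deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_latent = 8, 2  # assumed toy dimensions

# Random (untrained) encoder/decoder weights, for illustration only.
W_mu = rng.normal(size=(d_latent, d_in))
W_logvar = rng.normal(size=(d_latent, d_in))
W_dec = rng.normal(size=(d_in, d_latent))

def encode(x):
    # Encoder outputs a latent mean and log-variance, not a single point.
    return W_mu @ x, W_logvar @ x

def sample(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Map a latent point back to the original data space.
    return np.tanh(W_dec @ z)

x = rng.normal(size=d_in)
mu, logvar = encode(x)
x1 = decode(sample(mu, logvar))
x2 = decode(sample(mu, logvar))
# The stochastic sampling step makes two generations from the same input differ.
```

The stochastic sampling around the encoded mean is what makes the model generative rather than a deterministic compressor: nearby latent points decode to new, plausible variations of the data.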
The article describes a preliminary study of deep learning generative models for generating new samples from an existing dataset. In particular, a variational autoencoder has been tested. To validate the generative model, we selected the image dataset from the Thomson Scattering (TS) diagnostic of the TJ-II stellarator. The TS diagnostic provides temperature and density profiles by using five different classes of images, which capture spectra of laser light scattered by the plasma under different conditions. The use of VAE models could in theory be extended to generate other kinds of nuclear fusion data, such as waveforms.