
9–12 Sept 2025
Fudan University, Shanghai, China

Integrated Intelligent Control Framework for Plasma in HL-3 Tokamak

10 Sept 2025, 10:45
30m
Auditorium Hall HGX 102 (Guanghua Twin Tower), Fudan University, Shanghai, China

220 Handan Road, Yangpu District, Shanghai, China
Oral (Invited)

Data Analysis for Feedback Control

Speaker

Rongpeng Li (Zhejiang University)

Description

In magnetic confinement fusion, precise control of plasma dynamics and shape is essential for stable operation. We present two complementary developments toward real‑time, intelligent control on the HL‑3 tokamak. First, we build a high‑fidelity, fully data‑driven dynamics model to accelerate reinforcement learning (RL)–based trajectory control. Historical interactions between the existing PID controller and the HL‑3 tokamak provide the training dataset; the resulting dynamics model then serves as the environment for RL training. By addressing the compounding errors inherent to autoregressive simulation, the model achieves accurate long‑term predictions of the plasma current and the last closed flux surface. Coupled with the EFITNN surrogate for magnetic equilibrium reconstruction, the RL agent learns within minutes to issue magnetic coil commands at $1$ kHz, sustaining a $400$ ms control horizon with engineering‑level waveform tracking. The agent also demonstrates zero‑shot adaptation to new triangularity targets, confirming the robustness of the learned dynamics.

[Figures: deployment of the PID and RL control systems on HL‑3; target‑tracking control results of the RL agent.]
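The compounding‑error problem that the dynamics model addresses can be illustrated with a toy sketch (the linear plant, the biased model, and all numbers below are illustrative assumptions, not HL‑3 code): a model that looks accurate on single-step prediction accumulates its small per‑step bias over an autoregressive rollout, which a multi‑step (horizon) loss exposes directly.

```python
import numpy as np

def rollout(model, s0, actions):
    """Autoregressive rollout: each prediction is fed back as the next input."""
    states = [s0]
    for a in actions:
        states.append(model(states[-1], a))
    return np.stack(states)

def multi_step_loss(model, s0, actions, true_states):
    """Mean squared error over the whole predicted horizon, not just one step."""
    pred = rollout(model, s0, actions)
    return float(np.mean((pred[1:] - true_states) ** 2))

# Toy linear plant s' = A*s + B*a, and a model with a small per-step bias.
A, B = 0.95, 0.1
plant = lambda s, a: A * s + B * a
model = lambda s, a: (A + 0.01) * s + B * a  # 1% bias in the state coefficient

actions = np.ones(50)
true_states = rollout(plant, 0.0, actions)[1:]

# One-step error: model always starts from the *true* state (looks tiny).
one_step = float(np.mean([(model(s, a) - sn) ** 2
                          for s, a, sn in zip(true_states[:-1],
                                              actions[1:],
                                              true_states[1:])]))
# Horizon error: the bias compounds step after step (much larger).
horizon = multi_step_loss(model, 0.0, actions, true_states)
```

Training against the horizon loss rather than the one‑step loss is one standard way to keep long rollouts faithful; the abstract does not specify which mechanism HL‑3 uses.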

Second, we develop a non‑magnetic, vision‑based method for real‑time plasma shape detection. We adapt the Swin Transformer into a Poolformer Swin Transformer (PST) that interprets CCD camera images to infer six shape parameters under visual interference, without manual labeling. Through multi‑task learning and knowledge distillation, PST estimates the radial and vertical positions ($R$ and $Z$) with mean average errors below $1.1$ cm and $1.8$ cm, respectively, in under $2$ ms per frame, an $80$ percent speed gain over the smallest standard Swin model. Deployed via TensorRT, PST enables a $500$ ms stable PID feedback loop based on image‑computed horizontal displacement.

[Figures: shared base DNN of PST; online deployment of the PST model with PID feedback control; comparison between the control segment of the PST model and other computational methods.]
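A minimal sketch of such an image‑driven feedback loop (everything here is an illustrative assumption: the intensity‑centroid estimator stands in for PST inference, and the drifting toy plasma and all gains are invented): the estimator turns each frame into a horizontal displacement, and a discrete PID controller drives that displacement back toward zero.

```python
import numpy as np

class PID:
    """Discrete PID controller with fixed sample time dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def estimate_displacement(frame):
    """Stand-in for PST inference: intensity-weighted column centroid,
    returned as pixels of horizontal offset from the image centre."""
    cols = frame.sum(axis=0)
    centroid = (cols * np.arange(frame.shape[1])).sum() / cols.sum()
    return centroid - frame.shape[1] / 2

# Toy loop: the plasma column drifts right; the controller pushes it back.
# kd is small because the derivative term divides by the 2 ms frame period.
pid = PID(kp=0.5, ki=0.1, kd=5e-4, dt=2e-3)
pos = 5.0  # initial horizontal offset, pixels
for _ in range(500):  # ~1 s of frames at 2 ms/frame
    # Synthetic 48x64 frame: a Gaussian bright column centred at 32 + pos.
    frame = np.exp(-0.5 * ((np.arange(64)[None, :] - (32 + pos)) / 4) ** 2) \
            * np.ones((48, 1))
    err = estimate_displacement(frame)
    pos += 0.02 - 0.1 * pid.step(err)  # constant drift minus actuator response
```

After the loop the offset has been regulated close to zero; in the deployed system the PST network replaces the centroid heuristic and coil currents replace the scalar actuator.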

Together, these two streams lay the groundwork for a fully closed‑loop, vision‑informed RL control system. Although each module has been tested on its own, the next step is to link real‑time shape feedback with the RL‑trained coil actuator policy to enable continuous, model‑based control with minimal reliance on magnetic probes.

Speaker's email address lirongpeng@zju.edu.cn
Speaker's Affiliation Zhejiang University
Member State or International Organizations China

Authors

Rongpeng Li (Zhejiang University)
Prof. Zhifeng Zhao (Zhejiang Lab)
Ms Niannian Wu (Zhejiang University)
Ms Qianyun Dong (Zhejiang University)
