Description
High Repetition Rate High Energy Density (HED) physics facilities are rapidly becoming a cornerstone for the development of next-generation compute, control, and optimization infrastructures required by emerging Inertial Fusion Energy (IFE) platforms. As the demand for more sophisticated and responsive experimental setups grows, the ability to efficiently process and analyze vast amounts of diagnostic data generated at high repetition rates is paramount. Establishing robust and scalable workflows that can ingest high-throughput, per-shot diagnostic data, perform edge analytics, and seamlessly integrate with high-performance computing (HPC) systems is now recognized as a foundational requirement for the advancement of IFE research.
The deployment of automated workflows across the entire compute continuum—from the initial data acquisition at experimental facilities to remote HPC resources—presents a complex challenge. It requires not only technical coordination and orchestration but also the development of interoperable systems capable of bridging diverse hardware and software environments. The goal is to enable real-time feedback, optimization, and control, thereby reducing the need for manual intervention and accelerating the pace of experimental innovation.
During experimental campaigns at the Extreme Light Infrastructure (ELI), we successfully developed and demonstrated an end-to-end, closed-loop experimental workflow. This workflow enabled proof-of-concept remote control and optimization of Laser Wakefield Acceleration (LWFA)-generated X-rays by manipulating plasma characteristics (i.e., the density profile) directly from remote HPC platforms. The system was designed so that human intervention was only necessary for the final validation of machine-learning-generated experimental parameters at the laser facility, prior to their application to the experiment. This significant reduction in manual oversight not only streamlined operations but also showcased the potential for autonomous experimental control.
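As a rough illustration of the human-in-the-loop gate described above, the following Python sketch shows one way such a closed-loop step could be structured. All names here (propose_density_profile, operator_approves, apply_to_experiment, the tunable parameters) are hypothetical placeholders for illustration and are not the actual interfaces used in the campaign.

```python
# Minimal sketch of a closed-loop optimization step with a human validation
# gate. All interfaces and parameter names are hypothetical placeholders;
# they do not reflect the actual ELI control or HPC inference interfaces.

from dataclasses import dataclass


@dataclass
class DensityProfileProposal:
    """Candidate plasma density profile settings suggested by the ML model."""
    backing_pressure_bar: float   # hypothetical tunable
    nozzle_position_mm: float     # hypothetical tunable
    predicted_xray_yield: float   # model's own estimate, shown to the operator


def propose_density_profile(shot_history: list[dict]) -> DensityProfileProposal:
    """Stand-in for the remote, HPC-side ML inference that proposes settings."""
    # In the real workflow this would run remotely and return via the
    # cloud data-exchange layer; here it is just a fixed placeholder.
    return DensityProfileProposal(30.0, 1.2, 0.85)


def operator_approves(proposal: DensityProfileProposal) -> bool:
    """Final human validation step before parameters touch the experiment."""
    print(f"Proposed settings: {proposal}")
    return input("Apply to next shot? [y/N] ").strip().lower() == "y"


def apply_to_experiment(proposal: DensityProfileProposal) -> None:
    """Placeholder for writing validated setpoints to the facility controls."""
    print("Applying validated parameters to the target area...")


def closed_loop_step(shot_history: list[dict]) -> None:
    proposal = propose_density_profile(shot_history)
    if operator_approves(proposal):        # the only human touchpoint in the loop
        apply_to_experiment(proposal)
    else:
        print("Proposal rejected; keeping current settings.")


if __name__ == "__main__":
    closed_loop_step(shot_history=[])
```

The design point the sketch tries to capture is the separation between the ML proposal and the hardware write, with an explicit operator confirmation in between, mirroring the single point of human intervention described above.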
The trans-Atlantic control system that underpinned this workflow was built by stitching together technologies spanning high-throughput edge compute infrastructure, cloud-based data communication, and HPC-based workflow tools. First, a containerized EPICS-based diagnostics control and data acquisition framework ensured reliable and modular management of experimental hardware. Second, a time-synchronized data archival mechanism was implemented to guarantee the integrity and traceability of all acquired data. Third, an event-driven data processing and filtration pipeline was deployed at the edge, enabling rapid analysis and selection of relevant data for further processing. Fourth, secure, end-to-end encrypted data communication was facilitated via a cloud-hosted data exchange platform, ensuring both the privacy and reliability of data transfers across continents. Finally, a modular machine learning pipeline was established, leveraging HPC workflow practices for both training and inference to optimize experimental parameters.
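To make the edge-side filtration stage more concrete, the sketch below shows a minimal, self-contained polling filter that scores each newly acquired shot and stages only passing shots for transfer. The directory layout, file format, quality metric, and threshold are illustrative assumptions; the deployed pipeline is event-driven and hands data to the cloud exchange platform rather than a local folder.

```python
# Illustrative edge filter: poll a per-shot drop directory, score each new
# shot, and stage only shots that pass a quality threshold for upload.
# Paths, the metric, and the threshold are assumptions for this sketch.

import shutil
import time
from pathlib import Path

INCOMING = Path("shots/incoming")      # hypothetical acquisition drop zone
OUTGOING = Path("shots/to_transfer")   # staged for the data-exchange layer
QUALITY_THRESHOLD = 0.5                # illustrative cutoff


def quality_metric(shot_file: Path) -> float:
    """Cheap per-shot score; in practice this would analyze the X-ray data."""
    # Placeholder: use file size (in MB, capped at 1.0) as a stand-in signal.
    return min(shot_file.stat().st_size / 1e6, 1.0)


def process_new_shots(seen: set[Path]) -> None:
    """Score every unseen shot file and copy passing shots to the outbox."""
    for shot_file in sorted(INCOMING.glob("*.h5")):
        if shot_file in seen:
            continue
        seen.add(shot_file)
        score = quality_metric(shot_file)
        if score >= QUALITY_THRESHOLD:
            shutil.copy2(shot_file, OUTGOING / shot_file.name)
            print(f"staged {shot_file.name} (score={score:.2f})")
        else:
            print(f"skipped {shot_file.name} (score={score:.2f})")


if __name__ == "__main__":
    INCOMING.mkdir(parents=True, exist_ok=True)
    OUTGOING.mkdir(parents=True, exist_ok=True)
    seen: set[Path] = set()
    while True:                      # simple polling loop standing in for a
        process_new_shots(seen)      # real event bus or filesystem notification
        time.sleep(1.0)
```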
This work demonstrates how experimental requirements, facility constraints related to data privacy and accessibility, and user operability shaped our workflow design, revealing both areas for improvement and potential pitfalls to avoid. Our approach represents a significant step forward, laying the foundation for future cross-facility, data-driven optimization of experiment-integrated scientific workflows.
| Country or International Organisation | United States of America |
|---|---|
| Affiliation | Lawrence Livermore National Laboratory |
| Speaker's email address | sarkar6@llnl.gov |