
Workshop on Digital Engineering for Fusion Energy Research

Cambridge, Massachusetts, USA

Hacker Reactor at MIT’s iHQ. Address: 292 Main Street | MIT Bldg. E38 | Floor 7 | Cambridge, MA 02142
Description

KEY DEADLINES

30 September 2025 Deadline for submission of abstracts through IAEA-INDICO for contributed talks

7 November 2025 Notification of acceptance of abstracts and of assigned awards

21 November 2025 Deadline for submission of application for participation via the InTouch+ platform


Over the past decade, digital engineering has transformed the way complex systems are designed, analysed, and optimized, enabling more efficient and predictive approaches across various industries, including aerospace, automotive, and healthcare. In fusion energy research, the integration of simulation and data into technologies such as digital twins is emerging as a powerful tool to enhance the design, operation, and performance of fusion devices. By creating high-fidelity, real-time virtual replicas of physical systems, digital twins enable predictive maintenance, advanced control strategies, and accelerated innovation. These technologies are playing an increasingly vital role in fusion research and development, supporting experimental design, facility optimization, and operational planning. This workshop aims to bring together experts, researchers, and industry professionals to explore the latest advancements, share knowledge, and address key challenges in applying digital engineering to fusion energy. The insights and outcomes of this workshop will contribute to ongoing international efforts to accelerate fusion development through advanced computational methodologies.

Objectives

The purpose of the event is to bring together experts, researchers, and industry professionals to foster knowledge sharing and address emerging challenges in digital engineering applied to fusion energy research.

Target Audience

The event aims to bring together a multi-stakeholder, interdisciplinary audience of researchers, developers, practitioners, and entrepreneurs in digital engineering and fusion energy research and development to discuss applications, connect, and build collaborations.


    • Welcome and Intro
    • Simulation and Data Integration
      • 1
        Digital twin workflow development for SPARC

        Commonwealth Fusion Systems (CFS) is currently constructing the SPARC tokamak, a high-field (12 T) machine designed to achieve an energy gain of Q~11 when running in H-mode with DT fuel. Many SPARC components have already been manufactured, assembly and commissioning have begun, and preparation for operations in 2027 is underway. At each phase in the design-build-operations cycle, CFS employs state-of-the-art digital engineering toolchains to balance competing constraints from physics, engineering, schedule, and budget. This presentation outlines some of the digital twin workflows deployed across the SPARC lifecycle.

        During the design phase, Digital Twin Prototypes were leveraged to rapidly optimize designs for performance, manufacturability, and cost. These digital prototypes circumvent the need for expensive and time-consuming physical prototype manufacturing by enabling designers to directly calculate the sensitivity of a design to physics uncertainties and engineering tolerance stack-ups. Now in the assembly phase, high-fidelity metrology data (ranging from microns to mm) is ingested into the digital twin for reverse engineering. Additionally, by coupling metrology data directly to the physics modules, the impact of installation misalignments and as-built geometry on physics operations can be quantified.

        As CFS prepares for operations, each candidate plasma scenario can be tested in an integrated physics/engineering toolchain to predict plasma and engineering component state. The high fidelity offline toolchain is also used to benchmark the plasma control system and tune calibration factors. In the control room, digital twin workflows are being developed to identify discrepancies between prediction and experiment, and to reconstruct the machine state for lifetime health monitoring. Specific examples will be provided that illustrate how these integrated data techniques accelerate the path to commercial fusion energy.

        Speaker: Tom Looby (Commonwealth Fusion Systems)
      • 2
        A Multi-Physics Digital Twin for the Integrated Design and Optimization of an Inertial Fusion Energy Power Plant

        The design of a commercially viable inertial fusion energy (IFE) power plant presents a formidable optimization challenge, balancing near-term technological capabilities, scientific uncertainties, and final reactor-scale performance. Addressing this requires an integrated digital engineering approach. Focused Energy (FE) is developing a comprehensive digital twin (DTw) of an IFE power plant, designed to act as a high-fidelity virtual replica that captures the intricate interplay between all major systems. The DTw serves not only to guide design and R&D priorities but also to accelerate the entire development lifecycle through advanced simulation and data-driven analysis.

        Our digital twin is designed as a modular, multi-physics framework that connects detailed simulations of critical subsystems. Key modules include models for laser-plasma interactions, beam smoothing, and the reactor systems’ design and operation. These are coupled with explorations of target design, compression hydrodynamics, and optimizations for beam port geometry. The model extends to target manufacturing and delivery, as well as high-fidelity simulations of target injection and survival. Finally, the DTw integrates the balance-of-plant systems through simulations of neutron transport, tritium fuel cycle and dynamic chamber chemistry.

        Tying these disparate physics and engineering domains together is a sophisticated computational backbone centered on machine learning and advanced optimization. FE leverages probabilistic numerics and robust uncertainty quantification to navigate the vast design space. Furthermore, techniques such as causal inference are being explored to de-risk R&D decisions, while machine learning accelerators are being developed to reduce the computational cost of high-fidelity simulations.

        In this work, FE will detail the architecture of our integrated digital twin. We will discuss how this framework is used to inform key decisions in our pre-conceptual power plant design. Finally, we will present our efforts to leverage machine learning not just as an accelerator, but as a tool to create novel, performant and cost-effective designs for IFE.

        Speaker: Dr Valeria Ospina Bohorquez (Focused Energy)
      • 3
        PathSim: An Open-Source Python Framework for Digital Twin Applications in Fusion Fuel-Cycle Modeling

        We present PathSim, an open-source Python framework for modular, event-driven system modeling with applications to digital twin development in fusion energy research. PathSim enables researchers to construct complex, time-dependent models through a block-based architecture that supports dynamic system topology, hierarchical modeling, and seamless integration with existing scientific computing tools.

        The framework addresses key challenges in fusion fuel-cycle modeling by providing: (1) a modular component library for tritium breeding, extraction, processing, and storage systems; (2) flexible event detection and handling for operational mode analysis; (3) efficient coupling of multi-physics simulations through Python APIs; and (4) parallel execution capabilities for Monte Carlo analysis and parameter optimization.

        PathSim's capabilities have been validated against experimental data from MIT PSFC's BABY tritium release experiment. A 30-component tritium bubbler model achieves simulation times of ~1.5 seconds for complete transient analysis, enabling rapid iteration and uncertainty quantification workflows.

        PathSim's architecture facilitates co-simulation by wrapping external tools (FEniCS, FESTIM, and other Python-based solvers) as reusable blocks within unified system models. This approach enables researchers to combine domain-specific expertise across disciplines without monolithic software dependencies.
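        The wrapping pattern described above can be sketched generically. The sketch below is illustrative only: the `Block`/`run` names are stand-ins, not PathSim's actual API, and the wrapped "external solver" is reduced to a one-line tritium inventory model.

```python
# Minimal sketch of a block-based co-simulation loop: blocks expose named
# inputs/outputs, connections propagate values between them each step.
# NOTE: these class names are illustrative, NOT PathSim's actual API.

class Block:
    """A component with named inputs/outputs, advanced in fixed time steps."""
    def __init__(self):
        self.inputs = {}
        self.outputs = {}

    def step(self, t, dt):
        raise NotImplementedError


class ConstantSource(Block):
    """Emits a constant value, e.g. a tritium breeding rate."""
    def __init__(self, value):
        super().__init__()
        self.outputs["value"] = value

    def step(self, t, dt):
        pass


class InventoryBlock(Block):
    """Wraps an 'external' solver: dN/dt = source - lam*N (explicit Euler)."""
    def __init__(self, lam, n0=0.0):
        super().__init__()
        self.lam = lam
        self.outputs["inventory"] = n0

    def step(self, t, dt):
        n = self.outputs["inventory"]
        src = self.inputs.get("source", 0.0)
        self.outputs["inventory"] = n + dt * (src - self.lam * n)


def run(blocks, connections, dt, t_end):
    """Each step: propagate outputs along connections, then advance blocks."""
    t = 0.0
    while t < t_end:
        for (src, out_port), (dst, in_port) in connections:
            dst.inputs[in_port] = src.outputs[out_port]
        for block in blocks:
            block.step(t, dt)
        t += dt


source = ConstantSource(1.0)        # breeding rate (arbitrary units)
vessel = InventoryBlock(lam=0.5)    # release/decay constant (1/s)
run([source, vessel],
    [((source, "value"), (vessel, "source"))],
    dt=1e-3, t_end=20.0)
print(round(vessel.outputs["inventory"], 2))  # steady state -> source/lam = 2.0
```

        Wrapping a real external solver (FESTIM, FEniCS, ...) would replace the Euler update inside `step` with a call into that tool, which is what makes the topology reconfigurable without monolithic dependencies.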

        We are developing pathsim-fusion, an open-source toolbox of physics-informed components for the fusion community, with ongoing collaborations to expand capabilities in RF systems and plasma control.

        This work demonstrates how modern software engineering practices and open-source collaboration can accelerate fusion research through flexible, accessible simulation tools that bridge experimental data, multi-physics modeling, and operational planning.

        Speaker: Milan Rother
    • 10:35
      Coffee Break
    • Simulation and Modelling Techniques
      • 4
        An Integrated Multi-physics Platform for the LIBRTI Facility

        The UKAEA-led Lithium Breeding Tritium Innovation programme (LIBRTI) aims to de-risk fusion fuel-cycle technologies through the delivery of a test facility for component-scale modular tritium breeding experiments. Underpinning this effort is the delivery of a flexible and scalable digital platform, which will produce high-fidelity digital replicas of LIBRTI’s breeding experiments, and provide validation of its underlying multi-physics models. Not only may such a platform enhance scientific insights, but it should provide greater confidence for the application of in-silico design methodologies to breeder blankets. Such approaches can enable more rapid design-cycles, as well as identify innovative configurations, potentially reducing cost and accelerating the critical pathway to delivery of fusion power-plants.

        In this contribution we present the development of the integrated multi-physics simulation framework that will provide the core modelling capability of the aforementioned LIBRTI digital platform. Building upon existing software available in the MOOSE ecosystem (including Cardinal and TMAP8) for neutronics, fluid dynamics, heat transfer and tritium transport, complementary functionality will be provided for activation and charged particle transport through wrappings of FISPACT-II and FLUKA respectively. In parallel, modelling of molten salt, solid and liquid metal experimental breeder concepts are used to demonstrate current capabilities. We report on the status of these activities, and describe our outlook for model validation, uncertainty quantification, and automation.

        Speaker: Helen Brooks (UKAEA)
      • 5
        Multi-physics tritium transport modelling of the ARC breeding blanket with FESTIM

        Complex multi-physics simulations are required to evaluate tritium transport in fusion breeding blanket concepts, since only such approaches can capture the coupled neutronics, thermo-fluid, and tritium transport phenomena and provide the quantitative results needed by system designers (e.g. tritium inventory, residence time, throughput). An integrated digital workflow has been developed for simulating tritium transport in the ARC reactor breeding blanket. Neutronics analyses were performed using OpenMC, and thermo-fluid simulations were conducted with OpenFOAM. To enable coupled tritium transport studies, two new open-source tools, openmc2dolfinx and openfoam2dolfinx, were created to convert simulation results into dolfinx functions, a format directly usable by the tritium transport code FESTIM. This allows different physics domains to be brought together and enables the inclusion of effects such as turbulent diffusion in full three-dimensional blanket geometries.
        Application of this workflow to a simplified representative ARC blanket segment provides new insights into tritium behaviour. The total tritium inventory was estimated at 0.39 g, with transient simulations indicating a characteristic time of 1 h to reach steady state. The analyses can help identify stagnation zones in the coolant flow that act as local accumulation sites for tritium. In addition, the tritium leaving the blanket outlet was quantified, with a flux of 1.4 mg/s and a concentration of 3 × 10⁻⁴ mol/m³. Such information is of direct relevance to system designers, providing boundary conditions and input data for connected fuel cycle components such as tritium extraction systems for which tritium concentration is paramount.
        This work demonstrates how digital engineering approaches can accelerate the evaluation of blanket concepts by combining state-of-the-art simulation tools into an extensible multi-physics workflow, while also highlighting the value of using dedicated, domain-specific codes for their respective physics within a common framework. Work is currently underway at MIT to validate this workflow using data from smaller experimental platforms.

        Speaker: Dr James Dark (Plasma Science and Fusion Center - MIT)
      • 6
        Overview of the capabilities in the Multiphysics Object Oriented Simulation Environment and recent activities in modeling and simulation for fusion energy systems

        The Multiphysics Object-Oriented Simulation Environment (MOOSE) is being developed by United States National Laboratories and partner institutions around the world to support multi-fidelity multi-physics modeling and simulation of advanced nuclear systems. Among its many physics modules, it offers computational fluid dynamics, thermal hydraulics, solid mechanics (including contact), and electromagnetics capabilities. These are further extended by applications leveraging MOOSE’s finite element and finite volume capabilities, such as the Tritium Migration Analysis Program version 8 (TMAP8) code for tritium transport, Cardinal for high-fidelity multiphysics, and the Software for Advanced Large-scale Analysis of MAgnetic confinement for Numerical Design, Engineering & Research (SALAMANDER) code for scrape-off layer kinetic plasma analysis and blanket modeling. The tools cover a wide range of applications, are leveraged by academic and private actors, and are supported by an international community of institutions and developers. Further, MOOSE and many MOOSE-based applications follow the Nuclear Quality Assurance, Level 1 (NQA-1) software quality assurance standard, enabling trusted code and model development suitable for licensing of device designs.
        In this presentation, we will introduce and highlight features developed in MOOSE for multiphysics coupling over the last few years, summarize the open-source applications available online for fusion energy modeling, and present projects and collaborations of interest to the community, notably the newly developed capability to create Functional Mockup Units (FMU) using MOOSE. Please note that projects pertaining to modeling divertor components using MOOSE and SALAMANDER will be the focus of a sister presentation.

        Speaker: Guillaume Giudicelli (Idaho National Laboratory)
    • Lunch
    • Simulation and Modelling Techniques
      • 7
        Overview of the PSFC blanket and fuel cycle modelling activities

        Achieving reliable tritium self-sufficiency remains one of the defining challenges for fusion power-plant design, making accurate tritium fuel cycle modelling essential.
        At the MIT Plasma Science and Fusion Center, we are developing a unified digital framework that connects material-scale physics, component-level behaviour, and system-level fuel-cycle performance, informed and validated by experimental platforms.
        At the material scale, we combine thermal desorption spectroscopy (TDS) with parametric optimisation against experimental measurements (NRA, permeation experiments), supported by the development of an open-source database of tritium transport properties (HTM).
        We also investigate fuel-cycle component performance by leveraging multi-physics workflows (OpenFOAM, OpenMC, FESTIM): tritium transport dynamics in the ARC breeding blanket, the extraction efficiency of a PAV extractor, and tritium contamination in a heat exchanger.
        Finally, at the system level, we integrate these component models within a system modelling code (PathSim and its graphical interface PathView) to analyse complete systems, from lab-scale experiments like LIBRA/BABY to full power-plant concepts like ARC.
        This multiscale, multiphysics modelling strategy highlights how digital engineering can accelerate design, improve predictive capability, and support the development of tritium-robust fusion reactors.

        Speaker: Remi Delaporte-Mathurin (Plasma Science and Fusion Center, MIT)
      • 8
        Towards a Tritium Breeder Digital Twin

        Currently, the LIBRTI experimental facility is being built in the UK to serve as a test environment for tritium breeder blanket systems of various types and to accelerate technology development for fusion reactors, such as STEP. In support of the LIBRTI programme, we have developed a multi-physics simulation platform for generic breeder systems, initially focussed on liquid lithium technologies. A key objective of the work is the ability to optimise breeder system designs with many free parameters (relating, for example, to geometry, chemical composition, or operational conditions), and in the presence of various competing requirements. These requirements include, for example, tritium generation and heat extraction targets, corrosion resistance, and space restrictions. Ultimately, this platform is planned to serve as a digital twin supporting operation, maintenance and decommissioning of the breeder system. The implemented simulations include neutronics calculations based on Monte Carlo methods coupled with nuclear activation calculations as well as analytical models. Machine Learning based surrogate models have been adopted for the neutronics/activation calculations. Trained with simulated data, these Machine Learning models are integrated into the simulation of the entire liquid lithium system and enable the multi-targeted optimisation of the system. In addition, we are exploring Machine Learning models in support of the simulation of corrosion effects.

        The Machine Learning models predict not only the response values but also the uncertainties on these values. This enables targeted provision of additional training data to reduce uncertainties in the regions of parameter space of most interest.
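        This uncertainty-driven sampling loop (active learning) can be illustrated with a toy example. The surrogate below is deliberately simple: 1-D linear interpolation whose "uncertainty" is the distance to the nearest training point, a stand-in for the predictive variance of the actual Machine Learning models; the "expensive simulation" is a placeholder function, not a neutronics code.

```python
# Active-learning sketch: repeatedly run the expensive code at the
# candidate point where the surrogate's uncertainty is largest.
import bisect

def expensive_simulation(x):
    """Placeholder for a costly neutronics/activation run."""
    return (x - 0.3) ** 2

class ToySurrogate:
    def __init__(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)  # xs kept sorted

    def predict(self, x):
        """Return (mean, uncertainty): interpolate between the bracketing
        samples; uncertainty = distance to the nearest sample."""
        i = min(max(bisect.bisect_left(self.xs, x), 1), len(self.xs) - 1)
        x0, x1 = self.xs[i - 1], self.xs[i]
        w = (x - x0) / (x1 - x0)
        mean = (1 - w) * self.ys[i - 1] + w * self.ys[i]
        unc = min(abs(x - xi) for xi in self.xs)
        return mean, unc

    def add(self, x, y):
        i = bisect.bisect_left(self.xs, x)
        self.xs.insert(i, x)
        self.ys.insert(i, y)

candidates = [k / 100 for k in range(101)]
xs0 = [0.0, 1.0]
model = ToySurrogate(xs0, [expensive_simulation(x) for x in xs0])
for _ in range(8):
    # acquisition step: query the most uncertain candidate
    x_next = max(candidates, key=lambda x: model.predict(x)[1])
    model.add(x_next, expensive_simulation(x_next))
```

        After eight acquisitions the training points are spread where they reduce uncertainty most; with a probabilistic model (e.g. a Gaussian process) the same loop uses predictive variance instead of the distance proxy.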

        The presentation outlines the physics simulation, discusses the applications of Machine Learning and provides an overview of the software infrastructure developed.

        Speaker: Albrecht Kyrieleis (Amentum)
      • 9
        ARC Divertor and Heat Exchanger Thermal Hydraulic Modeling using the Nek5000 CFD Code

        As efforts to develop a fusion pilot plant progress, there is a significant need for open-source computational fluid dynamics tools for studying component design of fusion energy systems. Plasma Facing Components (PFCs) such as the divertor monoblock experience significant impinging heat fluxes on the order of 1–10 MW/m² as well as neutron heating. These heat fluxes are often applied to one side of the component, creating an uneven heating profile, while the neutron heating occurs throughout the monoblock. Adequately cooling PFCs, reducing thermal stresses, and remaining within material limits is an active design challenge in fusion energy systems. One way to increase the thermal performance of PFCs is through the use of passive heat transfer enhancements (HTEs). Adding HTEs such as twisted-tape inserts to the divertor or first wall coolant channels can lead to greater heat transfer capabilities. This increase in heat transfer is accompanied by an increase in frictional pressure losses, which need to be accounted for in finding optimal use-cases for HTEs.
        To address these modeling gaps, the authors have investigated ARC component behavior using Nek5000/NekRS, the U.S. Department of Energy’s open-source high-fidelity computational fluid dynamics code developed by Argonne National Laboratory. Cases covering ARC’s divertor coolant lines and heat exchanger tubes were run with Nek’s large eddy simulation capabilities for a range of Prandtl numbers at a nominal expected Reynolds number, with uniform and one-sided heat fluxes. This has enabled an understanding of how the heat flux boundary condition, from ideal to realistic, affects heat transfer coefficients with and without twisted-tape inserts. This work provides a road map towards the use of high-fidelity open-source tools for fusion engineering applications with varying multi-physics objectives.
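        The heat-transfer/pressure-drop trade-off behind this analysis can be illustrated with standard smooth-tube correlations (Dittus-Boelter for the Nusselt number, Blasius for the friction factor). The twisted-tape enhancement multipliers below are purely illustrative placeholders, not results from the Nek5000 study.

```python
# Back-of-envelope screening of a passive heat transfer enhancement (HTE):
# higher Nu comes at the price of higher friction factor f.

def nusselt_dittus_boelter(re, pr):
    """Nu = 0.023 Re^0.8 Pr^0.4 (turbulent tube flow, heating)."""
    return 0.023 * re**0.8 * pr**0.4

def friction_blasius(re):
    """f = 0.316 Re^-0.25 (turbulent, smooth tube)."""
    return 0.316 * re**-0.25

re, pr = 1.0e5, 5.0               # nominal channel conditions (illustrative)
nu0 = nusselt_dittus_boelter(re, pr)   # smooth-tube baseline
f0 = friction_blasius(re)

# Hypothetical twisted-tape insert: heat transfer up ~1.5x, friction up
# ~2.5x (placeholder multipliers chosen only for illustration).
nu_tt, f_tt = 1.5 * nu0, 2.5 * f0

# A common figure of merit for constant-pumping-power comparison:
# (Nu/Nu0) / (f/f0)^(1/3) > 1 means the enhancement pays for its losses.
fom = (nu_tt / nu0) / (f_tt / f0) ** (1 / 3)
print(f"Nu0={nu0:.0f}, f0={f0:.4f}, FOM={fom:.2f}")
```

        High-fidelity LES replaces these bulk correlations with resolved local heat transfer coefficients, which is exactly where one-sided versus uniform heating matters.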

        Speaker: Lane Carasik (Virginia Commonwealth University)
    • 15:20
      Coffee Break
    • Simulation and Modelling Techniques
      • 10
        First-Principles–Based Divertor Optimization: A Unified MCF Divertor Framework Applied to Wendelstein 7-X

        The stellarator’s steady-state capability offers inherent advantages for fusion power plants (FPP), including disruption-free operation and access to higher densities beyond the Greenwald density limit. However, reconciling particle exhaust and retention while fulfilling mandatory requirements of divertor life-time survival remains a critical challenge for reactor-relevant divertor operation in stellarators and other magnetic confinement fusion (MCF) devices.

        At Wendelstein 7-X (W7-X), we employ a six-sigma design methodology [1], a data-driven framework that optimizes processes by quantifying a priori performance metrics within six standard deviations (σ) of process yield, combined with the Kano model [2]. Following the principle of form follows function, we categorized divertor requirements into mandatory survival criteria (e.g., resistance to heat, sputtering, and mechanical stresses) and functional performance metrics (particle exhaust and retention). These performance metrics were further decomposed into eight a priori first principles. Statistical metrics derived for each principle enable quantitative assessment of the W7-X island divertor’s current performance, shown in the table below, and facilitate direct comparisons with existing and future divertor concepts.

        A field-aligned, simple SOL density model is utilized, in which perpendicular transport processes are described by a single stochastic process with a uniform perpendicular diffusion coefficient. Based on the resulting normal distribution across the common and private flux regions, we present seven distinct target geometries applicable to any MCF device with diverted field lines. These designs employ distinct neutral-management strategies, prioritizing attached exhaust through the SOL or PFR, or re-ionization on the incident field line, the separatrix, or the SOL density peak to drive volumetric ionization losses, potentially leading to higher volume recombination ratios.
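        A minimal sketch of such a model, with illustrative (non-W7-X) parameters: treating perpendicular transport as an unbiased random walk with uniform diffusion coefficient D⊥ over a parallel connection time t∥ yields an approximately normal profile of width √(2 D⊥ t∥) at the target.

```python
# Random-walk illustration of the single-stochastic-process SOL model:
# walkers diffuse perpendicular to the field while streaming to the target.
# All parameter values are illustrative, not W7-X numbers.
import math
import random

def perpendicular_spread(d_perp, t_parallel, n_walkers, dt, seed=0):
    """Walker positions after time t_parallel; step size chosen so that
    the variance grows as 2 * D_perp * t (standard diffusion scaling)."""
    rng = random.Random(seed)
    step = math.sqrt(2.0 * d_perp * dt)
    n_steps = int(t_parallel / dt)
    positions = []
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += step if rng.random() < 0.5 else -step
        positions.append(x)
    return positions

d_perp = 1.0     # m^2/s, uniform perpendicular diffusion coefficient
t_par = 1e-3     # s, parallel connection time to the target
pos = perpendicular_spread(d_perp, t_par, n_walkers=5000, dt=1e-5)

# Width of the resulting (approximately normal) strike-line profile:
sigma_analytic = math.sqrt(2.0 * d_perp * t_par)
mean = sum(pos) / len(pos)
sigma_mc = math.sqrt(sum((x - mean) ** 2 for x in pos) / len(pos))
```

        The Monte Carlo width converges to the analytic √(2 D⊥ t∥), which is the Gaussian used to position the target geometries relative to the separatrix.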

        A rapid modelling cycle based on anisotropic SOL diffusion EMC3-Lite modelling [3], coupled with COMSOL [4], solving the neutral transport in the molecular flow regime via the angular coefficient method and the continuous flow regime via differential equations, was established to evaluate strike line positioning and quantify the 1st, 3rd, 4th, and 5th of the 8 a priori metrics. We benchmarked these metrics for W7-X’s current divertor geometry with the 5/5, 5/4, and 5/6 resonant magnetic island configurations, and outline ongoing efforts in the W7-X divertor concept development, including the design and assessment of new divertor geometries.

        This principle-driven framework bridges stellarator-tokamak divides, offering unified divertor criteria for current and next-step MCF devices. By balancing reactor demands for particle control, retention and component longevity, it advances the path toward feasible FPPs.

        [1] Yang, K. and El-Haik, B., Design for Six Sigma, McGraw-Hill, New York, 2003.
        [2] Tontini, G., "Integrating the Kano model and QFD for designing new products", Total Quality Management 18(6), 599–612, 2007.
        [3] Feng, Y., et al., "Review of magnetic islands from the divertor perspective and a simplified heat transport model for the island divertor", Plasma Physics and Controlled Fusion 64(12), 125012, 2022.
        [4] COMSOL, Introduction to COMSOL Multiphysics®, COMSOL Multiphysics, Burlington, MA, 2018.

        Speaker: Dr Thierry Kremeyer (Max Planck Institut für Plasmaphysik)
      • 11
        Advancing SPARC Design Through Deterministic Methods: An Overview

        The design of fusion energy devices poses great challenges to the neutronics modeling community. Fusion devices, such as Commonwealth Fusion Systems’ (CFS) SPARC, are extremely complex devices characterized by a large number of components, streaming paths, and a spatially heterogeneous distribution of materials. In addition, the building that houses these fusion devices is characterized by thick walls of shielding with small diagnostics penetrations needed to allow neutrons to reach detectors and sensitive equipment. All these features make the estimation of particle flux and doses for shielding and detector design in fusion facilities very challenging, especially if quick design iterations are needed.

        Deterministic methods have been used for decades in neutronics modeling of engineering systems. Unlike stochastic Monte Carlo approaches, deterministic solvers rely on discretization of phase space to directly solve the Boltzmann transport equation, enabling rapid evaluations over large geometries and broad flux changes. This makes them particularly effective for parametric studies, sensitivity analyses, and iterative design processes for commercial fusion devices, where engineering and computational speed are essential. As such, deterministic transport simulations have become a key tool in shielding and neutron detector design activities at CFS.

        In this work, we present an overview of the application of deterministic methods for shielding and detector design analysis for SPARC. CFS utilizes the Attila deterministic solver to design shielding and estimate radiation doses in areas of the SPARC facility where neutron fluxes need to be attenuated by 6–10 orders of magnitude. These areas include the basement, the buildings around the Tokamak Hall, and the site boundary. Deterministic simulations have also been used to design shielding for the Diagnostics Hall wall, which is characterized by small penetrations that house different diagnostics systems. Furthermore, tools based on Attila deterministic simulations have been developed to support the design and calibration of neutron diagnostics systems. These tools leverage the adjoint flux and contributon concept to determine the source or scattering contribution to a detector response. When verified against Monte Carlo simulations and experimental benchmarks, we have found that deterministic approaches offer both speed and sufficient accuracy, making them an indispensable part of the design framework for advancing SPARC design. This presentation will showcase examples from different workflows and neutronics analyses, and discuss plans for future developments.
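        The attenuation requirement quoted above can be put in perspective with a single-group, uncollided-flux estimate. This is a drastic simplification of the full Boltzmann transport problem a solver such as Attila handles (no scattering, no energy dependence, no streaming paths), and the cross-section value below is illustrative rather than a SPARC material property.

```python
# How thick must a shield be to attenuate an uncollided neutron flux by
# 6-10 orders of magnitude, assuming simple exponential attenuation
# I/I0 = exp(-Sigma * x)?  Sigma is an illustrative value only.
import math

def shield_thickness(attenuation_orders, sigma_macroscopic):
    """Thickness x such that exp(-Sigma * x) = 10**(-orders)."""
    return attenuation_orders * math.log(10.0) / sigma_macroscopic

sigma = 0.1  # cm^-1, illustrative removal cross-section for fast neutrons
for orders in (6, 8, 10):
    x = shield_thickness(orders, sigma)
    print(f"{orders} orders of magnitude -> ~{x:.0f} cm")
```

        Each extra order of magnitude costs a fixed increment of thickness (ln 10 / Σ), which is why small penetrations that bypass the shield dominate the design problem.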

        Speaker: Andrea Saltos (Commonwealth Fusion Systems)
      • 12
        Divertor Monoblock Multiphysics Analysis Using the SALAMANDER Code

        Advanced computational tools play a crucial role in ensuring the rapid deployment of fusion energy systems due to the multiphysics interactions occurring at the component level. For example, plasma-facing components (PFCs), such as the divertor, undergo thermal loads and stresses, nuclear heating from neutrons and ions, and conjugate heat transfer in the solid material regions and water-cooling channels. High-fidelity multiphysics computational tools allow scientists and engineers to better understand the complex physical phenomena occurring in these components, ensuring their appropriate operation within fusion energy systems. To address these challenges, the Multiphysics Object Oriented Simulation Environment (MOOSE) framework is being leveraged for fusion modeling and simulation through the MOOSE-based Software for Advanced Large-scale Analysis of MAgnetic confinement for Numerical Design, Engineering & Research (SALAMANDER) code. SALAMANDER is designed as a multiphysics and multiscale computational tool capable of 3D high-fidelity fusion system modeling, and it leverages modular MOOSE-based physics capabilities such as thermal mechanics, heat transfer, and fluid dynamics. In addition to MOOSE, the MOOSE-based application Cardinal is employed by SALAMANDER for neutronics capabilities, and the MOOSE-based Tritium Migration Analysis Program, version 8 (TMAP8) is used for tritium transport and fuel cycle simulations. Finally, SALAMANDER also possesses capabilities for plasma edge modeling using particle-in-cell methods to perform high-fidelity PFC simulations.

        This work aims to leverage SALAMANDER’s capabilities to conduct a case study on an ITER-like divertor monoblock. The divertor is responsible for managing and removing excess heat and impurities from the plasma, protecting the tokamak's components, and maintaining plasma stability during operation. The developed model utilizes its multiphysics capabilities to simulate the neutron interaction from the plasma and the thermal-hydraulic effects from the cooling channel on the divertor monoblock. The neutronics analysis employs a quasi-static approach to calculate heating from a Monte Carlo neutron transport simulation, where a planar neutron source, following a plasma pulsing function, is defined at the top boundary of the divertor monoblock. The same pulsing function is then used to normalize the OpenMC tally results. The thermal-hydraulics analysis utilizes the Reynolds-Averaged Navier-Stokes (RANS) approach, implemented using the Navier-Stokes module in MOOSE, to model fluid flow through the divertor cooling channel. The heat transfer between the solid divertor monoblock and the water-cooling channel is handled using an interface kernel. Tritium is modeled using TMAP8 and accounts for diffusion, trapping, and solubility. By incorporating advanced modeling tools into a fully unified framework, SALAMANDER serves as a key resource for advancing the deployment of fusion energy.

        Speaker: Lane Carasik (Virginia Commonwealth University)
    • Simulation and Data Integration
      • 13
        IMAS components for digital twins

        We present our recent and upcoming developments on open-source components of an IMAS simulation environment aiming at digital-twin functionality for tokamaks. These range from a convenient pulse schedule and waveform editor to control and actuator simulator interfaces and examples, stateful synthetic diagnostics, simulation databases, (experimental) data analysis pipelines, experimental databases, data validation toolkits, and live IMAS data visualization. We will present how these components fit together and how they apply to a wide range of simulators and workflows at different fidelities.

        Speaker: Dr Daniel van Vugt (Ignition Computing)
      • 14
        Development of a Mod-Sim Workflow for Multiphysics Assessment in EX-Fusion’s Laser Fusion Test Chamber

        The development of inertial fusion energy (IFE) reactors requires chambers that can withstand extreme cyclic loads, making multiphysics coupling a critical element of digital engineering for predicting integrity and safety. At EX-Fusion, we are developing a demonstration chamber—our “triple-one-ten” project—designed for 10 Hz operation with 1–10 kJ laser shots on deuterium pellets for up to one hour. This platform, which builds upon our prior milestone of the first 10 Hz target engagement with metallic mock targets, requires predictive modelling of neutron transport, thermal-mechanical stresses, fluid dynamics, and tritium transport under pulsed irradiation.

        To address this, we have developed a modelling and simulation (mod-sim) workflow integrating PHITS for particle transport, nuclear heating, and dose mapping; Ansys for CFD-driven structural fatigue and failure analysis; and, in future iterations, FESTIM for tritium retention and diffusion. Our goal is to have fields such as temperature, heat flux, and advection be passed iteratively between codes, enabling dynamic time-dependent coupling. We have successfully demonstrated coupled CFD-mechanical simulations of our chamber system and would like to extend this framework toward full multiphysics integration, including tritium transport and plasma-wall interactions.

        Our workflow is designed to evolve in lock-step with our reactor roadmap: starting with simplified models for low-flux DD operation (10⁴–10⁵ n/shot), scaling to higher-flux regimes (10¹¹–10¹³ n/shot), and eventually incorporating liquid-metal wall, blanket, and fuel-cycle systems modelling in later reactor phases with neutron fluxes exceeding 10¹⁶ n/shot. Our approach emphasizes fast iteration and adaptive workflow refinement informed by experimental results.

        By presenting our workflow in this workshop, we aim to contribute to the broader fusion community’s efforts in digital engineering, supporting the development of coupled simulations for IFE reactors. This work directly addresses the workshop’s focus areas of simulation and modelling techniques, data integration, and workflow configuration management, and provides a case study of how agile, multiphysics-driven design can accelerate reactor R&D.

        Speaker: Max Monange (EX-Fusion Inc.)
      • 15
        Bringing together integrated simulation, surrogates, and data to support digital twins for fusion engineering

        Comprehensive digital twins for fusion devices require many components: multifidelity multiphysics modeling and simulation to describe complicated, interconnected systems; reduced-order and surrogate creation capabilities to enable designer-focused modeling; uncertainty quantification and stochastic simulation to inform design decisions; interfaces and connections to data warehouses and other sources of archival and experimental data to enable data-driven insights; and computational infrastructure to sustain reproducible science and engineering. At Idaho National Laboratory and at partner institutions, work has been ongoing for the last 17 years to create a robust simulation, surrogate modeling, and stochastic tools ecosystem based on the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework. MOOSE was originally built to focus on nuclear engineering technology development for advanced fission reactor systems, where multiphysics simulation is also vital to understand the long-term operation of a fission power plant. Here, we will discuss how several MOOSE-based applications—the Software for Advanced Large-scale Analysis of MAgnetic confinement for Numerical Design, Engineering & Research (SALAMANDER) for blanket-edge modeling, the Tritium Migration and Analysis Program, Version 8 (TMAP8) for tritium transport and fuel cycle analysis, the Zapdos application for scrape-off layer multi-fluid plasma modeling, the Cardinal application for high-fidelity computational fluid dynamics and neutronics, and the MOOSE Stochastic Tools Module for stochastic simulation and surrogate creation—combine to create a modular software ecosystem to support broad modeling efforts for fusion energy design and engineering. Also at INL, the Fusion Safety Program Archive and Nuclear Research Data System represent two efforts to bring together empirical research data (both historical and contemporary) with accessible high-performance computational resources. 
        Finally, the Fusion Energy Data Ecosystem and Repository (FEDER), led by General Atomics for the FIRE Collaboratives, will provide a community data and metadata ecosystem with standards, ontologies, provenance, APIs, and a federated “data commons” to support the community-driven, collaborative development of critical fusion technologies. Altogether, these projects and programs represent a shift in fusion energy design and engineering toward interoperable, modular, interconnected systems to hasten the progress toward fusion power plants. They also demonstrate how leveraging ongoing nuclear engineering efforts for fission systems can accelerate the development of emerging fusion technologies.

        Speaker: Casey Icenhour (Idaho National Laboratory)
    • 10:25
      Coffee Break
    • Simulation and Modelling Techniques
      • 16
        Towards Accurate Uncertainty Estimates in High-Fidelity Radiation Transport Simulations of Fusion Power Plants

        Accurate predictions of neutron behavior are central to the design of fusion power plants, yet the confidence we can place in those predictions is often just as important as the nominal results. This talk will examine the landscape of uncertainty in high-fidelity Monte Carlo (MC) and deterministic radiation transport simulations and the steps being taken to bring rigorous uncertainty quantification (UQ) into existing fusion digital engineering workflows.
        Sources of uncertainty arise at many levels: nuclear data uncertainties affect estimates of quantities like the tritium breeding ratio, structural activation, or energy deposition; geometric fidelity, particularly in complex first wall and blanket assemblies, strongly impacts localized damage and heating, streaming, and shielding predictions; and statistical uncertainties from MC sampling affect complex MC-coupled workflows like shutdown dose rates. Other high-impact uncertainty sources include material property data for advanced coolants and structural alloys, where uncertainties propagate into both thermal and nuclear performance estimates.
        This presentation will highlight which uncertainties can be reliably addressed in current digital engineering workflows for neutron transport and where critical gaps remain. Recent progress in OpenMC towards handling these challenges in CAD-based engineering workflows will be presented with a focus on unique integrated capabilities for both forward and adjoint solutions on complex geometries. By clarifying this landscape and presenting ongoing work to close some gaps in uncertainty analysis, we aim to guide where digital engineering efforts can most effectively reduce risk and build confidence in fusion power plant design.

        Speaker: Ethan Peterson (Massachusetts Institute of Technology)
      • 17
        Coupled Neutronics–CFD Workflow for ARC Tokamak Blanket and Coolant Design

        The ARC tokamak is a compact, high-field fusion pilot plant being designed by Commonwealth Fusion Systems to produce net electricity with high-temperature superconducting magnets. It uses a molten salt, FLiBe blanket for tritium breeding and heat removal in a simplified, high-performance design.
        We present a multiphysics digital engineering workflow that integrates mesh-based Monte Carlo neutronics (MCNP6) with ANSYS CFD to support increasingly detailed design of ARC’s blanket, vessel, and coolant systems. The workflow couples nuclear heating maps from MCNP6 with thermal–hydraulic simulations: heating distributions inform FLiBe flow requirements, while CFD returns temperature and velocity fields that guide material selection, structural optimization, and flow path design. Beyond heat transport, the workflow also models the generation and circulation of activation products within the FLiBe, particularly important isotopes such as N-16, F-18, F-20, and O-19. These short-lived radionuclides create a moving source term that extends outside the tokamak into the balance-of-plant, particularly at heat exchangers with the secondary nitrate salt loop. By linking neutronics with CFD, the workflow enables steady-state mapping of isotope distributions and the resulting dose fields throughout the facility. Since activated salt requires radiation shielding and contributes significantly to the plant’s external source term, it is advantageous to minimize radionuclide concentrations leaving the vessel. The workflow addresses this optimization challenge by providing a means to evaluate and design flow paths that maximize in-vessel decay before the coolant exits the machine.
        This paper describes the architecture of the data exchange and solver coupling, and presents preliminary results that highlight the potential of integrated multiphysics workflows to accelerate fusion power plant design while providing a more complete picture of nuclear heating, coolant performance, and radiological safety.
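
        The in-vessel decay optimization described above can be illustrated with simple exponential decay: the fraction of a short-lived radionuclide that decays before the coolant exits the vessel depends only on its half-life and the flow path's residence time. The half-lives below are published values; the single-path model and residence times are illustrative, not taken from the ARC workflow.

```python
import numpy as np

# Fraction of an activation product that decays in-vessel for a flow path
# of residence time t_res (seconds). Half-lives are published values; the
# residence times and single-path model are illustrative.

HALF_LIFE_S = {"N-16": 7.13, "O-19": 26.9, "F-20": 11.0, "F-18": 109.8 * 60}

def in_vessel_decay_fraction(isotope, t_res):
    lam = np.log(2.0) / HALF_LIFE_S[isotope]
    return 1.0 - np.exp(-lam * t_res)

# A longer residence time lets more N-16 decay in-vessel, reducing the
# source term carried to the balance-of-plant.
for t_res in (5.0, 30.0):
    f = in_vessel_decay_fraction("N-16", t_res)
    print(f"t_res = {t_res:5.1f} s -> {100 * f:5.1f}% of N-16 decays in-vessel")
```

Long-lived F-18, by contrast, barely decays on any plausible transit time, so it sets a floor on the out-of-vessel source term regardless of flow-path design.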

        Speaker: Austin Carter (Commonwealth Fusion Systems)
      • 18
        Tokamak Model Digital Lifecycle for Neutronics Simulation Applications

        At Commonwealth Fusion Systems (CFS), we are developing the SPARC fusion device, a high-field, compact tokamak designed to achieve net energy gain to demonstrate commercial viability of fusion energy [1]. Our fusion neutronics team is meeting the fast-paced needs of designers by executing compact, highly detailed neutronics models to influence the design of the SPARC device.
        Our in-device neutronics workflow relies on a detailed Computer Aided Design (CAD) model of the SPARC device to quickly analyze designs and verify SPARC meets its requirements. We have developed a process to directly integrate detailed CAD models of SPARC into our neutronics workflows to enable our team to provide accurate data that meets the designers’ timelines.
        The neutronics CAD model of SPARC comes directly from the SPARC Top Level Assembly (TLA). The SPARC TLA is an evolving CAD assembly made up of over a million parts within the Teamcenter and Siemens’ NX software [2,3]. The fusion neutronics team works directly with system designers to defeature the system models from the SPARC TLA to prepare a new model of SPARC specifically for neutronics analysis (Neutronics TLA). Our workflows utilize unstructured mesh, which does not require the high level of defeaturing that is needed for Constructive Solid Geometry (CSG), the commonly used geometry format for radiation transport analysis. Tools within Siemens’ NX are used to streamline the defeaturing process. The defeaturing process takes the over one million system parts of the SPARC TLA to about fifty system parts for the Neutronics TLA.
        In addition to the defeaturing process, custom NXOpen applications have been created specifically for the neutronics process to reduce processing time, produce consistent files, and provide traceability throughout the workflow. Material assignments are tied directly to the CAD models, and a custom NXOpen program was created to simplify the process of assigning materials and ensure that correct assignments are made. Tools have also been developed to rename solid bodies within NX to carry over key parameters when exporting CAD models to other software. Our neutronics workflows rely on different sector sizes of SPARC (e.g., a 20-degree sector model) for analysis, and we have developed tools to create and export various sector cuts within our Neutronics TLA. Once files are reviewed and verified to mesh and function correctly within the neutronics workflow, the Neutronics TLA is put through a release process within Teamcenter for traceability purposes.
        Leveraging Teamcenter and Siemens’ NX to build a detailed, realistic, and version-controlled model of SPARC has allowed our fusion neutronics team to keep up with the fast-paced needs of tokamak design for commercial purposes.
        References:
        1. A. J. Creely, M. J. Greenwald, S. B. Ballinger, et al., "Overview of the SPARC tokamak," Journal of Plasma Physics 86 (Status of the SPARC Physics Basis special issue), Cambridge University Press (2020).
        2. Siemens Digital Industries Software. (2024). Siemens NX (Version 2406).
        3. Siemens Digital Industries Software. (2024). Teamcenter (Version 2406).
        Acknowledgements: Work supported by Commonwealth Fusion Systems

        Speakers: Amanda Johnson (Commonwealth Fusion Systems), Loren Brandenburg (Commonwealth Fusion Systems)
    • Lunch
    • Simulation and Data Integration
      • 19
        Mesh and 3D Data Analysis Technology for Digital Twin

        A comprehensive digital twin for fusion energy requires advanced tools for handling complex 3D data. This presentation outlines an integrated technology suite that streamlines the entire simulation workflow, from mesh generation to data analysis. We have developed a robust pipeline that automates the creation of simulation-ready meshes from CAD geometry, significantly accelerating the design-to-analysis cycle for fusion reactors.

        Our framework supports the diverse and scalable mesh configurations essential for fusion modeling, including 2D surface meshes of plasma-facing components and 3D partitioned plasma wedges managed by the PUMI infrastructure. Core innovations include a high-performance algorithm for accurately detecting particle-wall collisions and advanced visualization of complex mesh data using Plotly and the Unreal Engine. Ultimately, these technologies are being developed for integration into WILL (Versatile Virtual platform for Integrated fusion simuLation and anaLysis), the digital twin platform under development at the Korea Institute of Fusion Energy.

        Speaker: Eisung Yoon (Ulsan National Institute of Science and Technology)
      • 20
        PCMS: A Geometry and Discretization Aware Multi-physics Coupling Tool for Fusion Devices

        Coupling fusion device codes to engineering analysis codes presents unique challenges in the physical and temporal scales the computations must take into account, the range of coordinate systems, the high dimensionality of phase space, and the geometric complexity involved. These challenges require new approaches that enable efficient coupling on exascale supercomputers and adherence to physical constraints, such as conservation of moments, that are required for stable coupling schemes. In this talk, we introduce developments in the Parallel Coupler for Multimodel Simulation (PCMS) to support physics-preserving coupling of fusion codes (e.g., neutrals, gyrokinetic microturbulence, neutronics) to each other and to traditional engineering analysis codes such as finite element solvers. Key aspects include the handling of geometries ranging from engineering and parameterized CAD models to physics-based geometries such as DESC, VMEC, and geqdsk. Additional features include conservative field transfer methods in up to five dimensions and the distributed coordination and control of partitioned, exascale simulations. In addition to presenting an overview of the key functionalities in PCMS, this talk will demonstrate specific example couplings achieved.
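
        The conservative field transfer mentioned above can be illustrated in one dimension: a cell-averaged field is transferred between two non-matching meshes by integrating over cell overlaps, which preserves the zeroth moment exactly. This toy sketch is not PCMS code; the meshes and field values are invented.

```python
import numpy as np

# Conservative (integral-preserving) transfer of a piecewise-constant
# field between two non-matching 1-D meshes: the simplest analogue of the
# conservative field transfer described above.

def conservative_transfer(src_edges, src_vals, dst_edges):
    dst_vals = np.zeros(len(dst_edges) - 1)
    for j in range(len(dst_vals)):
        a, b = dst_edges[j], dst_edges[j + 1]
        total = 0.0
        for i in range(len(src_vals)):
            lo = max(a, src_edges[i])
            hi = min(b, src_edges[i + 1])
            if hi > lo:  # overlap of source cell i with target cell j
                total += src_vals[i] * (hi - lo)
        dst_vals[j] = total / (b - a)  # cell-averaged value on the target
    return dst_vals

src_edges = np.array([0.0, 1.0, 2.0, 3.0])
src_vals = np.array([2.0, 4.0, 1.0])
dst_edges = np.linspace(0.0, 3.0, 7)  # finer, non-matching mesh
dst_vals = conservative_transfer(src_edges, src_vals, dst_edges)

# The integral (zeroth moment) of the field is preserved exactly.
```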

        This research was supported by the U.S. Department of Energy Office of Science FES and ASCR through four SciDAC-5 Partnership Centers (1) StellFoundry: High-fidelity Digital Models for Fusion Pilot Plant Design (DE-AC02-09CH11466), (2) HiFiStell: High-Fidelity Simulations for Stellarators (DE-SC0024548), (3) Computational Evaluation and Design of Actuators for Core-Edge Integration (CEDA) (DE-AC02-09CH11466), (4) Center for Advanced Simulation of RF – Plasma – Material Interactions, and the FastMath SciDAC institute (DE-SC0021285).

        Speaker: Jacob Merson (Rensselaer Polytechnic Institute)
      • 14:55
        Walk from Hackerspace to PSFC
      • 15:15
        PSFC Tour
    • Poster Session (at PSFC)
      • 21
        An Open-Source Divertor Digital Twin Environment for Fusion Power Plants

        Digital engineering is reshaping fusion R&D, and the in-development Divertor Digital Twin Environment (DDTE) aims to provide an end-to-end, open-source workflow that shortens the path from late-stage divertor design to plant operation readiness. The DDTE is organised around three complementary flavours, each deliberately modular so that best-in-class community codes can be swapped in as they mature.

        Design Studio – Starting from native CAD or equilibrium geometry, the pipeline invokes established mesh generators to create prototype meshes and optimises candidate diagnostics. Material and thermal properties are inserted via OMAS-compatible databases, ready for local or HPC execution.

        Scenario Lab – 3D plasma-surface interaction scenarios are assembled by chaining HEAT, FUSE and, as development continues, edge-SOL solvers such as SOLPS-ITER and HERMES-3. Each run outputs time-resolved temperature, stress and erosion fields annotated with VVUQ metadata, and feeds a prognostics-and-health-management module estimating damage accumulation.

        Twin Console – A divertor digital twin instance will ingest live diagnostic streams (infrared, thermocouples) and fuse the sensor data before assimilating it into an ensemble of scenario predictions through Bayesian state estimation. The console reconciles data gaps and forecasts lifetime and maintenance windows.
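
        The Bayesian assimilation step sketched above amounts to weighting the scenario ensemble by the likelihood of each live measurement. A minimal illustration follows, with invented scenario predictions, sensor noise, and measurement (not DDTE code):

```python
import numpy as np

# Posterior weights over an ensemble of scenario predictions, given one
# noisy sensor reading. All numbers are illustrative.

scenario_temps = np.array([900.0, 1000.0, 1100.0, 1200.0])  # predicted surface T (K)
prior = np.full(4, 0.25)          # equally likely scenarios a priori
sigma = 40.0                      # assumed infrared-sensor noise (K)
measured = 1080.0                 # one live measurement (made up)

# Gaussian likelihood of the measurement under each scenario, then Bayes' rule
likelihood = np.exp(-0.5 * ((measured - scenario_temps) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()

most_likely = scenario_temps[np.argmax(posterior)]  # scenario the twin tracks
```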

        Key characteristics are: (i) free and open-source software licensing to encourage broad uptake; (ii) an intuitive GUI with scripting back-ends to assist commercialisation; (iii) active multi-institutional co-design to avoid “reinventing the wheel”; and (iv) easy-click installer and pre-built Apptainer deployment that scales from laptops to clusters. Current prototyping milestones are presented together with a roadmap that charts the next steps toward a minimally viable DDTE.

        Speaker: Michael Battye (University of York)
      • 22
        An Overview of Methods and Applications for Data-Driven Predictive Maintenance in Fusion Devices

        Design activities for pilot fusion power plants are progressing worldwide, with the objective of demonstrating stable, reliable energy production and economic viability. The transition from ITER to next-generation fusion power plants marks a pivotal shift from a science-driven initiative to an industry- and technology-oriented programme. Consequently, future demonstration reactors, such as DEMO, must meet ambitious Reliability, Availability, Maintainability, and Inspectability (RAMI) targets [1]. These include remote maintenance of the fusion core within acceptable time frames, routine operation with minimal unscheduled shutdowns, and an availability exceeding 50%, with a trajectory toward commercially viable levels. The Nuclear Fusion research unit (Infusion) at Ghent University supports this transition by developing data-driven predictive maintenance (PdM) strategies based on advanced statistical and machine learning techniques. This contribution illustrates the methodology through two representative use cases: (i) the development of a condition monitoring algorithm for circuit breakers in the ohmic heating circuit at JET, a critical component whose failure could lead to unexpected interruptions of plasma operation [2]; and (ii) the estimation of the remaining useful life of beryllium tiles subjected to steady-state thermal loading, as an example of PdM applied to plasma-facing components, a subsystem whose unexpected failure can lead to several months of reactor downtime [3].

        [1] D. Maisonnier (2018). RAMI: The main challenge of fusion nuclear technologies. Fusion Engineering and Design, 136, 1202–1208. https://doi.org/10.1016/j.fusengdes.2018.04.102
        [2] L. Caputo et al. (2023). Predictive maintenance in fusion devices with an application to the ohmic heating circuit at JET. 30th IEEE Symposium on Fusion Engineering (SOFE 2023), Abstracts, Oxford, United Kingdom.
        [3] L. Caputo et al. (2025). Predictive maintenance in fusion devices: Application to condition monitoring of plasma-facing components. 31st IEEE Symposium on Fusion Engineering (SOFE 2025), Abstracts, Cambridge, MA.

        Speaker: Leonardo Caputo (Ghent University)
      • 23
        Bayesian Experimental Design to Enable Digital Twins

        Fusion is fundamentally cutting-edge, and to achieve economic fusion energy the field must advance the understanding of engineering, materials, and plasma phenomena through experiments and test facilities. When designing a facility or experiment, traditional approaches to diagnostics, control, or experimental setup often rely on manual or intuitive decision making. This can be very time consuming and expensive and may pose significant risks to the aims of the experiment. In this work we have developed a novel methodology for designing fusion experiments using Bayesian experimental design.
        Bayesian experimental design is a framework to assess the ability of a system to be measured and controlled, based upon Bayesian theory and probabilistic AI methods such as Gaussian Processes. The framework inherently accounts for uncertainties in diagnostics, controllers, and models. It can be used by diagnosticians or decision makers to assess the uncertainty or risk of a design, or make automated designs, such as the viewing angle or placement of a camera. Crucially, this is done through the lens of information gain - a measure of how well a sensing system can differentiate between different simulated states within uncertainty.
        Practically, this novel approach represents a fundamental shift in how sensors and experiments are designed, to be more flexible, integrated, and holistic. Flexible, in that the single framework can answer many diverse questions about a design, including the uncertainty of key quantities of interest, or how sensing performance reduces under failure. Integrated, in that it determines the added information gained by combining data from sensors which may conventionally be kept separate. And holistic, in that it assesses a sensor set’s ability not to measure one value, but to distinguish between different possible states of the fusion experiment as an entire system.
        The benefits of this Bayesian design framework make it seamlessly compatible with digital twins—virtual counterparts of physical fusion devices. In fact, the basic structure of a digital twin is a set of experimental data that is compared to simulations of the real asset, to identify which simulated state is most likely and track this digital version of the asset. Digital twins also propose to link information from disparate diagnostics together. The proposed framework designs exactly for this, analysing the ability for a complex integrated diagnostic system to distinguish between simulated states. Though digital twins are not the only application of this framework, they represent an application that is in need of modern design tools.
        To validate the proposed method, the UKAEA and digiLab have applied the Bayesian design software to challenges across fusion. For example, this framework was used to assess how uncertainties in equilibrium reconstruction of MAST are impacted by various levels of sensor failure. Additionally, this framework was applied to automatically recommend optimal integrated designs of thermocouples on materials test facilities. Though this initial demonstration work shows the capabilities of Bayesian design, this tool can be applied to a wealth of other areas in fusion.
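
        The information-gain criterion described above can be made concrete with a toy calculation: the mutual information between a set of equally likely simulated states and a noisy scalar sensor, estimated by Monte Carlo. The states, noise levels, and Gaussian sensor model below are invented for illustration and are not the UKAEA/digiLab implementation.

```python
import numpy as np

# Information gain of a sensor = prior entropy minus expected posterior
# entropy over simulated states, estimated by Monte Carlo. A lower-noise
# sensor separates the states more cleanly and scores higher.

def information_gain(state_values, sigma, n_mc=20000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(state_values)
    prior_entropy = np.log(k)  # nats, uniform prior over states
    s = rng.integers(k, size=n_mc)                      # sample true states
    y = state_values[s] + sigma * rng.normal(size=n_mc)  # noisy readings
    like = np.exp(-0.5 * ((y[:, None] - state_values[None, :]) / sigma) ** 2)
    post = like / like.sum(axis=1, keepdims=True)        # Bayes' rule per sample
    post_entropy = -np.sum(post * np.log(post + 1e-300), axis=1).mean()
    return prior_entropy - post_entropy

states = np.array([1.0, 2.0, 3.0])     # e.g. three candidate plasma states
gain_good = information_gain(states, sigma=0.2)   # precise sensor
gain_poor = information_gain(states, sigma=2.0)   # noisy sensor
# The precise sensor distinguishes the simulated states far better.
```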

        Speaker: Cyd Cowley (digiLab)
      • 24
        Development of a data framework for conceptual design of fusion reactor

        Iterative simulations and analyses are required during the conceptual design of a fusion reactor, as the design continuously evolves. It is important to keep track of the physical and engineering rationale for design modifications, with systematic links to the supporting data. This becomes increasingly important as design processes become semi-automatic, employing advanced design optimization algorithms such as Bayesian optimization and genetic algorithms. In this contribution, we report our recent progress in developing a dedicated data framework to support the design changes arising during the conceptual design of a fusion reactor. Employing GitLab and Data Version Control (DVC) technology [1,2], the framework is being developed to handle and track both design data and the large simulation and analysis datasets associated with design updates. We apply a preliminary version of the data framework to a design optimization process and critically examine its feasibility. We also discuss a way to coordinate this data framework with the digital twin platform [3,4] to virtualize a design-phase fusion reactor.

        [1] GitLab. https://about.gitlab.com/
        [2] DVC. https://dvc.org/doc
        [3] Jae-Min Kwon et al., Fusion Eng. Des. 184 (2022) 113281
        [4] Jae-Min Kwon et al., IEEE Trans. Plasma Sci. 52 (2024) 3910

        Speaker: JAE MIN Kwon (National Fusion Research Institute)
      • 25
        Development of a Neural Operator-Based Surrogate Model for Accelerated Gyrokinetic Transport Calculations

        The accurate prediction of turbulent transport in magnetically confined fusion (MCF) plasmas relies heavily on first-principles gyrokinetic simulations. However, the high computational cost of these calculations, often requiring weeks to months on high-performance computing platforms, presents a significant bottleneck for their inclusion in integrated modeling workflows and the rapid analysis of tokamak experiments. Consequently, the fusion community often relies on reduced-order transport models, such as TGLF, which necessarily trade some physical fidelity for computational tractability.

        This work presents our progress in developing a machine learning (ML)-based surrogate model for δf flux-tube gyrokinetic simulations, aimed at overcoming this computational barrier. We leverage physics-informed neural operators, a class of deep learning models adept at learning the solutions to parametric partial differential equations, to create a fast and accurate surrogate. Our approach involves training the model on a database of high-fidelity gyrokinetic simulations spanning a wide range of physical input parameters.

        In this presentation, we will detail the architecture of our neural operator model, the training methodology, and the current status of its development. We will showcase validation results against unseen gyrokinetic simulation data and discuss performance metrics, focusing on both accuracy and computational speed-up. The ultimate goal of this research is to create a surrogate capable of providing near real-time turbulent transport predictions. Such a tool could be a transformative component for advanced control strategies, experimental planning, and the eventual development of high-fidelity digital twins for fusion devices, thereby contributing to the international effort to accelerate fusion development.
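
        The train-on-database, query-fast pattern described above can be sketched with a deliberately simple stand-in: a random-Fourier-feature ridge regression fitted to a synthetic "flux" function. This is not a neural operator, and the physics is invented; it only illustrates the surrogate workflow of fitting a fast model to a database of precomputed simulations.

```python
import numpy as np

# Minimal surrogate sketch: fit a fast regression model to a database of
# (input parameters -> flux) pairs, then query it cheaply. The synthetic
# flux function and random-Fourier-feature regressor are illustrative.

rng = np.random.default_rng(1)

def expensive_flux(x):
    """Synthetic stand-in for a gyrokinetic flux vs. two input parameters."""
    return np.maximum(0.0, x[:, 0] - 1.0) ** 1.5 * np.exp(-0.3 * x[:, 1])

# "Database" of simulations over the input domain
X = rng.uniform([0.0, 0.0], [4.0, 3.0], size=(500, 2))
y = expensive_flux(X)

# Random Fourier features + ridge regression
W = rng.normal(size=(2, 200))
b = rng.uniform(0.0, 2.0 * np.pi, 200)
phi = lambda Z: np.cos(Z @ W + b)
A = phi(X)
coef = np.linalg.solve(A.T @ A + 1e-3 * np.eye(200), A.T @ y)
surrogate = lambda Zq: phi(Zq) @ coef  # near-instant predictions

Xtest = rng.uniform([0.0, 0.0], [4.0, 3.0], size=(200, 2))
rmse = np.sqrt(np.mean((surrogate(Xtest) - expensive_flux(Xtest)) ** 2))
```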

        Speaker: Abetharan Antony (Zenithon Ai)
      • 26
        Digital Engineering for Fusion: Uncertainty Quantification in Neutronics Modeling

        Advancing digital engineering for fusion energy requires domain-specific models that can support design, enable predictive simulations, and faithfully replicate experimental results, while also quantifying the associated uncertainties. We demonstrate an uncertainty quantification (UQ) workflow applied to a high-fidelity computational model of a benchmark fusion-relevant neutronics experiment. The workflow integrates multiple sources of input data uncertainties and propagates them through the simulation framework. This approach enables a rigorous assessment of predictive confidence and highlights the key drivers of variability in neutronics responses. The results underline the importance of embedding UQ into digital engineering pipelines, laying the groundwork for future uncertainty-aware digital twins of fusion components and facilities.
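
        The propagation step of such a UQ workflow can be sketched as plain Monte Carlo sampling: draw the uncertain inputs, evaluate the model for each draw, and examine the induced spread in the response. The toy response and input uncertainties below are invented; a real workflow would wrap a transport code.

```python
import numpy as np

# Monte Carlo propagation of input uncertainties through a toy model:
# a cross-section scale factor (2 % uncertain) and a material density
# (1 % uncertain), both illustrative.

rng = np.random.default_rng(2)
n = 5000

xs_scale = rng.normal(1.0, 0.02, n)   # nuclear-data uncertainty
density = rng.normal(1.0, 0.01, n)    # material density uncertainty

def toy_response(xs_scale, density):
    """Toy stand-in for a neutronics response (e.g. a breeding-ratio figure)."""
    return 1.1 * xs_scale * density ** 0.8

r = toy_response(xs_scale, density)
mean, std = r.mean(), r.std()
# Crude importance ranking via correlation of each input with the response
rank = {name: abs(np.corrcoef(v, r)[0, 1])
        for name, v in {"xs_scale": xs_scale, "density": density}.items()}
```

Here the cross-section scale dominates the response variance, which is exactly the kind of "key driver" a real workflow would flag.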

        Speaker: Stefano Segantin
      • 27
        Integrating Edge Analytics and HPC for Autonomous HED Experimental Workflows

        High Repetition Rate High Energy Density (HED) physics facilities are rapidly becoming a cornerstone for the development of next-generation compute, control, and optimization infrastructures required by emerging Inertial Fusion Energy (IFE) platforms. As the demand for more sophisticated and responsive experimental setups grows, the ability to efficiently process and analyze vast amounts of diagnostic data generated at high repetition rates is paramount. Establishing robust and scalable workflows that can ingest high-throughput, per-shot diagnostic data, perform edge analytics, and seamlessly integrate with high-performance computing (HPC) systems is now recognized as a foundational requirement for the advancement of IFE research.
        The deployment of automated workflows across the entire compute continuum—from the initial data acquisition at experimental facilities to remote HPC resources—presents a complex challenge. It requires not only technical coordination and orchestration but also the development of interoperable systems capable of bridging diverse hardware and software environments. The goal is to enable real-time feedback, optimization, and control, thereby reducing the need for manual intervention and accelerating the pace of experimental innovation.
        Through experimental campaigns at the Extreme Light Infrastructure (ELI), we successfully developed and demonstrated an end-to-end, closed-loop experimental workflow. This workflow enabled proof-of-concept remote control and optimization of Laser Wakefield Acceleration (LWFA) generated X-rays by manipulating plasma characteristics (i.e., the density profile) directly from remote HPC platforms. The system was designed so that human intervention was only necessary for the final validation of machine learning-generated experimental parameters at the laser facility, prior to their application to the experiment. This significant reduction in manual oversight not only streamlined operations but also showcased the potential for autonomous experimental control.
        The trans-Atlantic control system that underpinned this workflow was built upon stitching together technologies from high-throughput edge compute infrastructure, cloud-based data communication and HPC-based workflow tools. First, a containerized EPICS-based diagnostics control and data acquisition framework ensured reliable and modular management of experimental hardware. Second, a time-synchronized data archival mechanism was implemented to guarantee the integrity and traceability of all acquired data. Third, an event-driven data processing and filtration pipeline was deployed at the edge, enabling rapid analysis and selection of relevant data for further processing. Fourth, a secure, end-to-end encrypted data communication was facilitated via a cloud-hosted data exchange platform, while ensuring both the privacy and reliability of data transfers across continents. Finally, a modular machine learning pipeline was established leveraging HPC workflow practices both for training and inference to optimize experimental parameters.
        This work demonstrates how experimental requirements, facility constraints related to data privacy and accessibility, and user operability shaped our workflow design, revealing both areas for improvement and potential pitfalls to avoid. Our approach represents a significant step forward, laying the foundation for future cross-facility, data-driven optimization of experiment-integrated scientific workflows.

        Speaker: Abhik Sarkar (Lawrence Livermore National Laboratory)
      • 28
        Leveraging Bayesian optimization for automated plasma composition optimization of ARC with physics-based turbulent transport models

        Despite the existence of physics-based turbulent transport models, new tokamaks have historically been designed, at least initially, using empirical scaling laws because of the large computational expense of physics-based models. However, these empirical models do not capture the full effect of changes to the plasma composition and geometry. Here, we optimize the ARC tokamak (Howard, et al., JPP, Submitted 2025) with respect to effective charge state (Zeff), main ion fraction (fmain), pedestal density (neped), elongation (κ), triangularity (δ), and squareness (ζ). Modeling is performed using MAESTRO, an integrated modeling tool that uses TGLF as the transport model (Staebler, NF, 2021) and EPED for pedestal predictions (Snyder, NF, 2011). We find that increasing the impurity content of the plasma can increase fusion power performance. Bayesian optimization is employed to expedite the search for the best operating point with relatively expensive physics-based models. In the future, we will increase the number of free parameters, pushing toward enabling design work to start from physics-based turbulent transport models.
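
        The Bayesian optimization loop described above can be sketched with a one-parameter toy problem: a Gaussian-process surrogate plus an expected-improvement acquisition, where each acquisition stands in for one expensive integrated-modeling run. The objective, kernel length scale, and bounds below are invented and unrelated to the actual ARC study.

```python
import numpy as np
from math import erf, sqrt, pi

def fusion_power(z):
    """Toy objective: performance vs. a single composition knob (illustrative)."""
    return 1.0 - (z - 0.6) ** 2

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(Xt, yt, Xq, noise=1e-6):
    """GP mean and standard deviation on query points Xq."""
    K = rbf(Xt, Xt) + noise * np.eye(len(Xt))
    Ks = rbf(Xq, Xt)
    mu = Ks @ np.linalg.solve(K, yt)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    z = (mu - best) / sd
    cdf = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in z]))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (mu - best) * cdf + sd * pdf

X = np.array([0.1, 0.9])           # two initial "simulations"
y = fusion_power(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(8):                 # each step = one expensive model run
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, fusion_power(x_next))

best_z = X[np.argmax(y)]           # converges near the true optimum at 0.6
```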

        Acknowledgements: This work is supported by Commonwealth Fusion Systems, under RPP020.

        Speaker: Audrey Saltzman (MIT PSFC)
      • 29
        Surrogate model generation using high-fidelity CGYRO predictions enabled by active learning techniques

        A. Ho¹, L. Zanisi², B. de Leeuw³, V. Galvan¹, P. Rodriguez-Fernandez¹, N. T. Howard¹
        ¹MIT Plasma Science and Fusion Center, Cambridge, MA, USA
        ²Culham Centre for Fusion Energy – United Kingdom Atomic Energy Authority, Abingdon, UK
        ³Radboud University, Nijmegen, Netherlands
        This study applies uncertainty-aware neural network architectures in combination with active learning (AL) techniques [L. Zanisi et al., NF 2024] to construct efficient datasets for data-driven surrogate model generation with the simulator in the loop. This was applied to the tokamak plasma turbulent transport problem, specifically the QuaLiKiz code [J. Citrin et al., PPCF 2017], generating surrogates aimed at accelerating profile predictions in transport solvers such as PORTALS [P. Rodriguez-Fernandez et al., NF 2024]. This exercise focuses on small datasets as a proxy for using more expensive gyrokinetic codes, e.g. CGYRO [J. Candy et al., Jour. Comp. Sci. 2016], which can be bootstrapped by leveraging gyrokinetic simulation databases. A combination of classifier and regressor models was trained for all turbulent modes (ITG, TEM, ETG) and all transport fluxes provided by QuaLiKiz.
        Starting with an initial training data set of 10² points, 45 iterations were performed, resulting in a final set of 10⁴ points and models with an F1 classification performance of ~0.8 and an R² regression performance of ~0.8 on an independent test set across all outputs. Additionally, the overall technique is generalizable to create surrogate models beyond the primary domain being studied. This was demonstrated by applying this pipeline to the EPED pedestal stability model [P. Snyder et al., NF 2011], obtaining an R² of ~0.85 after 22 iterations with a final data set of 10³ points.
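
        The acquisition loop described above can be sketched with a query-by-committee criterion: a bootstrap ensemble stands in for the uncertainty-aware networks, and a cheap analytic `simulator` stands in for QuaLiKiz or CGYRO. The toy target is a cubic, chosen so the polynomial surrogate class can represent it exactly; everything here is illustrative and not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)


def simulator(x):
    """Hypothetical stand-in for an expensive flux evaluation."""
    return x**3 - 2.0 * x


pool = np.linspace(-2.0, 2.0, 401)            # candidate input points
X = list(rng.choice(pool, 4, replace=False))  # small seed set
y = [simulator(x) for x in X]


def committee_std(X, y, pool, members=8, deg=5):
    """Bootstrap ensemble of polynomial fits; spread is an uncertainty proxy."""
    preds = []
    for _ in range(members):
        idx = rng.integers(0, len(X), len(X))
        coef = np.polyfit(np.array(X)[idx], np.array(y)[idx], deg)
        preds.append(np.polyval(coef, pool))
    return np.std(preds, axis=0)


for _ in range(20):                           # AL loop: query the most disputed point
    s = committee_std(X, y, pool)
    s[np.isin(pool, X)] = 0.0                 # do not re-query acquired points
    xq = pool[np.argmax(s)]
    X.append(xq)
    y.append(simulator(xq))

coef = np.polyfit(X, y, 5)                    # final surrogate on the acquired set
rmse = np.sqrt(np.mean((np.polyval(coef, pool) - simulator(pool)) ** 2))
```

The loop spends its simulator budget where the ensemble disagrees, which is the same logic that lets the abstract's pipeline grow from ~10² to ~10⁴ points efficiently.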
        This work is supported by Commonwealth Fusion Systems under RPP020 and DOE FES under Award DE-SC0024399

        Speaker: Aaron Ho (MIT PSFC)
    • Simulation and Modelling Techniques
      • 30
        Digital Design and Safety Evaluation of Superconducting Magnet Systems for Fusion Device
        Speaker: JINXING Zheng
      • 31
        REIMS - Riemann Explicit Implicit Magnet Simulator, new tool for calculating superconductor performance

        A new thermo-hydraulic simulation tool, REIMS (Riemann Explicit Implicit Magnet Simulator), has recently been developed at ITER to model the behavior of superconducting magnet systems. REIMS can simulate normal operation scenarios as well as magnet cool-down, and work is underway to extend its capabilities to quench studies, with promising results for predicting stability and margin to quench. The code achieves accuracy comparable to, and in some cases better than, existing tools while requiring orders of magnitude less computation time. This combination of speed and accuracy makes REIMS a powerful digital innovation for supporting magnet design, validation, and operational studies in fusion research.

        Speaker: Dr Jacek Kosek (ITER Organization)
      • 32
        Cardinal: Multiphysics and Multiscale Simulation for Fusion Applications

        Modeling and simulation play a central role in fusion technology development by allowing large parameter space exploration for design optimization, focused design of experiments and instrumentation, and safety analysis. This talk will provide an overview of the Cardinal multiphysics framework, which integrates OpenMC, NekRS, and MOOSE for high-fidelity simulation. Several fusion-related projects including adaptive mesh refinement of Monte Carlo tallies, tritium extraction systems, and liquid metal MHD will be described. An ongoing project to bridge fast-running neutronics analysis with materials optimization, and an open materials database through ARPA-E, will also be described.
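
        Frameworks in this family typically couple their constituent solvers by fixed-point (Picard) iteration, passing fields back and forth until they agree. A minimal scalar sketch of that pattern, with invented `heating` and `thermal` modules rather than OpenMC, NekRS, or MOOSE:

```python
# Picard (fixed-point) iteration between two "physics" modules: a heating
# module whose power depends on temperature, and a thermal module that maps
# power to temperature. Toy stand-ins, not Cardinal itself.

def heating(T, q0=100.0, a=0.002):
    """Volumetric heating with a simple negative temperature feedback."""
    return q0 * (1.0 - a * T)


def thermal(q, R=0.5):
    """Temperature produced by a given power (lumped thermal resistance R)."""
    return R * q


T = 0.0
for _ in range(100):                 # alternate the two solvers until consistent
    T_new = thermal(heating(T))
    if abs(T_new - T) < 1e-10:       # converged: the fields are self-consistent
        break
    T = T_new

# Closed form for this linear toy problem: T* = R*q0 / (1 + R*q0*a)
T_exact = 0.5 * 100.0 / (1.0 + 0.5 * 100.0 * 0.002)
```

Real multiphysics couplings replace the two scalar functions with full field solves, but the convergence structure of the outer loop is the same.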

        Speaker: April Novak (University of Illinois, Urbana-Champaign)
    • 10:25
      Coffee Break
    • Vendor Session
      • 33
        Application of Digital Engineering and MBSE Methodology to De-Risk Component Design

        The presentation will show how to use a Digital Engineering Environment (DEE) to demonstrate the value of Digital Engineering (DE) across the full lifecycle of a system with a traceable digital thread. We will show the use of digital artifacts across Model-Based Systems Engineering (MBSE), Multidisciplinary Analysis and Optimization (MDAO), including the use of Digital Twins for applications like Condition Based Maintenance (CBM).

        The presentation will focus on the design of a heat exchanger that includes: (1) the implementation of an acceptable design utilizing MBSE within the Ansys DEE, (2) design space exploration using MDAO, (3) the use of digital twins within the Ansys DEE for operation optimization and condition-based maintenance, and (4) the implementation of a Simulation Process and Data Management (SPDM) system to enable simulation data and models to be preserved and managed in a structured, traceable, and reusable manner. The SPDM system also allows teams to work together effectively using the available collaboration tools even when the team is geographically distributed.

        Speaker: Daniel Iliescu (Ansys-Synopsys)
      • 34
        Use Cases of Merit for AI in Simulation

        This talk looks at how AI is changing the way we do simulation and design, with CFD-driven digital twins as the central thread and nuclear systems as a motivating example. Rather than treating AI as a black box “accelerator,” we’ll focus on how different strands of AI (especially geometric deep learning) can be used to work with our existing numerical methods and engineering judgment. In particular, we’ll look at how models that operate directly on meshes, fields, and graphs open new possibilities for CFD, turbulence modeling, and coupled multi-physics problems. We’ll anchor the discussion in two complementary perspectives. The first is strategic: where does AI realistically fit into our simulation roadmaps (for example, the 2030 vision for CFD and digital twins)? The second is practical: a case study which illustrates specific ways AI can support engineering teams today, from guiding sampling and optimization campaigns to improving diagnostics and decision support. Because this is a safety-relevant domain, we will spend time on how to use these tools responsibly (as we want these complex models to make more explainable predictions to engineers rather than data scientists alone). We’ll also briefly touch on foundation models and generative AI in engineering—what is hype, what is real, and how these models might add value for scientific and engineering tasks. Overall, the emphasis is on realistic, engineering-minded use of AI that enhances, rather than replaces, traditional simulation and expertise.

        Speaker: Justin Hodges
      • 35
        (title pending) NVIDIA
        Speaker: Tom Gibbs (NVIDIA)
    • Lunch
    • Simulation and Modelling Techniques
      • 36
        Virtual Tokamak for Integrated Physics and Engineering Analysis

        Recent progress in the development of a virtual tokamak platform is presented, which aims to integrate physics simulations with engineering analyses for fusion R&D. The platform, named WILL (Versatile Virtual platform for Integrated fusion simuLation and anaLysis), provides such integration by flexibly and seamlessly bridging data from tokamak operations, experiments, and simulations. WILL builds upon the enabling technologies and software originally developed for the Virtual KSTAR project, but restructures and modularizes them by separating KSTAR-specific features and generalizing the framework for application to arbitrary tokamak devices.
        The key technologies and software incorporated into WILL include: (1) unstructured meshes for discretizing arbitrarily complex 3D models and storing physics and engineering data, (2) a dedicated data framework for managing diverse fusion data, built upon IMAS and HDF5 technologies, (3) Python and C++ libraries for 3D data analysis, such as collision detection between dynamic and static objects, and (4) 3D visualization software based on the Unreal and Unity3D graphics engines.
        With these enabling technologies, WILL supports full 3D fusion simulations and analyses with model fidelity suitable for engineering R&D. Several examples of such 3D simulations and analyses are reported, and ongoing efforts are also introduced that aim to establish WILL as a platform for virtual experimentation.

        Speaker: Dr Chanyoung Lee (Korea Institute of Fusion Energy)
      • 37
        Fusion Middleware: a comparison of state-of-the-art systems in research, industrial manufacturing, and cloud

        Middleware is the connective layer that fuses data, models, and control across heterogeneous systems, enabling end-to-end workflows spanning a large number of components. This talk compares the state of the art in middleware across three domains: large-scale experimental laser and MCF facilities, industrial manufacturing, and cloud platforms, highlighting similarities and differences. We also discuss the main technologies behind these middleware platforms and the conceptual frameworks behind these systems.

        In research settings, middleware emphasizes agility and reproducibility. Software systems such as EPICS and TANGO coordinate large-scale sensor and actuator networks while being easily extensible. The data generated in these platforms is typically extremely high volume due to the resolution needed to capture plasma effects. This puts demanding requirements on the data transfer mechanism, the networking backbone, and the control loop systems. The challenge is that academic stacks often optimize flexibility over operational determinism and long-horizon lifecycle management.

        Industrial manufacturing focuses on a different problem: integrating OT and IT under strict requirements for reliability, real-time performance, and safety. Here, middleware centers on deterministic messaging, standardized industrial protocols, and secure edge gateways bridging brownfield equipment. Systems are becoming increasingly hybrid between cloud and edge computing. The challenge is balancing vendor ecosystems and lock-in against interoperability while meeting certification, maintenance, and uptime SLAs.

        Cloud-native middleware privileges elasticity, global reach, and managed governance. This domain is largely driven by zero-trust infrastructure, which enables secure systems on a global scale. In this area, hybrid patterns (edge preprocessing combined with in-cloud analytics) emerge as the default. However, the data inside global applications is often not as volatile, unstructured, and dense as in manufacturing systems or large-scale research setups.
        Across domains, common patterns include publish–subscribe messaging, schema-first contracts, central orchestration systems, and ETL data pipelines with lifecycle-managed schemas. Event-driven architectures enable data extraction and archiving in large database systems for structured and unstructured data. We discuss a common set of technologies and patterns used in all domains and survey convergence trends, for example OPC UA and MQTT bridging to cloud-native streams. Attendees will leave with a general taxonomy of middleware, a clearer overview of current open-source projects, and a review of common trends across all discussed domains.
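
        The publish-subscribe and event-driven patterns surveyed above reduce to a small core. The sketch below is a minimal in-process broker with an archival stage and an edge filtration stage chained over topics; real middleware such as EPICS, TANGO, or an MQTT broker adds transport, QoS, schemas, and lifecycle management on top of this shape:

```python
from collections import defaultdict


class Broker:
    """Minimal in-process publish-subscribe broker (illustrative only)."""

    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subs[topic]:
            handler(message)


broker = Broker()
archive, kept = [], []

broker.subscribe("diag/raw", archive.append)       # archival stage sees everything
broker.subscribe(                                  # edge filtration stage re-publishes
    "diag/raw",
    lambda m: broker.publish("diag/filtered", m) if m["snr"] > 3 else None,
)
broker.subscribe("diag/filtered", kept.append)     # downstream consumer

for snr in (1, 5, 4, 2):                           # four hypothetical acquisition events
    broker.publish("diag/raw", {"snr": snr})
```

Chaining stages through topics rather than direct calls is what makes these pipelines extensible: a new consumer subscribes without touching the producers.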

        Speaker: Moritz Kröger
      • 38
        FREDA: A Multi-Fidelity Plasma-Engineering Integrated Modeling Platform for Fusion Reactor Design and Assessment

        Integrated modeling of fusion reactor design is essential for predicting self-consistent multi-physics loads (thermal, electromagnetic, plasma, neutron, etc.), assessing technical feasibility, quantifying uncertainties, and enabling design trade-off studies to de-risk FPP concepts and guide meaningful validation experiments. The Fusion REactor Design and Assessment (FREDA) SciDAC project is developing a flexible component-based integrated plasma and engineering modeling framework to support end-to-end, multi-fidelity reactor design workflows.

        FREDA combines theory-based physics and engineering models from both the fusion and fission communities for self-consistent, iterative assessment and optimization. The framework builds on the IPS-FASTRAN plasma simulation backbone, incorporating newly coupled Core/Edge Pedestal/Scrape-Off-Layer models for predictive plasma performance and self-consistent wall and divertor heat and particle loads from charged particles, neutrals, and photons. FREDA also includes the FERMI suite of multiphysics engineering analyses (structural mechanics, thermal hydraulics, computational fluid dynamics, electromagnetics, and nuclear performance) to evaluate components such as the first wall, blanket, and magnets. Parametric CAD representation is generated with a newly developed TRACER tool, and a new AI/ML divertor meshing capability enables scans in geometry and operational constraints. Example workflows will be shared.

        Supported by the FREDA SciDAC and US DOE DE-AC05-00OR22725

        Speaker: Cami Collins (Oak Ridge National Laboratory)
    • 15:25
      Coffee Break
    • Simulation and Modelling Techniques
      • 39
        Electromagnetic Simulations in MOOSE using the MFEM Finite Element Library

        The ability to generate actionable qualification data by modelling the coupled multiphysics of components with complex geometries in fusion-relevant environments is required in order to de-risk candidate component designs prior to installation in fusion devices. Such models demand the use of highly scalable tools on HPC systems, capable of solving coupled problems consisting of billions of degrees of freedom in parallel, without imposing excessive licensing costs for end users.

        In this talk, we shall present recent capabilities added to the open-source MOOSE framework [1] for the simulation of coupled large-scale electromagnetics problems, enabled via the integration of the MFEM finite element library [2]. Such capabilities include the use of arbitrary order vector-valued finite element types spanning the de Rham complex, the use of complex variables and integrators, and support for problem set-up, assembly, and solution on CPU or GPU architectures. We shall demonstrate the application of these new capabilities in solving magnetostatic and magnetodynamic problems in both the time and frequency domain, and report on our ongoing work extending these capabilities to nonlinear scenarios.

        [1] Harbour, L., Giudicelli, G., Lindsay, A. D., et al.; 4.0 - MOOSE: Enabling massively parallel multiphysics simulation, SoftwareX, 31, (2025)

        [2] Anderson, R., Andrej, J., Barker, A., et al.; MFEM: A modular finite element methods library, Computers & Mathematics with Applications, 81, (2021)

        This work has been funded by the Fusion Futures Programme. As announced by the UK Government in October 2023, Fusion Futures aims to provide holistic support for the development of the fusion sector.

        Speaker: Alexander Blair (UK Atomic Energy Authority)
      • 40
        Validation of FESTIM Hydrogen Transport Modeling in FLiBe Through HYPERION Permeation Data

        Molten-salt-based breeder blankets for fusion reactors offer distinct advantages by combining tritium breeding, heat removal, and tritium extraction within a single system. The successful design of these blankets relies on accurate characterization of key transport properties, especially the permeability of molten salts to hydrogen. The HYPERION experiment at MIT PSFC was established to directly measure hydrogen isotope permeation through FLiBe under well-controlled conditions. Its configuration, involving hydrogen transport across both metallic (Ni) and salt (FLiBe) domains, provides data well-suited for benchmarking computational tritium transport models.
        FESTIM (Finite-Element Simulation of Tritium in Materials) is an open-source finite-element framework designed for multi-material, multi-species hydrogen isotope transport. In this study, FESTIM is applied to reproduce the permeation flux curves measured in HYPERION across a range of temperatures and operating conditions. By varying FLiBe permeability in the simulations, the model successfully reproduces experimental permeation behavior, with simulated permeability data in good agreement with the measurements.
        This benchmarking exercise demonstrates FESTIM’s reliability as a tritium transport simulation tool and underscores its capability as a predictive modeling framework applicable to molten salt environments. The results highlight FESTIM’s value in interpreting experimental results, constraining uncertain parameters, and supporting the design of tritium-breeding blankets. More broadly, they emphasize the importance of combining experimental measurements with computational modeling to refine material property characterization and reduce uncertainties in future blanket-relevant studies.
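
        For intuition, the permeation physics being benchmarked reduces, in its simplest form, to diffusion through a slab with fixed boundary concentrations. The sketch below is a bare explicit finite-difference version with illustrative parameter values, not FESTIM (a finite-element code); at steady state the downstream flux approaches D·c0/L:

```python
import numpy as np

# Explicit finite-difference diffusion through a 1-D slab with a fixed upstream
# concentration and zero downstream concentration (illustrative values only).
D, L, c0 = 1e-9, 1e-3, 1.0           # diffusivity (m^2/s), thickness (m), upstream conc.
N = 50
dx = L / N
dt = 0.4 * dx**2 / D                 # explicit stability requires D*dt/dx^2 <= 0.5
c = np.zeros(N + 1)
c[0] = c0                            # upstream boundary; c[-1] stays 0 downstream

t_diff = L**2 / D                    # characteristic diffusion time
for _ in range(int(5 * t_diff / dt)):  # run several diffusion times to steady state
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

flux = D * (c[-2] - c[-1]) / dx      # downstream permeation flux
flux_ss = D * c0 / L                 # analytic steady-state value
```

Fitting a measured flux transient with the diffusivity (or, in the HYPERION case, the salt permeability) as the free parameter is the essence of the benchmarking exercise described above.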

        Speaker: Huihua Yang (MIT PSFC)
      • 41
        Integration of Customized MHD Modeling into a Digital Engineering Workflow for Advanced Liquid Metal Blankets

        The advancement of digital engineering workflows is critical to accelerating the design, validation, and deployment of next-generation fusion energy systems. In this context, we present the integration of a customized magnetohydrodynamics (MHD) modeling capability into a general-purpose computational fluid dynamics (CFD) framework, enabling scalable, high-fidelity simulations of liquid metal (LM) blanket concepts under realistic fusion reactor conditions. Traditional MHD codes often fall short in handling complex 3D geometries and multi-physics environments characteristic of fusion applications. By modifying ANSYS CFX, we extend its capabilities to simulate MHD flows subjected to high Hartmann numbers, MHD turbulence, and variable material conductivities, while coupling with thermal, structural, and electromagnetic loads.
        This work demonstrates how digital engineering platforms can unify design, analysis, and validation workflows. The customized CFX solver was validated against experimental and analytical benchmarks for MHD pipe flows across a range of wall conductance ratios and magnetic field strengths. We apply this tool to simulate Kyoto Fusioneering’s silicon carbide composite (SiC_f/SiC) LM blanket concept, incorporating realistic plasma-facing heat loads, neutronic volumetric heating, and spatially varying magnetic fields. The simulation outputs are being used to inform instrumentation placement and experimental design for mock-up testing at the UNITY-1 blanket test facility.
        Our approach highlights the potential of digital engineering to integrate advanced physics modeling, experimental validation, and system-level design in a unified environment. The outcome is a faster, more reliable pathway to evaluate fusion blanket technologies and accelerate the transition from concept to testing within the fusion development timeline.
        This work is in-part supported by the United States Department of Energy through the INFUSE program.

        Speaker: Andrei Khodak
    • Data Techniques
      • 42
        An ML-based design approach for Fusion Energy targets & components

        Commercialization of fusion technologies, ranging from fusion power plants to fusion-powered propulsion systems, requires advanced design and engineering capabilities that tightly integrate physics, materials, and manufacturing processes. Traditionally, the complexity of design and the complexity of manufacturing have been treated separately due to the limitations of existing digital tools and the processing capability of the human mind. This disconnected approach results in sequential, and often repeated, workflows that increase costs and extend project timelines.
        One of the principal cost drivers in fusion systems is the need to design and produce components at scale with the requisite uniformity and performance reliability. Recent advances in manufacturing, such as additive manufacturing techniques and architected materials, demonstrate significant potential but remain limited by process variability and the lack of fully integrated digital frameworks.
        In this presentation, we describe a multi-objective design optimization and digital twin framework that employs scientific machine learning (ML)-based process modeling. Our algorithm couples physics-based AI models, such as Fourier neural operators (FNOs) and physics-informed neural networks (PINNs) that simulate structural, thermal, fluidic, and electromagnetic behaviors, with experimental manufacturing data across multiple fusion-relevant components. These ML surrogates serve as computationally efficient replacements for conventional solvers, enabling real-time predictions of performance and manufacturability. This approach not only accelerates design exploration but also supports adaptive manufacturing control, where feedback from production is used to refine both the digital twin and the physical part simultaneously. The application of this approach is discussed using the IFE target and stellarator plasma vessel as examples. Additional applications in other domains are also discussed.
        Such integrated frameworks are critical for enabling scalable manufacturing of fusion energy targets, high-performance components for fusion power plants, and novel applications such as fusion-powered propulsion systems. More broadly, this work points toward a path where scientific AI accelerates the co-design of fusion technologies—linking physics, manufacturing, and performance—to reduce costs, shorten timelines, and expand the reach of fusion into energy, aerospace, and defense applications.

        Speaker: Vignesh Perumal (CAMINNO)
      • 43
        Multiphysics Modeling of Fusion Energy Systems Using the Software for Advanced Large-scale Analysis of MAgnetic confinement for Numerical Design, Engineering & Research (SALAMANDER) Computational Tool

        Addressing the fusion science and technology development needs of the accelerating fusion energy deployment timeline requires a deep understanding of the interaction between materials and the complex physics and engineering processes in the extreme fusion environment. Plasma facing materials, for example, are subject to extreme thermal loads, repeated thermal shocks, and bombardment by 14 MeV neutrons, plasma ions, and neutral particles (deuterium, tritium, and helium), whose effects on material and system performance are complex. Advanced multiphysics modeling and simulation capabilities offer great potential for accelerated scientific and engineering studies through a high-fidelity, multiscale approach resolving these convoluted interactions. The open-source Software for Advanced Large-scale Analysis of MAgnetic confinement for Numerical Design, Engineering & Research (SALAMANDER) framework addresses these needs by leveraging the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework to model the environment and response of plasma facing materials. SALAMANDER integrates various MOOSE capabilities, including the heat transfer, solid mechanics, and thermal hydraulics modules, along with MOOSE-based applications such as Cardinal (neutronics and computational fluid dynamics) and TMAP8 (Tritium Migration Analysis Program, version 8). SALAMANDER also supports dedicated plasma edge modeling capabilities for plasma-material interaction simulations. By capturing these physics, SALAMANDER can accurately predict the fusion environment and simulate component and system performance.
        We will introduce SALAMANDER as a MOOSE-based multiphysics capability and present its current capabilities. We will describe an example case for a divertor monoblock focusing on interactions between its environment and component performance, as well as future plans focused on expanding ongoing TMAP8 efforts to higher fidelity through multiphysics simulations. This example will highlight SALAMANDER’s modular design and how it enables collaborations for massively parallel multiphysics simulations. SALAMANDER is being developed following the INL Software Quality Assurance plan PLN-4005, ensuring rigorous verification, continuous integration, and fitness for purpose. Its open access enables community-driven development and wide usage. These features support SALAMANDER in advancing fusion research, design, and performance evaluation.

        Speaker: Pierre-Clément Simon (Idaho National Laboratory)
      • 44
        A Cross Machine and Parametric Neural Operator Surrogate Model for MHD Simulations

        High-fidelity nonlinear MHD simulations, such as those performed with the M3D-C1 code, are essential for understanding plasma instabilities and disruption dynamics but remain prohibitively expensive for optimization tasks and large-scale parametric studies. We present a neural operator-based surrogate model that enables both cross-machine generalization and parametric extrapolation, offering a computationally efficient alternative for disruption prediction and mitigation research. The surrogate model advances the system by taking the field functions and source terms at the current time step and predicting their evolution for the next time step. To achieve device-independent generalization, field functions are mapped from physical space into a normalized computational domain via conformal mapping, enabling geometry-agnostic training across fusion devices. Parameter extrapolation is achieved using an equation-recast strategy, which treats deviations between training and target parameters as perturbation sources. This allows accurate extrapolation while strictly preserving physics constraints, significantly reducing data requirements. Training on a single plasma state is sufficient for the surrogate to generalize across diverse parameter configurations. The developed surrogate can be integrated into plasma instability control systems for rapid state prediction or integrated into the M3D-C1 solver as a preconditioner to accelerate convergence. This cross-machine and parametric surrogate model provides a reliable and interpretable pathway for advancing MHD simulations.
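
        The next-step prediction idea can be illustrated with a linear stand-in: fit a one-step operator to snapshot pairs by least squares (a DMD-style baseline, not the neural operator of this work) and roll it out autoregressively:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# "Ground truth" one-step dynamics: damped diffusion on a periodic ring,
# a toy linear stand-in for a single M3D-C1 time step.
I = np.eye(n)
A = 0.98 * (I + 0.2 * (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0) - 2.0 * I))

# Training data: snapshot pairs (state, next state) from random states
U = rng.normal(size=(n, 200))
V = A @ U
A_hat = V @ np.linalg.pinv(U)        # least-squares next-step operator

# Autoregressive rollout from an unseen initial state
u_true = u_pred = rng.normal(size=n)
for _ in range(10):
    u_true = A @ u_true
    u_pred = A_hat @ u_pred
err = np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
```

A neural operator replaces the linear map with a learned nonlinear one, and the geometry normalization described in the abstract is what lets a single such operator serve multiple devices.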

        Speaker: Qiyun Cheng (Massachusetts Institute of Technology)
    • 10:25
      Coffee Break
    • Simulation and Modelling Techniques
      • 45
        Data-Driven Prediction of Parametric Sensitivities of Stellarator Blanket Performance

        Design optimization of stellarator blanket shapes is a high-dimensional, computationally expensive black box problem. Gradient-based optimization methods are well suited to find optimal solutions for this problem efficiently, when focusing on neutronic and basic economic metrics. However, due to the lack of spatial derivative information in the Monte Carlo radiation transport kernel, gradients must be estimated for each design point. For such a high-dimensional application, these gradients are themselves computationally expensive to estimate directly if using a method such as finite difference. To alleviate compounding computational expense in a proposed gradient-based approach, a machine learning model can be trained to predict parametric sensitivities at a design point. The parametric sensitivity field can then be used in place of gradient information. Using intelligent design of experiments, an efficient set of training data can be generated to reduce the total cost of this surrogate-assisted approach below that of a purely gradient-based approach. This presentation focuses on the training and validation of a parametric sensitivity model using gradient boosted regression trees.
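
        A gradient-boosted regression tree model is simple enough to build from scratch for intuition. The sketch below boosts depth-one stumps against squared-error residuals on a hypothetical one-dimensional sensitivity dataset (a cosine standing in for a finite-difference gradient field; real inputs would be the high-dimensional blanket shape parameters):

```python
import numpy as np


def fit_stump(x, r):
    """Best single-split regression stump on residuals r (1-D input)."""
    best_sse, best = np.inf, None
    for t in np.unique(x)[:-1]:          # last threshold would put all points left
        left = x <= t
        pred = np.where(left, r[left].mean(), r[~left].mean())
        sse = ((r - pred) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, r[left].mean(), r[~left].mean())
    return best


def gbrt_fit(x, y, rounds=200, lr=0.1):
    """Gradient boosting with stump weak learners under squared-error loss."""
    base = y.mean()
    f = np.full_like(y, base)
    stumps = []
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - f)  # fit the current residuals
        f = f + lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return base, lr, stumps


def gbrt_predict(model, x):
    base, lr, stumps = model
    out = np.full_like(x, base, dtype=float)
    for t, lv, rv in stumps:
        out += lr * np.where(x <= t, lv, rv)
    return out


# Hypothetical training data: a design parameter theta and the finite-difference
# sensitivity it produced.
theta = np.linspace(0.0, 3.0, 60)
sens = np.cos(theta)
model = gbrt_fit(theta, sens)
rmse = np.sqrt(np.mean((gbrt_predict(model, theta) - sens) ** 2))
```

In the surrogate-assisted workflow above, predictions like these replace finite-difference gradient estimates, so each optimizer step avoids a batch of Monte Carlo transport runs.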

        Speaker: Connor Moreno (University of Wisconsin - Madison)
      • 46
        Tools to Support Geometry and Meshing Needs for Fusion Energy System Simulation Codes

        Accurate simulations of the systems being designed and built by fusion energy companies require consideration of the complex geometries of the system components. This presentation will cover tools being developed to support steps in fusion energy system simulation workflows, including (i) analysis geometry construction for general 3D configurations, (ii) fully automatic generation of well-controlled meshes and adaptive mesh control to ensure simulation result fidelity, and (iii) high-level tools to support fusion physics analysis. The geometry construction tools support CAD geometry clean-up and defeaturing, and the construction of analysis domain geometry combining CAD and physics geometry. The automatic mesh generators support creation of graded anisotropic meshes on arbitrary geometric domains and include specialized mesh generation tools used by specific tokamak and stellarator simulation codes. Adaptive mesh control is supported by fully parallel procedures for mesh refinement and coarsening that can be directly coupled into existing simulation codes. Massively parallel unstructured-mesh-based particle simulations are supported by a distributed mesh infrastructure that can scale both the particles and the mesh. The integration and use of these tools in fusion energy system simulation codes will be demonstrated.

        Speaker: Mark Shephard (Rensselaer Polytechnic Institute)
      • 47
        Fusion Energy System ARC System Modeling using SAM

        As the push to deploy fusion energy systems continues through public and commercial initiatives, the design and accident-scenario figures of merit with the highest influence on design and safety must be determined. Current fusion energy system designs involve either solid or liquid blanket systems that serve the purposes of tritium (fuel) management, neutron multiplication, and heat removal for power conversion. One concept, involving fluoride-based molten salts in fusion breeder blankets, has become a potential option: the Affordable, Robust, and Compact (ARC) energy system, which involves a liquid immersion blanket (LIB) design for fuel cycle and heat management. LIBs use a molten salt (i.e. FLiBe) as the working fluid, coolant, and fuel source in the heat transport system that removes heat from the plasma-facing components and blanket components. At the time of writing, a significant gap exists in both system- and component-level information for LIBs needed for open-source research and development of different enabling technologies.

        To address these gaps, the authors have investigated FLiBe-based LIBs using system- and component-level modeling with MOOSE-based tools and experimental investigations. The modeling work uses the US Department of Energy’s MOOSE-based System Analysis Module (SAM) code developed by Argonne National Laboratory. Leveraging SAM’s capabilities for modeling molten salt reactors, the system behavior of a prototypical ARC LIB has been analyzed for start-up, shutdown, steady-state, and pump-failure transients of the heat transport systems. For each scenario, the potential for structural failure of plasma-facing components and heat transport system components is analyzed to identify inappropriate design choices. In the reported work, we will discuss both operational transients and accident scenarios modeled using SAM and how future coupling with MOOSE tools will be useful for fusion energy system design. Additionally, the authors will discuss how the data produced from this study has provided a foundation for component-level studies using other DOE NEAMS tools and experimental efforts.

        Speaker: Lane Carasik (Virginia Commonwealth University)
      • 12:10
        Wrap-up / Discussion