
Consultancy Meeting on the Preparation of a Major FENDL Release

Description

The objective of this meeting is to review progress of FENDL-related activities and discuss preparatory steps towards the next major release of the library.

    • FENDL, General
      • 1
        Opening
        Speakers: Arjan Koning, Georg SCHNABEL (IAEA), Roberto Mario Capote Noy (IAEA NAPC-NDS)
      • 2
        Overview of FENDL: Library, processing methods and tools

        The Fusion Evaluated Nuclear Data Library (FENDL), coordinated by the IAEA’s Nuclear Data Section, is a key resource for nuclear data in fusion energy applications. The project involves a broad international collaboration to evaluate, process, validate, and document nuclear data relevant for both Monte Carlo and deterministic simulations.
        Recent developments have led to the preparation of FENDL-3.2c, which supersedes FENDL-3.2b. Main updates include revised evaluations for neutron, proton, and deuteron sub-libraries. For incident neutrons, tungsten isotopes were adopted from a corrected version of the ENDF/B-VIII.0 library, while Li-7 for incident protons was taken from JENDL-5. Additionally, some erroneous data were removed from several evaluations for incident deuterons.
        The data processing methodology was also updated. All evaluations - regardless of incident particle - are now processed using NJOY2016.74 with custom NDS/IAEA patches. Enhancements to NJOY2016 modules, such as HEATR, PURR, ACER, and RECONR, address issues in heating, damage calculations, and probability table generation in the unresolved resonance region.
        Extensive verification and validation (V&V) activities confirmed the quality of FENDL-3.2c through comparisons with experimental and computational benchmarks. The library is freely accessible via the NDS/IAEA website. To support users, the project has developed open-source tools, including endf-parserpy and endf-userpy, which facilitate flexible data processing, visualisation, and analysis.
        Looking ahead, the FENDL developers are preparing for a major release by integrating new evaluations, extending activation and covariance data, incorporating thermal scattering laws, and enhancing the automation of the V&V process. These efforts aim to ensure that FENDL continues to support the evolving needs of the fusion community with high-quality, reliable nuclear data.
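
        As a minimal sketch of how such files can be accessed with the open-source endf-parserpy package mentioned above (the filename is a placeholder and the exact dictionary layout should be checked against the package documentation):

        from endf_parserpy import EndfParser

        # Parse an ENDF-6 file into a nested Python dictionary; the filename is a placeholder.
        parser = EndfParser()
        endf_dict = parser.parsefile("n_26-Fe-56.endf")

        # Sections are keyed by MF and MT number; MF=3, MT=102 holds the radiative-capture
        # cross section. Inspect the available fields rather than assuming their names.
        capture = endf_dict[3][102]
        print(list(capture.keys()))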

        Speaker: Daniel Lopez Aldama
      • 3
        To the next major FENDL

        We reviewed issues presented at the 2023 FENDL meeting and summarized the requests for the next FENDL release concerning iron, copper, and beryllium data. We also examined an issue encountered by the F4E group in the analysis of the FNG tungsten experiment, which suggests that many files in FENDL-3.2c contain data that NJOY does not support.

        Speaker: Saerom Kwon
    • Evaluations
      • 4
        Recent Advances in Deuteron Nuclear Data Evaluation for Structural Materials: Iron Isotopes

        Since iron is one of the primary structural materials used in accelerators, accurate nuclear data on deuteron-induced reactions with iron are essential for evaluating radioactivity production and designing radiation shielding for the International Fusion Materials Irradiation Facility (IFMIF) and its precursor facilities. These data include production cross sections of key radioactive isotopes such as Mn-54, Co-56, and Co-57, as well as energy and angular distributions of emitted neutrons. However, high-precision deuteron nuclear data for iron isotopes were lacking until recently. In this study, we evaluated deuteron-induced nuclear data for the stable isotopes of iron (Fe-54, Fe-56, Fe-57, and Fe-58). The resulting evaluated data successfully reproduce not only the residual nucleus production cross sections but also the energy spectra of emitted neutrons. In this presentation, we provide an overview of the evaluation methodology and discuss the key results obtained.

        Speaker: Shinsuke Nakayama
    • Lunch break
    • Evaluations
      • 5
        Recent updates to the evaluated nuclear data files of structural materials
        Speaker: Andrej Trkov
      • 6
        The Evaluated Nuclear Reaction Data for Fusion at CNDC

        Fusion energy serves as an alternative to fossil fuels, providing zero-carbon operation and virtually inexhaustible fuel supplies. CENDL has not yet established a dedicated evaluated data library for fusion, although it already includes some nuclides required for fusion devices. This talk briefly describes research work related to the CENDL project and the Chinese nuclear reaction data platform, and introduces the theoretical nuclear reaction models, covariance evaluations, and the latest AI-based nuclear data evaluation techniques. The Chinese nuclear data improvements for the coming CENDL release range from neutron-proton scattering to fission. Neutron data for deuterium, Carbon-13, and isotopes of Cr, Fe, U, etc. relevant to fusion have been updated and are presented. Beyond the stable elements, some unstable isotopes of S, Cl, etc. have also been newly evaluated in CENDL. In addition, covariances of nuclear reactions for more than one hundred nuclides will be provided via the deterministic least-squares approach and the Unified Monte Carlo approach.
        This work was jointly performed by the staff of the China Nuclear Data Center. We would like to thank the China Nuclear Data Collaboration Network for its contributions. Gratitude is also extended to A. Koning, R. Capote, P. Dimitriou, N. Otsuka, G. Schnabel, et al. from the Nuclear Data Section of the International Atomic Energy Agency (IAEA).

        Speaker: Ruirui Xu
      • 7
        Evaluation of Cross Sections for Fast Ion Reactions with Beryllium in Helium and Hydrogen Fusion Plasmas
        Speaker: Jan Malec
      • 8
        Improving nuclear cross-sections with deep learning: DINo algorithm

        The DINo (Deep learning Intelligence for Nuclear reactiOns) algorithm is a deep neural network designed to improve predictions of nuclear reaction cross-sections, crucial for applications like particle therapy in cancer treatment. Trained on TENDL 2021 data, DINo significantly outperforms traditional models, especially for proton–carbon interactions, achieving better agreement with experimental data. It is efficient, delivering predictions within microseconds, and demonstrates strong generalization, even in data-scarce energy ranges. DINo holds promise for real-time applications and future extension to a broader range of nuclear reactions.
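
        As a purely illustrative sketch of the kind of model described above (this is not the actual DINo architecture, and the data are synthetic placeholders rather than TENDL values), a small feed-forward network in PyTorch could map (Z, A, incident energy) features to a cross section:

        import torch
        import torch.nn as nn

        # Illustrative only: a small feed-forward network mapping normalised
        # (Z, A, incident energy) features to a cross section, in the spirit of a
        # deep-learning cross-section predictor; NOT the actual DINo architecture.
        model = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

        # Synthetic stand-in for training data (real training would use TENDL points)
        x = torch.rand(256, 3)
        y = torch.rand(256, 1)
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        for epoch in range(100):
            optimiser.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimiser.step()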

        Speaker: Lévana Gesson
    • Benchmarks
      • 9
        Benchmark experiment for large-angle scattering cross sections of 14 MeV neutrons

        The scattering cross section at large angles varies depending on the nuclear data library. To validate these cross sections, we developed a benchmark experimental system and conducted measurements for several elements, including iron and tungsten. For iron, calculations based on the ENDF/B-VIII.0 library showed good agreement with the experimental results. In contrast, for tungsten, JENDL-5 and JEFF-3.3 provided better agreement, although all libraries appear to underestimate the experimental values.
        Currently, efforts are underway to reduce the statistical uncertainty in benchmark experiments for lithium. To address this issue, a new activation foil was selected. Magnesium was found to be the most suitable candidate for reducing the error; however, it is still insufficient for achieving the precision required for benchmarking.
        In future work, we plan to develop an improved experimental system by optimizing the materials and configurations of the surrounding components. Using this enhanced setup, benchmark experiments for lithium will be carried out with reduced statistical uncertainties.

        Speaker: Yamato Fujii
      • 10
        Tungsten leakage experiment in Rez
        Speaker: Michal Košťál
      • 11
        Recent and future benchmark experiments at the Frascati Neutron Generator (FNG)
        Speaker: Rosaria Villari
      • 12
        FNG Tungsten benchmark analysis using pointwise and multigroup FENDL 3.2 data
        Speaker: Ivan Kodeli
    • Lunch break
    • Verification & Validation
      • 13
        INDEN activities for FENDL: Fe, Ni, W, F
        Speaker: Roberto Mario Capote Noy (IAEA NAPC-NDS)
      • 14
        FENDL3.2c: V&V and impact over ITER analysis

        The contribution provides a comprehensive overview of the FENDL3.2c nuclear data library, its verification and validation (V&V) process, and its impact on ITER analysis.

        FENDL3.2c is the most up-to-date nuclear data library recommended by the IAEA for ITER applications. It includes transport data and activation data from TENDL-2017. The V&V process for FENDL3.2c is extensive, involving computational and experimental benchmarks. The library has resolved issues in energy/angle KERMA/DPA cross-sections and has been confirmed through benchmarks. The JADE V&V tool, driven by F4E and UKAEA, plays a significant role in the V&V process.

        FENDL3.2c has been shown to perform better overall compared to previous versions and other libraries. The library has been tested in various experimental setups, showing improved agreement with experimental data for materials like Ni, Cu, Fe, and W. Differences in results between FENDL3.2c and other libraries are noted, particularly in photon flux and SDDR (Shutdown Dose Rate) calculations.

        FENDL3.2c offers better overall performance and coverage of pathways and elements compared to previous versions. The library has been approved for use in ITER, although formal confirmation from the IAEA is pending. There is a need for more SDDR experiments and further V&V coordination for future releases.

        A new activation library for FENDL, focusing on the main isotopes for neutron transport, is also recommended, together with the provision of HDF5 nuclear data for OpenMC calculations.

        Speaker: Marco Fabbri
      • 15
        Nuclear data validation and verifications for IFMIF-DONES
        Speaker: Yuefeng Qiu
      • 16
        Application of the TUD-W Benchmark for Nuclear Data and Code Validation

        Authors: Marta Campos Fornés, Marco Fabbri*, Luigino Petrizzi, Aljaz Kolsek, Davide Laghi

        The presentation focuses on a validation study of tungsten nuclear data using the TUD-W SINBAD benchmark. The study uses two different transport codes, MCNP 6.2 and OpenMC 0.15.0, to compare spectral neutron and photon fluxes at various depths from a source emitting nearly isotropic neutrons at around 14 MeV. The conversion of the benchmark model from MCNP to OpenMC is discussed, including the translation of geometry and materials and the definition of tallies according to the benchmark documentation. The analysis of the neutron and photon spectra is presented, highlighting the agreement between experimental data and simulations, as well as the discrepancies observed for high-energy photons. The investigation of these discrepancies and the role of impurities in DENSIMET are also covered. The presentation concludes with the key outcomes and possible future work, emphasizing the validation of nuclear data, the conversion of the experimental benchmark model to the OpenMC format, and the need for further studies to assess impurity levels and understand the importance of different reaction channels.
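
        As an illustration of the OpenMC tally definitions mentioned above (the cell, energy grid and filters below are placeholders, not the actual TUD-W benchmark model), energy-binned neutron and photon flux spectra could be requested as follows:

        import numpy as np
        import openmc

        # Energy-binned neutron and photon flux tallies in a detector cell; the cell,
        # energy grid and geometry are placeholders, not the actual TUD-W model.
        energy_bins = np.logspace(np.log10(1.0e3), np.log10(2.0e7), 100)  # eV
        detector_cell = openmc.Cell(cell_id=101)

        neutron_tally = openmc.Tally(name="neutron_spectrum")
        neutron_tally.filters = [openmc.CellFilter(detector_cell),
                                 openmc.EnergyFilter(energy_bins),
                                 openmc.ParticleFilter("neutron")]
        neutron_tally.scores = ["flux"]

        photon_tally = openmc.Tally(name="photon_spectrum")
        photon_tally.filters = [openmc.CellFilter(detector_cell),
                                openmc.EnergyFilter(energy_bins),
                                openmc.ParticleFilter("photon")]
        photon_tally.scores = ["flux"]

        openmc.Tallies([neutron_tally, photon_tally]).export_to_xml()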

        Speaker: Marco Fabbri
      • 17
        Ranking of iron nuclear data performance for fusion application using a suite of benchmark experiments

        A suite of Fe benchmark integral experiments with D-T, Cf-252 and fast fission neutron sources from the IPPE, Rez and ASPIS facilities will be presented and used to rank the FENDL-3.1d and FENDL-3.2c libraries according to their performance in MCNP computational models compared with experimental data. Different metrics for evaluating the computational-to-experimental (C/E) differences will be assessed via the JADE post-processing features, as sketched below.
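
        A minimal sketch of such a computational-to-experimental comparison (illustrative numbers only; this is not JADE's actual post-processing code):

        import numpy as np

        # Illustrative computation of calculated-over-experimental (C/E) metrics with
        # made-up numbers; this is not JADE's actual post-processing code.
        experiment = np.array([1.05, 0.98, 1.10, 0.87])   # measured reaction rates
        exp_unc    = np.array([0.05, 0.04, 0.06, 0.05])   # absolute experimental uncertainties
        calculated = np.array([1.00, 1.01, 1.02, 0.90])   # MCNP results with a given library

        ce = calculated / experiment
        chi2 = np.sum(((calculated - experiment) / exp_unc) ** 2) / len(experiment)
        print("C/E:", ce, "reduced chi-square:", chi2)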

        Speaker: Alberto Milocco
    • Nuclear Data Needs and Applications
      • 18
        Simulation of the Radiation Environment at the National Ignition Facility
        Speaker: Hesham Khater
    • Information Technology, Tools, Codes
      • 19
        JADE v4, a more robust and expandable architecture for neutronics V&V

        In the last couple of years, JADE development has been pushed forward by Fusion For Energy and UKAEA with a new objective: to use the JADE framework also for code-to-code comparisons. This collaboration led to JADE v3, a first proof of concept that ported the tool to the Linux platform and, for the first time, allowed benchmarks to be run not only with MCNP but also with OpenMC. During this phase, the JADE environment was restructured and components with different scopes were made independent and separated into different repositories.

        Despite all these achievements, it became clear that adding new features to JADE was becoming increasingly hard because JADE was not initially conceived with this broader scope in mind. The code base had reached a high degree of complexity and was becoming a barrier to the onboarding of new developers. These reasons led to the work presented here, which focuses on the latest iteration of JADE development: JADE v4. The entire core architecture has been refactored following two main principles. The first is that JADE is now conceived from the very beginning as a framework for comparing code-library results, not simply library-to-library or code-to-code results. The second is that the csv data produced by JADE, which are essentially the results of the different simulations in a table format, are now a key interface. That is, all post-processing that is transport-code dependent ends with the production of the csv files. This makes the JADE post-processing (plots and Excel summaries) completely transport-code independent. A side benefit is that a better interface is created for the JADE web app. Moreover, this makes it possible to expand the JADE benchmark suite through configuration files and input templates alone, without the need for additional coding. Finally, the extensive refactoring also allowed a series of software design best practices to be implemented, which significantly increased JADE's robustness and expandability.

        The following is a list of the main achievements reached by JADE v4:

        • Re-think JADE as a code-library to code-library comparison tool, not code-to-code or library-to-library.
        • Full implementation of OpenMC (for the available benchmark inputs).
        • Implement a structure that clearly isolates (through inheritance of dedicated abstract classes) the extra Python code needed to support an additional transport code.
        • No additional programming for new benchmarks (including experimental ones); everything is handled by ad hoc YAML configuration files.
        • Raw data (csv files) are now a stronger interface: they are the end point of transport-code-dependent processing and the starting point of the JADE plotting and Excel outputs (sketched below). This facilitates integration with third-party post-processors such as the WebApp.
        • Software design best practices were applied during the refactoring.
        • The codebase has been reduced to about one third of its size in v3.1.0 (in number of lines).
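
        A minimal sketch of the "raw csv as interface" idea (column names and values are illustrative assumptions, not JADE's actual schema):

        import pandas as pd

        # Every transport code (MCNP, OpenMC, ...) ends its benchmark-specific processing
        # by writing results to a common tabular layout; all downstream JADE plotting and
        # Excel steps read only these files. Column names and values are invented here.
        raw = pd.DataFrame({
            "benchmark": ["Oktavian-Fe"] * 3,
            "tally": ["neutron_flux"] * 3,
            "energy_low_MeV": [0.1, 1.0, 10.0],
            "value": [2.1e-4, 8.7e-5, 3.2e-6],
            "rel_error": [0.02, 0.03, 0.08],
        })
        raw.to_csv("raw_mcnp_fendl32c.csv", index=False)

        # Transport-code-independent post-processing: compare two code-library results
        # read back from csv alone (the second file is a placeholder for e.g. an OpenMC run).
        a = pd.read_csv("raw_mcnp_fendl32c.csv")
        b = pd.read_csv("raw_mcnp_fendl32c.csv")
        print(a["value"] / b["value"])
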
        Speaker: Davide Laghi
      • 20
        Application of JADE as complete tool for automated nuclear data and particle transport code validation
        Speaker: Alex Valentine
      • 21
        The nuclear data arsenal and new approaches
        Speaker: Thomas Stainer
      • 22
        Modernizing Experimental Nuclear Data Access: The IAEA Nuclear Reaction Data Explorer

        The use of experimental nuclear reaction data in the EXFOR format~\cite{EXFORFormatsManual2015} and evaluated nuclear data in the ENDF-6 format~\cite{ENDF6FormatsManual2018} continues to be hindered by challenges in preprocessing and integration with modern computational workflows. These limitations pose significant barriers to applying advanced data analysis, modeling, and machine learning techniques in nuclear data research.

        To address these challenges and modernize access to such data formats, the IAEA Nuclear Data Section has developed the IAEA Nuclear Reaction Data Explorer, a comprehensive platform designed to support easy access to nuclear reaction data. Since its release in 2021, the system underwent a major update in March 2024, featuring:

        • Integration of EXFOR data via the open-source EXFOR Parser~\cite{Okumura2024, EXFORParser}
        • A redesigned EXFOR entry interface for enhanced user interaction
        • Updated ENDF-6 datasets sourced from ENDFTABLES
        • A new suite of RESTful APIs to enable programmatic access and workflow automation~\cite{dataexplorer}

        These developments align with the FAIR (Findable, Accessible, Interoperable, Reusable) data principles and the guidelines established in SG50~\cite{SG50-2023}, which emphasize open data and tools for community-driven science. The EXFOR Parser converts fixed-width formatted EXFOR data into JSON, which is further transformed into tabulated $(x, y, dx, dy)$ data stored as ASCII text files and in a SQL database. This process ensures compatibility with modern scientific computing environments.
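
        As a purely illustrative sketch of this tabulation step (the values are invented, and the real conversion is performed by the open-source EXFOR Parser, not by this snippet):

        import io
        import pandas as pd

        # Illustration only: turning a fixed-width data block (values invented) into the
        # tabulated (x, y, dx, dy) layout described above.
        fixed_width_block = io.StringIO(
            " 1.400000+7 5.200000-1 2.000000-2 1.500000-2\n"
            " 1.450000+7 5.050000-1 2.000000-2 1.400000-2\n"
        )
        table = pd.read_fwf(fixed_width_block, widths=[11, 11, 11, 11],
                            names=["x", "y", "dx", "dy"], dtype=str)
        # EXFOR floats use a compact exponent form (e.g. 1.4+7); expand it before casting
        table = table.apply(lambda col: col.str.replace(r"([+-]\d+)$", r"E\1", regex=True)
                                           .astype(float))
        print(table)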

        The renewed DataExplorer platform offers interactive web-based visualization and supports reproducible data access through its APIs. It provides a foundational step toward interdisciplinary use of nuclear data across physics, data science and applications.

        This presentation will outline the features and the system ecosystem. We will also discuss ongoing and planned developments to further modernize and expand the system’s capabilities for the global nuclear data community, including the FENDL community.

        Speaker: Shin OKUMURA (IAEA)
    • Lunch break
    • Information Technology, Tools, Codes
      • 23
        Making FENDL interplanetary
        Speaker: Georg Schnabel
      • 24
        TALYS and TENDL for the future of FENDL

        A short review of the current status of the TENDL nuclear data library will be given, with a few examples of relevance for fusion.
        TENDL is the result of so-called horizontal nuclear data evaluation.
        The aim is that each biennial release of the TENDL library leads to closer agreement with differential and integral experimental data.
        In principle, each target isotope is treated as equally important in a global approach, although important nuclides with a lot of experimental data obviously require more evaluation effort.
        Over the years, the isotopic evaluations of TENDL have become more and more competitive with the existing evaluations in the various world libraries, while the latter have not been created by a reproducible approach. For projectiles other than neutrons, ENDL is often a reference library.
        Some ideas for the future development of TENDL will be outlined.

        Speaker: Arjan KONING (IAEA)
    • Nuclear Data Needs and Applications
    • Discussion, Actions, Recommendations, Report drafting