Description
The integration of Artificial Intelligence (AI) with Uncrewed Ground Vehicles (UGVs) offers a promising approach to enhancing radiation detection capabilities. This paper presents preliminary work conducted under the Coordinated Research Project (CRP) J02018, “Nuclear Security Implications of Uncrewed Aerial, Ground, and Maritime Systems.” The Malaysian project aims to develop a proof-of-concept prototype that integrates AI-based image and video analysis with real-time radiation detection and parameter estimation for both autonomous and teleoperated object inspection using a UGV. The system is designed to support detection operations in scenarios such as the deliberate concealment of high-activity radioactive sources inside industrial containers or common objects. By enabling intelligent object-level assessment, the system contributes to early threat detection while minimizing the exposure of frontline personnel.
The system architecture is centered on a UGV control unit that integrates inputs from a camera, a compact gamma radiation detector system, and navigation sensors. A custom AI model will be used to classify inspected objects and determine whether they contain a radioactive source. If a source is detected, the model will further predict the radionuclide type, the estimated intensity, and the location of the source within the object. The AI model is structured into two components: (1) object identification and (2) radiation parameter characterization and prediction, both currently under independent development.
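To make the two-component structure concrete, the sketch below shows one possible way the outputs of the object identification and radiation characterization components could be represented and combined on the control unit. It is illustrative only; the class and function names (ObjectDetection, RadiationAssessment, summarize_inspection) are hypothetical and are not part of the current implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObjectDetection:
    """Output of component (1): object identification."""
    label: str                           # e.g. "luggage", "backpack"
    confidence: float                    # detection confidence in [0, 1]
    bbox: Tuple[int, int, int, int]      # pixel bounding box (x1, y1, x2, y2)

@dataclass
class RadiationAssessment:
    """Output of component (2): radiation parameter characterization."""
    source_present: bool
    radionuclide: Optional[str] = None                       # e.g. "Cs-137"
    intensity: Optional[float] = None                         # estimated intensity
    position_in_object: Optional[Tuple[float, float]] = None  # 2D position estimate (m)

def summarize_inspection(obj: ObjectDetection, rad: RadiationAssessment) -> str:
    """Combine both component outputs into a single operator-facing summary."""
    if not rad.source_present:
        return f"{obj.label}: no radioactive source detected"
    return (f"{obj.label}: {rad.radionuclide} suspected, "
            f"intensity ~{rad.intensity}, position {rad.position_in_object}")

# Example usage with placeholder values
print(summarize_inspection(
    ObjectDetection("luggage", 0.91, (120, 80, 420, 360)),
    RadiationAssessment(True, "Cs-137", 3.2e5, (0.4, 0.1)),
))
```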
The object identification model is being developed using images from public datasets. A customized dataset has been constructed with five object classes—luggage, backpacks, parcels, boxes, and small containers—representing common items that may be used to conceal radioactive sources. The dataset currently includes approximately 500–1000 annotated images per class and is designed to be scalable, allowing new object types to be added in the future. The model is trained using the Ultralytics YOLOv11 framework, with Python and OpenCV used for implementation and visualization. The next step is to integrate the object identification model with a Simultaneous Localization and Mapping (SLAM) system and the Robot Operating System (ROS) to enable spatial estimation of identified objects for autonomous inspection by the UGV.
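As an illustration of this training workflow, the sketch below shows a typical Ultralytics fine-tuning and inference loop with OpenCV-based visualization. The dataset definition file (inspection_objects.yaml), weight filenames, and hyperparameters are placeholders under assumed defaults, not the project's actual configuration.

```python
import cv2
from ultralytics import YOLO

# Fine-tune a pretrained YOLO11 model on the custom five-class dataset.
# "inspection_objects.yaml" is a placeholder dataset definition listing the
# image/label paths and the classes luggage, backpack, parcel, box, container.
model = YOLO("yolo11n.pt")
model.train(data="inspection_objects.yaml", epochs=100, imgsz=640)

# Run inference on a single frame (e.g. from the UGV camera) and visualize.
model = YOLO("runs/detect/train/weights/best.pt")   # best weights from training
frame = cv2.imread("sample_frame.jpg")
results = model(frame)
annotated = results[0].plot()                        # draw boxes and class labels
cv2.imwrite("annotated_frame.jpg", annotated)
```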
In parallel, the project extends earlier work on radiation parameter estimation, where a particle filter-based algorithm was previously implemented to estimate 2D source position and intensity. Current efforts focus on incorporating AI-based methods by designing source–detector geometries to simulate inspection scenarios and generate synthetic datasets for training.
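As one possible realization of this step, the sketch below generates a synthetic dataset by placing a point source at random positions and intensities and simulating detector counts over a fixed grid of inspection points, using an unshielded inverse-square model with Poisson counting noise. The geometry, units, and value ranges are illustrative assumptions rather than the project's actual simulation setup.

```python
import numpy as np

def simulate_counts(source_xy, intensity, detector_xy, background=5.0, dwell=1.0, rng=None):
    """Poisson-distributed counts at one detector position from a point source,
    using an unshielded inverse-square model plus a constant background rate."""
    rng = rng or np.random.default_rng()
    r2 = float(np.sum((np.asarray(detector_xy) - np.asarray(source_xy)) ** 2)) + 1e-6
    expected = (intensity / r2 + background) * dwell
    return int(rng.poisson(expected))

rng = np.random.default_rng(seed=0)

# Fixed 5 x 5 grid of detector (inspection) positions over a 2 m x 2 m area.
grid = [(x, y) for x in np.linspace(0.0, 2.0, 5) for y in np.linspace(0.0, 2.0, 5)]

# Each sample: a vector of counts (features) and the true position/intensity (labels).
dataset = []
for _ in range(1000):
    src = rng.uniform(0.0, 2.0, size=2)        # true 2D source position (m)
    inten = rng.uniform(50.0, 500.0)           # true source intensity (arbitrary units)
    counts = [simulate_counts(src, inten, d, rng=rng) for d in grid]
    dataset.append((counts, (src[0], src[1], inten)))
```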
Key national stakeholders have been identified for project engagement, including the Royal Malaysian Customs Department, Department of Atomic Energy, Royal Malaysia Police, Fire and Rescue Department of Malaysia, and Aviation Security (AVSEC) or airport auxiliary police. These agencies are involved in radiological object inspection across various operational settings, including border checkpoints, transportation hubs, critical infrastructure, and material handling areas. Courtesy visits are being arranged to present the project, discuss operational scenarios, conduct site visits, and explore potential field experiments. Insights from these engagements will directly inform the development of threat models and define the system’s technical and operational requirements.
Finally, the project is at an early stage, and the team welcomes technical feedback and collaboration as the work progresses.