Harnessing the Power of Machine Learning in an Accelerator Simulation Toolkit for
Optimization and Automated Operation
Georg Hoffstaetter (PI), Cornell University & Brookhaven National Laboratory
Qi Tang (Co-PI), Los Alamos National Laboratory
Dan Abell (Co-PI), RadiaSoft LLC
Axel Huebl (Co-PI), Lawrence Berkeley National Laboratory
Machine Learning (ML) and Artificial Intelligence (AI) are in a state of revolutionary and rapid development; increases in efficiency and their impact on modern life can be observed nearly daily. ML/AI is also having a large impact on particle accelerator operation and optimization. Yet no comprehensive accelerator simulation toolkit has so far been designed with these modern capabilities in mind. ML/AI can provide advanced optimization tools that are uniquely suited to many accelerator tasks; for example, physics-informed Bayesian optimization addresses tasks where measurement uncertainties are relevant or where obtaining measurements is time-consuming or expensive. In addition, ML-based surrogate models have delivered substantial speedups when describing hard-to-model sections of accelerators, e.g., for space-charge-dominated beams, where the interaction of billions of particles must be considered. ML-based determination of accelerator parameters has provided improved accelerator models for digital twins that simulate accelerator performance in parallel with the operating machine, and fast ML-based surrogates add the capability for real-time simulation within the digital twin.
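To make the surrogate-model idea concrete, the following minimal sketch (in Julia, the language targeted by this project) uses the existing Flux.jl package to train a small feed-forward network that reproduces a stand-in nonlinear phase-space map. The map, the network size, and the training setup are illustrative assumptions only; a real surrogate would be trained on data from an expensive space-charge simulation.

    # Illustrative surrogate sketch: a small neural network learns a stand-in
    # nonlinear single-pass map in (x, x') phase space. In practice the training
    # data would come from an expensive space-charge tracking simulation.
    using Flux

    slow_map(z) = Float32[z[1] + 0.1f0 * sin(z[2]), z[2] - 0.1f0 * z[1]^3]

    # Training data: sampled initial coordinates and their mapped values.
    xs = [randn(Float32, 2) for _ in 1:1000]
    ys = slow_map.(xs)

    # Small feed-forward surrogate of the map.
    surrogate = Chain(Dense(2 => 32, tanh), Dense(32 => 32, tanh), Dense(32 => 2))

    opt_state = Flux.setup(Adam(1.0f-3), surrogate)
    for epoch in 1:50
        for (x, y) in zip(xs, ys)
            grads = Flux.gradient(m -> Flux.mse(m(x), y), surrogate)
            Flux.update!(opt_state, surrogate, grads[1])
        end
    end

    # The trained surrogate evaluates in microseconds and could replace the slow
    # map inside a real-time digital twin.
    println(surrogate(Float32[1e-3, 0.0]))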
Within this project we design, produce, test, and fully document a new software product that:
(a) is amenable to modern ML/AI tasks, e.g., by being backward differentiable so that it can be used to train physics-informed neural networks (PINNs; see the sketch below);
(b) is equipped with ML/AI optimization tools for the design of future accelerators and the construction of digital twins of existing accelerators;
(c) while being a new code, is inspired by Bmad, a broad, successful, and well-utilized toolkit for accelerator simulations that has already been used for ML/AI applications and for digital twins;
(d) is focused on efficiency, e.g., by being written in a modern language that leverages improvements in computing hardware (such as GPUs) and by harnessing the power of parallelization;
(e) uses standards-conforming data formats to communicate with other simulation programs;
(f) is written by an experienced software team with a track record of serving large accelerator efforts by providing Bmad to many user teams, with strong support, tutoring, user schools, and developer meetings; and
(g) will serve as the basis for a unified set of simulation toolkits that both cut down the time needed to develop simulations for future accelerator modeling (including non-ML/AI modeling) and increase the reliability of such programs.
The software developed here can have an enormous impact on accelerator simulation and modeling at all levels, and even on fields other than accelerator physics.
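A minimal sketch of what backward differentiability enables, making no assumptions about the eventual Bmad-Julia interface: reverse-mode automatic differentiation (here via the existing Zygote.jl package) propagates a gradient through a toy thin-quadrupole-plus-drift map, which is the mechanism that lets a tracking code sit inside a PINN training loop. The map and the loss are illustrative toys.

    # Illustrative backward-differentiation sketch (not the Bmad-Julia API).
    using Zygote

    # Toy thin-quadrupole kick followed by a drift in one transverse plane (x, x').
    function track(x, xp, k, L)
        xp2 = xp - k * x      # thin-lens quadrupole kick with integrated strength k
        x2  = x + L * xp2     # drift of length L
        return (x2, xp2)
    end

    # Loss: squared final offset of one test particle launched at x = 1 mm.
    loss(k) = track(1.0e-3, 0.0, k, 2.0)[1]^2

    # Backward (reverse-mode) gradient of the loss with respect to the quadrupole
    # strength, the quantity a physics-informed optimizer needs.
    dloss_dk = Zygote.gradient(loss, 0.5)[1]
    println("dLoss/dk = ", dloss_dk)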
These plans are laid out in the following objectives:
1. Develop software for accelerator simulations that enables modern ML/AI tools and neural networks: Develop Bmad-Julia as an ecosystem of codes, libraries, and frameworks that are interoperable via open community data standards. Specific packages include accelerator lattice instantiation and manipulation, fast truncated power series (TPSA) manipulation, routines for analyzing TPSA maps, calculating Twiss parameters, and tracking particles and maps through a lattice. The software will be developed with an eye toward execution speed (e.g., tracking on GPUs) and adherence to I/O standards to promote interoperability (a hypothetical lattice-and-Twiss sketch is given as the first example after these objectives).
2. Develop ML/AI/digital-twin packages: Create Bmad-Julia packages for ML/AI simulations, including backward auto-differentiation and the use of neural networks as accelerator elements with the network parameters as differentiable quantities (see the second example after these objectives). Produce a new accelerator simulator optimized for the use of ML by (a) using backward auto-differentiation for physics-informed NN optimization and (b) providing optimized communication with ML optimizer packages. This also includes the development of a generic EPICS interface for machine communication.
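For Objective 1, the following sketch illustrates the intended kind of lattice-instantiation, tracking, and Twiss workflow. The element types, the transfer functions, and the lattice container are hypothetical stand-ins rather than the actual Bmad-Julia interface; only the standard Courant-Snyder extraction of Twiss parameters from a 2x2 one-cell matrix is taken as given.

    # Hypothetical lattice/tracking/Twiss sketch; element types and functions are
    # illustrative stand-ins, not the Bmad-Julia interface.
    using LinearAlgebra

    struct Drift;      L::Float64;              end
    struct Quadrupole; L::Float64; k1::Float64; end

    # Linear 2x2 transfer matrices in the horizontal (x, x') plane.
    transfer(e::Drift) = [1.0 e.L; 0.0 1.0]
    function transfer(e::Quadrupole)
        ω = sqrt(abs(e.k1))
        if e.k1 > 0   # focusing
            [cos(ω*e.L) sin(ω*e.L)/ω; -ω*sin(ω*e.L) cos(ω*e.L)]
        else          # defocusing
            [cosh(ω*e.L) sinh(ω*e.L)/ω; ω*sinh(ω*e.L) cosh(ω*e.L)]
        end
    end

    # A FODO-like cell instantiated as an ordered list of elements.
    cell = Any[Quadrupole(0.2, 1.5), Drift(2.0), Quadrupole(0.2, -1.5), Drift(2.0)]

    # One-cell matrix: chain the element maps in order.
    M = foldl((A, e) -> transfer(e) * A, cell; init = Matrix{Float64}(I, 2, 2))

    # Courant-Snyder (Twiss) parameters from the periodic one-cell matrix.
    μ = acos(clamp((M[1, 1] + M[2, 2]) / 2, -1.0, 1.0))
    μ = M[1, 2] < 0 ? -μ : μ          # pick the branch that makes beta positive
    β = M[1, 2] / sin(μ)
    α = (M[1, 1] - M[2, 2]) / (2 * sin(μ))
    println("phase advance = ", μ, ", beta = ", β, ", alpha = ", α)

    # Track a single particle (x, x') through the cell.
    println("tracked: ", M * [1e-3, 0.0])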
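For Objective 2, a second sketch shows what a neural network used as an accelerator element with differentiable parameters could look like: a small Flux.jl network stands in for a hard-to-model section between conventional linear maps, and reverse-mode auto-differentiation supplies gradients with respect to the network weights. The element maps, the network, and the training objective are illustrative assumptions, not project code.

    # Sketch of a neural-network "element" in a beamline with differentiable
    # network parameters (illustrative assumptions only).
    using Flux

    # Conventional linear maps in the (x, x') plane, built once up front.
    D  = Float32[1 1; 0 1]            # drift of length 1 m
    QF = Float32[1 0; -0.8 1]         # thin-lens quadrupole kick, k = 0.8 m^-1

    # A small network standing in for a hard-to-model section, e.g., a
    # space-charge-dominated region; its weights are the differentiable unknowns.
    nn_element = Chain(Dense(2 => 16, tanh), Dense(16 => 2))

    # Beamline: quadrupole -> drift -> NN element -> drift.
    beamline(z, m) = D * m(D * QF * z)

    # Illustrative objective: steer a test particle to the origin at the exit.
    z0 = Float32[1e-3, 0.0]           # initial (x, x') of the test particle
    loss(m) = sum(abs2, beamline(z0, m))

    # Reverse-mode gradients with respect to the network parameters drive training.
    opt_state = Flux.setup(Adam(1.0f-3), nn_element)
    for step in 1:200
        grads = Flux.gradient(loss, nn_element)
        Flux.update!(opt_state, nn_element, grads[1])
    end
    println("final loss: ", loss(nn_element))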