This workshop will bring together experts in the fields of Digital Twinning (DT), Machine Learning (ML) and Virtual Diagnostics (VD) who apply these methods to the optimisation of accelerator-based light sources (rings/linacs/compact) towards resilient and autonomous operation, addressing the full facility from source to scientific instruments. The workshop is open to all LEAPS members; colleagues from external laboratories may participate only by invitation.
The objectives of the workshop are:
- to set up a detailed survey of the ongoing activities within LEAPS on DT, ML and VD
- to draw up a summary document which will constitute the cornerstone of the LIP project.
Workshop organising committee:
Thomas Tschentscher - EUXFEL
Simone Di Mitri - ELETTRA
Marco Calvi - PSI
Eugenio Ferrari - PSI
Jens Osterhoff - DESY
Rainer Wanzenberg - DESY
Pierre Schnizer - HZB
Andreas Jankowiak - HZB
Pedro Fernandes Tavares - MAX IV
Pavel Evtushenko - HZDR
Francis Perez - ALBA
On behalf of the organising committee, I would like to welcome you to this workshop. I think I owe some of you a few introductory words. LEAPS is a strategic consortium created at the end of 2017 by the directors of the synchrotron radiation and free electron laser user facilities in Europe. During the last year, triggered partially by the new situation created by the health crisis, Helmut Dosch suggested moving faster on the digitisation of our facilities. This initiative is now called Digital LEAPS, and the “platform” is one of the three projects approved by the assembly of our directors in April this year. Shortly, a position paper will be available to support all LEAPS members with their respective national funding agencies. In parallel, we are working on a proposal for an EC call, and Thomas Tschentscher will contact some of you in the coming weeks to check your availability. The Platform started last October as an initiative of WG2 to collect ideas and suggestions, and a first draft for a collaboration was sketched out. Today, we are here with experts from all over the world, also from non-LEAPS facilities, to move into the core of this initiative and to get more familiar with its three main subjects: DT, ML and VD. This workshop is designed to help people start working on these topics; to get a survey of the ongoing activities in the labs which are more advanced; to guide us in choosing the best practice for a prototype “platform” which could be wired to our facilities, in terms of hardware, protocols, software engineering, codes etc.; and, last but not least, to come up with a final project description. At the end of this workshop it should be easier to answer questions like:
1. How can the digital twin of our facility be used to improve its performance?
2. What should we expect from ML?
3. Where are virtual diagnostics different from, and superior to, more standard diagnostic methods?
Digital Twin: Beyond the buzzword. A light-source perspective.
The Digital Twin (DT) is the next step of the digital transformation. Ideally, this virtual representation of a device, a process or a whole facility behaves as its physical equivalent. However, its capabilities go far beyond the imitation game. It is usually developed first and foremost to help manage the overall life cycle of the real entity, from design to operation and dismantling. It often knows the past, the present and even the future of the real entity. Using both physics-based and data-driven models, the Digital Twin provides scientists and engineers with advanced simulation and prediction capabilities on which smart operation and maintenance can be built.
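As an illustration of this hybrid approach, the following minimal sketch combines a toy physics model with a data-driven residual correction. All names and the toy physics are illustrative assumptions, not any particular facility's models.

# A minimal sketch of the hybrid modelling idea behind a digital twin:
# a physics-based model supplies the baseline prediction, and a
# data-driven regressor learns the residual between model and machine.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def physics_model(settings: np.ndarray) -> np.ndarray:
    """Idealised first-principles prediction (toy example)."""
    return 2.0 * settings[:, 0] - 0.5 * settings[:, 1]

rng = np.random.default_rng(0)
settings = rng.uniform(-1, 1, size=(500, 2))          # archived setpoints
measured = (physics_model(settings) + 0.3 * np.sin(3 * settings[:, 0])
            + 0.02 * rng.standard_normal(500))        # real machine response

# Train the data-driven part on the physics-model residuals only.
residual_model = GradientBoostingRegressor().fit(
    settings, measured - physics_model(settings))

def twin_predict(settings: np.ndarray) -> np.ndarray:
    """Digital-twin prediction = physics baseline + learned correction."""
    return physics_model(settings) + residual_model.predict(settings)

print(twin_predict(settings[:3]))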
This presentation will provide an overview of the Digital Twinning concept and related technologies. It will be followed by a presentation of the ESRF-EBS simulator, which constitutes the first step towards a Digital Twin.
Contributed talks, discussion
The Beam Instrumentation Simulation System (BISS) is educational software aimed at the advanced CERN Accelerator School (CAS) hands-on course on beam diagnostics. The aim of the course is to give a basic understanding of the measurement principles, fundamental concepts and related technological aspects of deriving beam position in a particle accelerator. An interactive simulation tool will allow users to construct their own acquisition system and virtual monitors to generate beam signals. BISS v2 will be released later this year; it builds on v1 and is developed in a collaboration between the ALBA Synchrotron and CERN.
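As a flavour of the measurement principles such a course covers, the following minimal sketch derives the beam position from the four button signals of a beam position monitor with the standard difference-over-sum formula. The geometry factor and button layout are illustrative assumptions, not BISS code.

import numpy as np

def bpm_position(a, b, c, d, k=10.0):
    """Transverse position (mm) from button signals A..D.

    Buttons are assumed to be arranged as
        A  B
        D  C
    so x grows towards B/C and y grows towards A/B.
    """
    s = a + b + c + d                      # total induced signal
    x = k * ((b + c) - (a + d)) / s        # difference-over-sum, horizontal
    y = k * ((a + b) - (c + d)) / s        # difference-over-sum, vertical
    return x, y

# Example: a beam slightly right of and above the monitor centre.
print(bpm_position(a=1.02, b=1.10, c=1.05, d=0.98))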
PETRA IV is the diffraction-limited synchrotron light source currently under design at DESY Hamburg, which will deliver hard X-ray beams of unprecedented brightness.
The design phase of the storage ring is centred around the electron optics model, from which requirements and parameter lists for hardware such as magnets, power supplies and the alignment system are generated. The computer models of these systems are fed back into beam dynamics studies to ensure that the machine steering and optics correction algorithms are capable of dealing with the expected alignment and field errors. Mechanical information on girders, tunnel movements and yearly temperature variations in the tunnel is also taken into account. The comprehensive model is essential for performance evaluation of the machine. We are starting to design new software tools which will integrate the comprehensive beam dynamics model and allow testing and evaluation of controls algorithms and software components.
Such digital twinning will be essential for timely commissioning and future operation of the machine.
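As a sketch of the kind of correction algorithm such tools must exercise, the example below computes corrector kicks from BPM readings with a truncated-SVD pseudo-inverse of the orbit response matrix. The matrix and readings are random stand-ins, not the PETRA IV optics model.

import numpy as np

rng = np.random.default_rng(1)
n_bpms, n_correctors = 60, 40
R = rng.standard_normal((n_bpms, n_correctors))   # response matrix stand-in
orbit = rng.standard_normal(n_bpms) * 0.1         # measured orbit [mm]

# Truncated-SVD pseudo-inverse rejects poorly determined directions.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
keep = s > 0.05 * s[0]
kicks = -(Vt[keep].T * (1.0 / s[keep])) @ (U[:, keep].T @ orbit)

residual = orbit + R @ kicks
print(f"rms orbit: {orbit.std():.3f} -> {residual.std():.3f} mm")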
This presentation introduces the HELIPORT project, which aims at developing a platform that accommodates the complete life cycle of a scientific project and links all corresponding programs, systems and workflows to create a more FAIR and comprehensible project description. HELIPORT is linked with our local Handle server and generates uniform PIDs from and for various systems and services. With the integration of the Handle system (handle.net), HELIPORT can support digital twins.
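For illustration, a PID registered in the Handle System can be resolved through the public hdl.handle.net REST proxy as sketched below. The handle used is a generic documentation-style example, not a HELIPORT identifier, and HELIPORT's own interfaces are not shown.

import requests

def resolve_handle(handle: str) -> list:
    """Return the typed value records stored under a handle."""
    r = requests.get(f"https://hdl.handle.net/api/handles/{handle}",
                     timeout=10)
    r.raise_for_status()
    return r.json()["values"]

# Example handle; a HELIPORT PID would be resolved the same way.
for value in resolve_handle("10.1000/1"):
    print(value["type"], "->", value["data"].get("value"))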
Contributed talks, discussion
Experiments conducted in large scientific research infrastructures, such as synchrotrons, free electron lasers and neutron sources, become increasingly complex. Such experiments, often investigating complex physical systems, are usually performed under strict time limitations and may depend critically on experimental parameters. To prepare and analyze these complex experiments, a virtual laboratory that provides start-to-end simulation tools can help experimenters predict experimental results under real or close-to-real instrument conditions. Such a tool should be able to show the effects of experimental parameters on the final experimental results and help educate PhD students and new staff. In this presentation, we introduce the design and current status of SIMEX, a platform to perform start-to-end SIMulations of EXperiments at XFEL facilities.
This contribution gives an overview of two activities regarding "digital twins" of the European XFEL accelerator. The first is the operation and enhancement of the Virtual XFEL, a clone of the machine control system for testing mid- and high-level software. Using a custom physics simulation written in C++, the Virtual XFEL performs single-particle and envelope tracking in real time. The second part of the presentation gives an overview of start-to-end simulation methods that attempt to reproduce the physics of electron beam transport in much more detail, at the cost of increased computation time.
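As a minimal sketch of what real-time single-particle tracking involves, the example below chains 2x2 transfer matrices of a toy FODO cell acting on the phase-space vector (x, x'). The lattice and values are illustrative, not the Virtual XFEL implementation (which is written in C++).

import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole; focusing for f > 0, defocusing for f < 0."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

lattice = [thin_quad(2.0), drift(1.0), thin_quad(-2.0), drift(1.0)]

state = np.array([1e-3, 0.0])          # x = 1 mm, x' = 0
for turn in range(5):
    for element in lattice:
        state = element @ state        # one pass = a chain of matmuls
    print(f"pass {turn}: x = {state[0]*1e3:+.3f} mm")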
Digital twins of highly nonlinear, time-dependent complex systems require including knowledge of the initial experimental conditions, all relevant detection modalities and the complex system itself. Plasma accelerators can be seen both as compact accelerators and as complex systems to study. In both cases, they require considerable computational power to reach predictive capabilities comparable to experiments. With the help of AI-based invertible surrogate models for specific cases, compute requirements can be significantly reduced while still allowing reasonably accurate predictions and fast inference. In the end, we need to interconnect open solutions to some of the most pressing problems in fast and accurate data analytics and simulations to allow for near real-time feedback from digital twins to the real machine and experiment. This talk presents some of the building blocks needed for that.
Our team's activities center around dynamic systems, predominantly for scientific inquiry. Our interest is in the physics-informed construction and use of digital twins in real-time control systems. Why? Our complex systems can have millions of process variables, change over time, and have subsystems that influence one another. Further, on top of controlling these systems and understanding anomalies and prognostics (e.g., a component failing), we also want to analyze in near real time, for example, in one immediate project funded by EPSCoR, the materials properties of what the tool is probing. We are active users of the Argonne Leadership Computing Facility (ALCF) and are establishing a real-time connection to one of these analytical tool systems for both control and analysis. We will soon be deploying an edge-computing-based sub-system digital twin at the Facility for Rare Isotope Beams (FRIB), supported by the DOE SBIR program in Nuclear Physics, and we are working on the scaling and realization of deep-learning-aided digital twins on cloud and HPC systems. Here, I will present a few examples of aspects of these dynamic systems, including an ion-based quantum information science (QIS) system, particle accelerators, and the precise formation of a two-CubeSat satellite system. In this way, we hope to share our activities thus far and find synergies with others in the community here during the workshop and beyond, to mutually enhance our goals in the deployment of digital twins.
At Elettra Sincrotrone Trieste there are several ongoing activities regarding Machine Learning and automation. A research-driven activity investigating Reinforcement Learning for the optimization of the FERMI Free Electron Laser has been carried out with promising results. A framework based on the concept of Behavior Trees is employed in the autonomous operation of the Elettra accelerator. The combination of human-defined automation and automatic optimization, improved by ML, is the baseline for future developments.
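For readers unfamiliar with behavior trees, the following minimal sketch shows the core idea of sequence and fallback nodes ticking their children. The node implementations and the recovery logic are illustrative, not Elettra's actual framework or task names.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Tries children in order until one succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    """Leaf node wrapping a machine task that reports success/failure."""
    def __init__(self, fn): self.fn = fn
    def tick(self): return SUCCESS if self.fn() else FAILURE

# Hypothetical recovery logic: try the feedback; if it fails, re-inject.
tree = Fallback(Action(lambda: False),           # e.g. "orbit feedback OK?"
                Sequence(Action(lambda: True),   # e.g. "prepare injection"
                         Action(lambda: True)))  # e.g. "top up the ring"
print(tree.tick())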
In the past decade, Deep Learning (DL) has proved very successful in tasks which involve image, signal and text analysis and recognition. Due to the complexity of the synchrotron control system and of the physical phenomena occurring in such an infrastructure, such facilities can benefit from novel deep learning techniques. Currently, two projects involving machine learning techniques are being realised at Solaris. The goal of the first project is to develop a neural network which controls the electron beam position in the Solaris storage ring. BPM readouts, the currents applied to the main and corrector magnets and, optionally, the beam profile from a camera will be used as input data for the neural network. Since the performance of a neural network controlling only the beam position is easy to compare with standard methods, it was decided to implement a simple neural network as a starting point for a more advanced deep learning project at NSRC Solaris. A beam position RMS value below 1 μm will serve as a quantitative indicator of neural network quality. The main goal of the second project is to improve signal analysis and anomaly detection in HD datasets with the use of advanced deep neural networks. An end-to-end anomaly-score learning method for the early classification of occurring anomalies will be proposed. The interpretability analysis will allow us to locate the features that are responsible for large anomaly scores and therefore to evaluate which features have the greatest influence on the electron beam.
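As a minimal sketch of the first project's idea, the example below trains a small neural network to map BPM readouts to the corrector settings that cancel an orbit disturbance. The response matrix and training data are synthetic stand-ins for the archived machine data described above.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
R = rng.standard_normal((32, 16))                 # BPMs x correctors
correctors = rng.standard_normal((2000, 16))      # applied settings
bpms = correctors @ R.T + 0.01 * rng.standard_normal((2000, 32))

# Learn the inverse map: which corrector change produces a given orbit.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(bpms, correctors)

disturbance = rng.standard_normal(16)             # e.g. corrector drifts
orbit = disturbance @ R.T                         # resulting orbit error
applied = -net.predict(orbit[None, :])[0]         # NN-suggested correction
residual = orbit + applied @ R.T
print(f"rms orbit: {orbit.std():.3f} -> {residual.std():.3f}")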
At DESY, different types of particle accelerators are operated and investigated. Particle accelerators are large, complex systems with non-linear coupling between many components and time-varying, uncertain disturbances. Their operation is challenging due to a vast number of sub-components, high dimensionality, high data rates and different operating time scales, while at the same time there is an increasing demand for performance, flexibility and availability. To address this, machine learning is being investigated in the DESY machine division as an enabler for new operation modes. Different projects in this field that are currently being worked on are presented, from the topics of (1) data acquisition and analysis, (2) fault diagnosis and supervisory control, (3) models, simulations and digital twins, and (4) optimization and feedback control algorithms.
Plasma-based accelerators, conceptual or experimental, are characterised by high-dimensional, non-linearly coupled parameter spaces. Further, the cost of probing each set of parameters, i.e. a plasma simulation or a measurement, is typically high. This makes simple exploration approaches like multidimensional scans impractical and calls for more advanced strategies to optimise parameters.
Here we discuss recent work on using Bayesian optimisation for the conceptual design and operation of experiments at the LUX laser plasma accelerator at DESY. Using a machine-learning-based optimiser, we are able to exploit operating regimes of the accelerator with improved beam quality and stability.
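A minimal sketch of such a Bayesian optimisation loop is shown below: a Gaussian process surrogate of an expensive objective (here a toy stand-in for a beam-quality figure of merit) is refined point by point with an upper-confidence-bound acquisition rule. The toy objective and the acquisition choice are illustrative; the LUX optimiser itself is not reproduced.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                     # stand-in for a shot or simulation
    return -np.sin(3 * x) - x**2 + 0.7 * x

X = np.array([[-1.0], [1.5]])         # two initial measurements
y = objective(X[:, 0])
candidates = np.linspace(-2, 2, 400)[:, None]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mu + 2.0 * sigma)]   # UCB acquisition
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next[0]))

print(f"best found: x = {X[np.argmax(y), 0]:+.3f}, f = {y.max():.3f}")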
The last few years have seen a strong increase in uptake of machine learning (ML) techniques in the domain of particle accelerators. ML techniques have been used in a number of applications, from anomaly detection to surrogate modelling, virtual diagnostics, tuning and control as well as advanced data analysis. This talk will explore the different applications where ML is becoming an increasingly valuable tool to meet new demands for beam energy, brightness, reliability and stability, and will review several successful case studies in a number of particle accelerator facilities.
At the ALBA Synchrotron, the pinhole imaging system is able to see six beam images at once. Each beam image has its own properties, such as pinhole size, point spread function (PSF), copper filter attenuation and region of interest (ROI), all of which impact the source beam size calculation. For now, all these parameters are observed and controlled manually. An artificial neural network (ANN) is pointed at these beam images and trained to recognize which one it is looking at in real time, with the end goal of automating the whole beam image analysis process.
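As a minimal sketch of this classification task, the example below defines a small convolutional network that assigns a camera frame to one of the six beam images. The architecture and input size are illustrative choices, not the network deployed at ALBA.

import torch
import torch.nn as nn

class BeamImageClassifier(nn.Module):
    """Tiny CNN mapping a grayscale frame to one of n_classes images."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

frame = torch.rand(1, 1, 128, 128)          # one grayscale camera frame
logits = BeamImageClassifier()(frame)
print("predicted image index:", logits.argmax(dim=1).item())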
MXAimbot is a neural-network-based tool designed to automate the task of centering samples for macromolecular X-ray crystallography experiments before exposing the sample to the beam. MXAimbot uses a convolutional neural network (CNN), trained on a few thousand images from an industrial vision camera pointed at the sample, to predict suitable crystal centering for subsequent data collection.
The motivation for this project is that machine-vision automated sample positioning allows X-ray laboratories and synchrotron beamlines to offer a more efficient alternative to manual centering, which is time-consuming and difficult to automate with conventional image analysis, and to X-ray mesh-scan centering, which can introduce radiation damage to the crystal. MXAimbot can be used to improve the results of standard LUCID loop centering for fully automated data collection in fragment-screening campaigns. That no sample rotation is needed should be an additional advantage.
In the short term, the project aims at providing suggested centerings to users, which they may reject if they are not satisfied. In the long term, the project aims to operate fully without supervision, processing whole batteries of samples without a human present.
A few original approaches and CNN architectures were tested by the authors in [1,2], using X-ray data resulting from mesh scans and not relying on manual annotations. Finally, for current production, a simpler method inspired by the DeepCentering approach [3] from SPring-8 has been adopted: the original training dataset was manually annotated with bounding boxes around each crystal, and the new CNN architecture uses the annotated data. MXAimbot can be used by other systems via a REST API. The next step for the project is the inclusion of MXAimbot in MXCuBE3, the common data acquisition framework at several European synchrotron facilities. This will allow the collection of anonymised datasets from the sample vision camera at the BioMAX beamline at the MAX IV synchrotron, which can be further used for training and optimisation of CNNs, and later a seamless inclusion as an additional option in the MXCuBE3 data collection pipeline.
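A hypothetical sketch of how another system might call such a REST API is given below. The endpoint, port and JSON fields are invented placeholders, since the actual MXAimbot interface is not documented here.

import requests

# Hypothetical endpoint and response format, for illustration only.
with open("sample_view.jpg", "rb") as f:
    r = requests.post("http://mxaimbot.example:8000/predict",
                      files={"image": f}, timeout=30)
r.raise_for_status()
box = r.json()      # assumed: bounding box of the crystal in pixels
print("suggested centring:", box["x"], box["y"], box["width"], box["height"])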
To the authors' knowledge, CNNs have been implemented for crystal centering at at least two synchrotron facilities, including MAX IV. So far the CNN approach has shown outstanding results in automatically positioning crystals. Work is currently underway to test and statistically compare the model predictions to manual centerings by real users, with the goal of integrating MXAimbot into FragMAX [4], the fragment-screening facility at the MAX IV synchrotron.
[1] J. Schurmann and I. Lindhé, Crystal Centering Using Deep Learning, LU-CS-EX 2019-25, 2019.
[2] J. Schurmann, I. Lindhé et al., Crystal centering using deep learning in X-ray crystallography, Asilomar Conference on Signals, Systems, and Computers, 2019, 978-983. doi: 10.1109/IEEECONF44664.2019.9048793
[3] S. Ito, G. Ueno and M. Yamamoto, DeepCentering: fully automated crystal centering using deep learning for macromolecular crystallography, Journal of Synchrotron Radiation, 2019, 26.4: 1361-1366. doi: 10.1107/S160057751900434X
[4] G. M. A. Lima et al., FragMAX: the fragment-screening platform at the MAX IV Laboratory, Acta Crystallographica Section D: Structural Biology, 2020, 76.8: 771-777. doi: 10.1107/S205979832000889X
Recent developments in photon science enable the investigation of structures and fundamental dynamics at nanometer and femtosecond scales. The corresponding imaging techniques, such as Small-Angle X-ray Scattering (SAXS) at Grazing Incidence (GI-SAXS) or ptychography, produce imaging data at unprecedented spatial and temporal resolution. However, the reconstruction of relevant properties from the acquired incomplete X-ray intensities of SAXS, GI-SAXS or ptychography requires solving an ill-posed inverse problem, which is commonly approached by iterative reconstruction schemes that are typically time-consuming and require manual tuning of hyperparameters. Additionally, the imaging of non-equilibrium processes prone to perturbations due to, e.g., non-planar wavefronts hampers the usage of these methods even further and emphasises the need for very fast and automatic feedback systems. In this talk, we introduce novel data-driven approaches for fast and reliable reconstruction of X-ray scattering data. The approaches can be seen as a combination of traditional data-driven methods, Bayesian statistics and optimisation, resulting in reliable means for very fast reconstruction of known structures as well as robust reconstruction methods for previously unknown structures.
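For context, the following minimal sketch shows the iterative-reconstruction baseline such schemes follow: gradient descent on a Tikhonov-regularised least-squares objective for an underdetermined forward model. The operator and regularisation weight are illustrative stand-ins for a real scattering model, and the weight is exactly the kind of hyperparameter these schemes require tuning by hand.

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))            # underdetermined forward model
x_true = np.zeros(200); x_true[::25] = 1.0    # sparse "structure"
y = A @ x_true + 0.01 * rng.standard_normal(80)

lam, step = 0.1, 1e-3                         # regularisation weight, step size
x = np.zeros(200)
for _ in range(2000):
    grad = A.T @ (A @ x - y) + lam * x        # gradient of ||Ax-y||^2/2 + lam||x||^2/2
    x -= step * grad

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.2f}")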
Artificial Intelligence, Digital Twins and Machine Learning are areas being explored in view of the SOLEIL upgrade.
Experimental data processing, control, online optimization and predictive maintenance have become very hot topics and the centre of the IT architecture transformation of the synchrotron light source. In this presentation I will give a short overview of what has been achieved so far and of the perspectives. A few applications for optimizing the accelerators will be shown, but also the optimization of the assembly of undulators, smart feedbacks and feedforwards. Digital Twins will be shown briefly for robotics and anti-collision systems for the beamlines. Facing the data deluge and the growing complexity of the facility, a short roadmap for automation will also be presented, covering data acquisition and post-processing, including the detector side.
The Institute for Beam Physics and Technology (IBPT) at the Karlsruhe Institute of Technology (KIT) hosts two research accelerator facilities, KARA and FLUTE, that serve as platforms for the development and testing of new beam acceleration technologies and new cutting-edge accelerator concepts, including Machine Learning (ML) methods. In this talk I will present three ML activities in accelerator physics carried out at KIT (a schematic sketch of the reinforcement-learning control pattern shared by the first and third items follows the list):
- Real-Time Control of the Micro-Bunching Instability with Reinforcement Learning
- Bayesian Optimization of the Injection Efficiency
- Machine Learning Towards Autonomous Accelerators: Control of the Bunch Profile with Reinforcement Learning
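The sketch below learns a feedback gain that damps a toy one-dimensional instability via finite-difference policy updates. The dynamics, reward and update rule are illustrative assumptions, not the KARA or FLUTE controllers.

import numpy as np

def rollout(gain, rng, steps=20):
    """One episode of a toy unstable system under feedback `gain`."""
    state, total = 1.0, 0.0
    for _ in range(steps):
        state = 1.05 * state - gain * state + 0.01 * rng.standard_normal()
        total += -state**2                # reward: keep the amplitude small
    return total

rng = np.random.default_rng(4)
gain, lr, eps = 0.0, 0.01, 0.05
for episode in range(300):
    # finite-difference estimate of the policy gradient, sign-only update
    g = (rollout(gain + eps, rng) - rollout(gain - eps, rng)) / (2 * eps)
    gain += lr * np.sign(g)

print(f"learned feedback gain: {gain:.2f} (1.05 would cancel the instability)")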
The longitudinal phase space (LPS) provides critical information about electron beam dynamics for various scientific applications. For example, it can give insight into the high-brightness X-ray radiation from a free electron laser. Existing diagnostics are invasive and often cannot operate at the required resolution. In this work we present a machine-learning-based Virtual Diagnostic (VD) tool to accurately predict the LPS for every shot, using spectral information collected non-destructively from the radiation of a relativistic electron beam. We demonstrate the tool's accuracy for three different case studies with experimental or simulated data. For each case, we introduce a method to increase the confidence in the VD tool. We anticipate that the spectral VD will improve the setup and understanding of experimental configurations at DOE's user facilities, as well as data sorting and analysis. The spectral VD can provide confident knowledge of the longitudinal bunch properties at the next generation of high-repetition-rate linear accelerators while reducing the load on data storage, readout and streaming requirements.
Generally, turn-to-turn power fluctuations of incoherent spontaneous synchrotron radiation in a storage ring depend on the 6D phase-space distribution of the electron bunch. In some cases, if only one parameter of the distribution is unknown, this parameter can be determined from the measured magnitude of these power fluctuations. In this contribution, we report the results of our experiment at the Integrable Optics Test Accelerator (IOTA) storage ring, where we carried out an absolute measurement (no free parameters or calibration) of the small vertical emittance (5-15 nm rms) of a flat beam by this new method, under conditions where the small vertical emittance is unresolvable by a conventional synchrotron light beam size monitor.
A large amount of information-extraction work is done in the frequency domain, and information at different frequencies carries different physical meanings. The convolution kernel is also called a filter, because the convolution process is in fact a filtering process. This means that a deep convolutional neural network acts like a string of intelligent filter banks, which is helpful for the extraction of beam information. First, I will introduce some background knowledge on convolutional neural networks and beam diagnostics; in this part, I want to explain why a machine learning model that is usually used to process images can be used to process electrical signals in virtual beam diagnostics. Then, the application of convolutional neural networks to virtual beam diagnostics at SSRF will be introduced.
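The filter-bank view can be made concrete with a few lines of plain convolution, as sketched below. The kernels here are hand-designed for illustration, whereas a convolutional layer learns such kernels from data.

import numpy as np

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

smooth = np.ones(25) / 25             # low-pass kernel (moving average)
edge = np.array([-1.0, 0.0, 1.0])     # high-pass, derivative-like kernel

low = np.convolve(signal, smooth, mode="same")   # keeps the 5 Hz component
high = np.convolve(signal, edge, mode="same")    # emphasises the 60 Hz part

# A convolutional layer simply learns such kernels from data instead of
# having them designed by hand.
print(low[:3], high[:3])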
Electron ghost imaging has been established as a viable method that offers advantages over traditional methods, such as the use of compressed sensing and a resolution increase from Fellgett's advantage [1]. It has been applied to passive photocathode quantum efficiency mapping [2], and improving resolution within this context is discussed.
[1] S. Li, F. Cropp, K. Kabra, T. J. Lane, G. Wetzstein, P. Musumeci, and D. Ratner, Phys. Rev. Lett. 121, 114801 (2018)
[2] K. Kabra, S. Li, F. Cropp, Thomas J. Lane, P. Musumeci, and D. Ratner, Phys. Rev. Accel. Beams 23, 022803 (2020)
We present data-driven modeling of the European XFEL photoinjector using a deep-learning-based autoencoder. We show that an autoencoder trained only on experimental data can make high-fidelity predictions of megapixel images for the longitudinal phase-space measurement. We also discuss the practical challenges of building such an intelligent system for operation and propose a pragmatic way to model a photoinjector with various diagnostics and working points. The approach can possibly be extended to the whole accelerator and even to other types of scientific facilities.
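As a minimal sketch of such an autoencoder, the example below compresses phase-space-like images into a small latent vector and reconstructs them. The sizes and architecture are illustrative, and the conditioning on machine settings used in the actual European XFEL model is not reproduced.

import torch
import torch.nn as nn

class ImageAutoencoder(nn.Module):
    """Encoder compresses an image to a latent vector; decoder rebuilds it."""
    def __init__(self, latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

images = torch.rand(8, 1, 64, 64)                # stand-in LPS images
model = ImageAutoencoder()
loss = nn.functional.mse_loss(model(images), images)
loss.backward()                                  # gradients for one training step
print(f"reconstruction loss: {loss.item():.4f}")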