Model compression and optimization of deep learning architectures for particle tracking at the LHC and beyond

This project will be performed in collaboration with the Maastricht Science Programme and Nikhef

Objective

With the anticipated upgrade of the Large Hadron Collider (LHC), particle physics is entering the High-Luminosity LHC (HL-LHC) era. This brings new challenges to particle track reconstruction, not only due to the extreme particle multiplicities but also because of high pile-up rates (i.e. multiple independent proton-proton collisions occurring within the same time window). These conditions make efficient models for particle tracking critical. Deep Learning (DL) approaches are especially promising, as they can reduce computational resource usage while maintaining or even improving physics performance.

How

You will focus on the compression and optimization of DL models used for tracking, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Graph Neural Networks (GNNs). You will explore techniques such as pruning, quantization, knowledge distillation, and low-rank decomposition, applying them to existing models and optimizing them for the reconstruction of low transverse momentum (low-pT) particle tracks.
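To give a flavour of two of these techniques, the sketch below applies magnitude pruning and low-rank (truncated SVD) decomposition to a single dense weight matrix using NumPy. The matrix, the 90% pruning target, and the rank are invented for illustration; real tracking models would apply such steps layer by layer inside a DL framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense weight matrix of one layer in a tracking model.
W = rng.standard_normal((256, 256))

# --- Magnitude pruning: zero out the smallest 90% of weights. ---
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
sparsity = 1.0 - np.count_nonzero(W_pruned) / W.size

# --- Low-rank decomposition: keep only the top-k singular components. ---
k = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]            # shape (256, k)
B = Vt[:k, :]                   # shape (k, 256)
W_approx = A @ B                # rank-k approximation of W

params_before = W.size
params_after = A.size + B.size  # store the two small factors instead of W
print(f"sparsity after pruning: {sparsity:.2f}")
print(f"low-rank parameter ratio: {params_after / params_before:.2f}")
```

In practice the pruned or factorized layer is fine-tuned afterwards to recover any lost physics performance, and the size/latency gain is traded off against tracking efficiency.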

Outputs

  1. Application and comparison of several model compression techniques
  2. Measurement of the impact of compression on model size, latency, and physics performance (e.g. tracking efficiency vs. fake-track rate)
  3. If the validation yields state-of-the-art results and time permits, publication and presentation of the results at an international conference
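The physics metrics mentioned above are conventionally defined as ratios of matched tracks: efficiency is the fraction of true particles matched to a reconstructed track, and the fake rate is the fraction of reconstructed candidates not matched to any true particle. A minimal sketch, with all counts invented for illustration:

```python
# Hypothetical matching counts for one simulated event sample.
n_true = 1000        # simulated charged particles
matched_true = 930   # true particles matched to a reconstructed track
n_reco = 1100        # reconstructed track candidates
fake_reco = 120      # candidates not matched to any true particle

efficiency = matched_true / n_true  # fraction of particles found
fake_rate = fake_reco / n_reco      # fraction of fakes among candidates

print(f"efficiency = {efficiency:.3f}, fake rate = {fake_rate:.3f}")
```

Comparing these two numbers before and after compression, alongside model size and inference latency, quantifies whether a compressed model preserves the physics performance of the original.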

Prerequisites

  1. Solid understanding of Machine Learning and Deep Learning fundamentals
  2. Programming skills in Python and/or C++