News

CERN Prepares for New Computing Challenges with Large Hadron Collider

May 9, 2018

By: Michael Feldman

Thanks to the discovery of the Higgs boson in 2012, CERN’s Large Hadron Collider (LHC) has probably become the most widely recognized science project on the planet. Now almost 10 years old, the 27-kilometer ring of superconducting magnets is the world’s largest and most capable particle accelerator. As such, it enables physicists to push the envelope of particle physics research.

Less well-known is the computing infrastructure that supports this effort: the Worldwide LHC Computing Grid (WLCG), a network of more than 170 computing centers spread across 42 countries. Because LHC experiments can involve processing petabytes of data at a time, the computational, networking, and storage challenges for the project are immense. And when the next-generation High-Luminosity LHC (HL-LHC) is launched in 2026, these challenges will become even more formidable.

To find out more about what this entails, we asked Dr. Maria Girone, CERN’s openlab CTO, to describe the high performance computing technology that undergirds the LHC work and talk about what kinds of hardware and software are being considered to support the future HL-LHC machine. Below is a lightly edited transcript of our conversation.

TOP500 News: Can you outline a typical computing workflow for an LHC application – for example, the workflow that resulted in the discovery of the Higgs boson particle?

Maria Girone: Workflows in high-energy physics typically involve a range of both data-intensive and compute-intensive activities. The collision data from the cathedral-sized detectors on the Large Hadron Collider needs to be filtered to select a few thousand interesting collisions from as many as one billion that may take place each second. The search for new phenomena is like looking for needles in enormous haystacks.
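
To make the scale of that selection step concrete, here is a minimal sketch in Python of the filtering idea, using toy events and an invented transverse-energy threshold. It is only an illustration of selecting a tiny fraction of collisions; the real trigger systems run in custom hardware and dedicated software farms, not in code like this.

```python
import random

# Hypothetical, highly simplified stand-in for the trigger idea described above:
# keep only the small fraction of collisions whose summed transverse energy
# exceeds a threshold. All values here are invented for illustration.

ENERGY_THRESHOLD = 500.0  # GeV, illustrative value only

def passes_trigger(event):
    """Keep an event only if its total transverse energy is high enough."""
    return sum(event["particle_et"]) > ENERGY_THRESHOLD

def generate_toy_event():
    """Produce a toy event: a handful of particles with random transverse energies."""
    n_particles = random.randint(2, 10)
    return {"particle_et": [random.expovariate(1 / 50.0) for _ in range(n_particles)]}

if __name__ == "__main__":
    events = (generate_toy_event() for _ in range(1_000_000))
    selected = [e for e in events if passes_trigger(e)]
    print(f"Selected {len(selected)} of 1,000,000 toy events")
```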

Once interesting collision events have been selected, the processing-intensive period begins. The particles from each collision in the detectors are carefully tracked, the physics objects are identified, and the energies of all the elements are measured with extreme precision.

At the same time, simulation takes place on the Worldwide LHC Computing Grid, the largest collection of computing resources ever assembled for a single scientific endeavor. The WLCG produces a massive sample of billions of simulated beam crossings, predicting the response of the detector and comparing it to known physics processes and potential new physics signals. In this analysis phase, the data from the detector is examined against predictions based on known background-only signals. When the data diverges from the background-only prediction in a statistically significant way, we declare a discovery.
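
As a rough illustration of that final step, the sketch below applies a standard counting-experiment significance formula to quantify how far an observed event count deviates from the expected background. The numbers are invented, and the actual LHC analyses use far more sophisticated statistical machinery than this.

```python
import math

# A minimal sketch of a counting-experiment significance estimate.
# 'b' is the number of events expected from known background processes,
# 'n' is the number actually observed; the excess n - b is tested.

def asimov_significance(n_observed, b_expected):
    """Approximate discovery significance (in sigma) for an observed excess
    over an expected background, using the common asymptotic formula
    Z = sqrt(2 * (n*ln(n/b) - (n - b)))."""
    if n_observed <= b_expected:
        return 0.0
    return math.sqrt(2.0 * (n_observed * math.log(n_observed / b_expected)
                            - (n_observed - b_expected)))

# Toy numbers, purely illustrative: 10,000 background events expected,
# 10,550 observed. A significance above ~5 sigma is the conventional
# threshold for claiming a discovery.
z = asimov_significance(10_550, 10_000)
print(f"Significance: {z:.1f} sigma")
```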

TOP500 News: Do any of the experiments in the LHC project currently use deep learning or some other form of AI?

Girone: Neural networks and machine-learning techniques have been used in high-energy physics for many years. Optimization techniques, such as boosted decision trees, have been widely used in analysis. The field is now looking to expand the use of deep-learning and AI techniques based on the progress made by industry in these areas.

There is potential for applications throughout the data-selection and processing chain, which could increase the efficiency and performance of the physics searches. Other areas we are exploring include object identification based on 3D image-recognition techniques, improved simulation using adversarial networks, better monitoring via anomaly-detection techniques, and optimized resource use through machine-learning algorithms.
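
As a hedged illustration of the boosted-decision-tree approach mentioned above, the following sketch trains a gradient-boosted classifier to separate toy "signal" from "background" samples using scikit-learn. The features are random stand-ins rather than real detector quantities, and this is not the experiments' own analysis tooling.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy dataset: background events centred at 0, signal events shifted slightly.
rng = np.random.default_rng(42)
n_per_class = 5000
background = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 4))
signal = rng.normal(loc=0.5, scale=1.0, size=(n_per_class, 4))

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Boosted decision trees: an ensemble of shallow trees trained sequentially.
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_train, y_train)

print(f"Test accuracy on toy data: {bdt.score(X_test, y_test):.2f}")
```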

TOP500 News: How are the computing challenges for the Large Hadron Collider different from typical HPC simulations?

Girone: The computing challenges of the LHC differ from typical HPC applications in the structure of the problem, the time-scale of the program and the number of contributors to the code. Whether processing a data event or producing simulations, each collision event can be treated independently. This means that the application lends itself to simple parallelization across many nodes.
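
That trivial parallelism can be illustrated with a short Python sketch: each event is handled by an independent worker, with no communication between events. The worker pool here is only a stand-in for the grid's batch systems, and the per-event function is a placeholder.

```python
from multiprocessing import Pool

# Because each collision event can be treated independently, a simple worker
# pool (standing in for grid batch jobs) scales out with no inter-event
# communication.

def reconstruct(event_id):
    """Placeholder for per-event reconstruction; returns a toy summary."""
    # In reality this would run tracking, clustering, etc. for one event.
    return event_id, sum(i * i for i in range(1000))

if __name__ == "__main__":
    event_ids = range(10_000)
    with Pool() as pool:
        results = pool.map(reconstruct, event_ids)
    print(f"Reconstructed {len(results)} independent events")
```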

With the LHC program running over multiple decades, there is a need for the software to be continuously improved. Occasionally, big components are reworked entirely, and there is also a lot of legacy code and services to support.

For many applications used in high-energy physics, several hundred people may well have contributed to the code base over many years. Traditional HPC simulations are often developed by much smaller groups of contributors, with more specific expertise in this area. Our use of code developed by very large numbers of contributors makes it challenging to reach the level of optimization often achieved with other HPC codes.

TOP500 News: Given that the computing grid is spread around the world, what types of challenges are encountered with regard to sharing the large datasets associated with LHC work?

Girone: Data management has been a consistent area of development in LHC computing. We move petabytes per day and all the data needs to be monitored for consistency. We have become leaders in moving data using global networks. In the last few years, we have augmented the traditional techniques for moving and replicating data to provide real-time remote access to data files across the globe. Our global data-access model has helped to optimize the use of processing and storage resources, as well as making it possible to use commercial cloud and HPC resources in an opportunistic manner.
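
One small piece of the consistency monitoring Girone describes can be sketched as checksum verification of a replicated file. This is only an illustration of the idea, with hypothetical paths, and is not the grid's actual data-management middleware.

```python
import hashlib

# Illustrative checksum-based consistency check for a replicated file.
# Paths and the helper names are hypothetical; real grid tooling operates
# at petabyte scale with dedicated services.

def file_checksum(path, algorithm="sha256", chunk_size=1 << 20):
    """Compute a checksum by streaming the file in chunks."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_replica(source_path, replica_path):
    """Return True if the replica's checksum matches the source's."""
    return file_checksum(source_path) == file_checksum(replica_path)

# Hypothetical usage:
# ok = verify_replica("/data/site_a/run123.dat", "/data/site_b/run123.dat")
```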

TOP500 News: Can you describe what the High-Luminosity LHC will be able to do that cannot currently be done with the present-day LHC?

Girone: With the HL-LHC, about five to ten times more beam crossings will take place compared to today. Each of these crossings will result in about five times as many individual proton-proton collisions. This increase will help us to search for rarer signals and measure rare phenomena more precisely.
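
Taking the quoted figures at face value, and assuming the two factors simply multiply, the combined growth in collisions to process can be estimated with a back-of-the-envelope calculation:

```python
# Back-of-the-envelope combination of the factors quoted above:
# 5-10x more beam crossings, each with ~5x more proton-proton collisions.
crossing_factor_low, crossing_factor_high = 5, 10
collisions_per_crossing_factor = 5

low = crossing_factor_low * collisions_per_crossing_factor
high = crossing_factor_high * collisions_per_crossing_factor
print(f"Combined increase in collisions: roughly {low}x to {high}x")
```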

TOP500 News: As you look ahead to the High Luminosity LHC, what emerging technologies in hardware and software do you think are the most promising?

Girone: Looking forward to the HL-LHC, there are many interesting new technologies. Continued improvements in networking technologies will help us to keep distributing data efficiently. We are also exploring various types of accelerators: we have programs with GPUs and FPGAs that have the potential to dramatically improve the performance of our computing systems.

For software, better optimization and code modernization also hold great promise. Finally, new techniques like advanced data analytics and deep learning have the potential to change how analysis and reconstruction are performed, thus enabling us to process more data more efficiently.

Dr. Girone will discuss the computing challenges at CERN in greater depth during the opening keynote of the 2018 ISC High Performance Conference, which will take place on June 24-28 in Frankfurt, Germany. Her keynote address is scheduled for Monday, June 25.

Images: Maria Girone; CERN IT center.  © CERN