g-2 optimizes DAQ with GPUs on the OSG

With the Muon g-2 experiment now taking data, it is important to optimize data collection for the physics at hand. The g-2 collaboration has written a GPU-based simulation of the muon precession component of the anomalous magnetic moment measurement. The simulation is used to test the Q-method, or charge integration, analysis of the anomalous precession frequency ω_a.

In the Q-method, multiple fills are summed into a single flush, and the flushes are then summed to produce a positron time spectrum. The CUDA-based code simulates this quickly by generating a large number of flushes in parallel on a GPU. The code has been used to test methods for handling systematic errors from pileup, fast rotation, and pedestal variations, and the results of these simulations have been valuable in determining the best set of parameters to use in the online data acquisition system.
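The sketch below illustrates the flush-summing idea in CUDA: each GPU thread builds one flush by drawing positron hit times from an exponential muon decay modulated by the ω_a oscillation and histogramming them, and the host then sums the flushes into a single time spectrum. This is a minimal illustration, not the collaboration's actual code; all names, parameter values, and the rejection-sampling approach are assumptions made for the example.

    // Minimal sketch of parallel flush generation (illustrative only).
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    constexpr int   kNFlushes      = 1024;   // flushes generated in parallel
    constexpr int   kFillsPerFlush = 100;    // fills summed into one flush
    constexpr int   kHitsPerFill   = 1000;   // positrons drawn per fill
    constexpr int   kNBins         = 4096;   // time-spectrum bins
    constexpr float kTimeWindow    = 700.0f; // fill length in microseconds
    constexpr float kTauMu         = 64.4f;  // dilated muon lifetime (us)
    constexpr float kOmegaA        = 1.439f; // anomalous precession freq (rad/us)
    constexpr float kAsymmetry     = 0.4f;   // decay asymmetry

    __global__ void simulate_flushes(unsigned long long seed, unsigned int* spectra)
    {
        int flush = blockIdx.x * blockDim.x + threadIdx.x;
        if (flush >= kNFlushes) return;

        curandState rng;
        curand_init(seed, flush, 0, &rng);

        // Each thread owns one flush histogram, so no atomics are needed.
        unsigned int* my_spectrum = spectra + (size_t)flush * kNBins;

        for (int fill = 0; fill < kFillsPerFlush; ++fill) {
            for (int hit = 0; hit < kHitsPerFill; ++hit) {
                // Draw a decay time from the exponential, then accept/reject
                // against the (1 + A cos(omega_a t)) modulation.
                float t = -kTauMu * logf(curand_uniform(&rng));
                if (t >= kTimeWindow) continue;
                float w = 0.5f * (1.0f + kAsymmetry * cosf(kOmegaA * t));
                if (curand_uniform(&rng) > w) continue;
                int bin = (int)(t / kTimeWindow * kNBins);
                if (bin >= kNBins) bin = kNBins - 1;
                my_spectrum[bin] += 1;
            }
        }
    }

    int main()
    {
        unsigned int* d_spectra = nullptr;
        size_t bytes = (size_t)kNFlushes * kNBins * sizeof(unsigned int);
        cudaMalloc(&d_spectra, bytes);
        cudaMemset(d_spectra, 0, bytes);

        int threads = 128;
        int blocks  = (kNFlushes + threads - 1) / threads;
        simulate_flushes<<<blocks, threads>>>(1234ULL, d_spectra);
        cudaDeviceSynchronize();

        // Sum all flushes on the host into the final positron time spectrum.
        std::vector<unsigned int> spectra((size_t)kNFlushes * kNBins);
        cudaMemcpy(spectra.data(), d_spectra, bytes, cudaMemcpyDeviceToHost);
        std::vector<unsigned long long> total(kNBins, 0);
        for (int f = 0; f < kNFlushes; ++f)
            for (int b = 0; b < kNBins; ++b)
                total[b] += spectra[(size_t)f * kNBins + b];

        printf("counts in first bin: %llu\n", total[0]);
        cudaFree(d_spectra);
        return 0;
    }

Because each flush is statistically independent, the problem maps naturally onto one GPU thread per flush, which is what makes the large speed-up over a serial simulation possible.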

While this simulation is much faster than the full-fledged Geant-based simulation, it still takes many days to produce a full dataset of 10^11 positrons. By running the simulation on the Open Science Grid nodes with GPUs, g-2 was able to simulate the entire expected Muon g-2 dataset in about 2.5 hours. This fast turnaround time will allow them to perform much more extensive tests of the Q-method systematics, which will inform choices governing how g-2 data are acquired and analyzed.

The GPU nodes on the OSG that g-2 used for this campaign are available to all FIFE users via jobsub. There are instructions in the most recent FIFE roadmap talk (see https://indico.fnal.gov/event/15555/contribution/1/material/slides/1.pdf), and you can always contact FIFE for additional support.
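For orientation, a submission requesting a GPU slot might look roughly like the example below. This is only a sketch based on common jobsub/HTCondor usage; the experiment name, lifetime, and script path are placeholders, and the authoritative options are in the slides linked above.

    jobsub_submit -G gm2 \
        --resource-provides=usage_model=OFFSITE \
        --lines='+RequestGPUs=1' \
        --expected-lifetime=8h \
        file:///path/to/run_gpu_simulation.sh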

–Wes Gohn and Ken Herner