MicroBooNE data processing for Neutrino 2016

MicroBooNE began collecting Booster Neutrino Beam data on Oct. 15, 2015. The optical trigger system was commissioned on Feb. 10, 2016, and MicroBooNE has been collecting optically triggered data since then.

Fig. 1

Fig. 1 shows the volume of MicroBooNE data cataloged in SAM; the increased rate of data storage in early April corresponds to the reprocessing campaign.

MicroBooNE has recently been engaged in several data reconstruction and Monte Carlo generation campaigns aimed at producing results for the Neutrino 2016 conference (July 4-9, 2016).

Monte Carlo generation for Neutrino 2016 (MCC7) began in early February 2016. A new version of the reconstruction program was released in early April. Over the subsequent weeks, all raw data going back to the start of the optical trigger (130 TB), as well as all existing MCC7 Monte Carlo samples (30 TB), were reconstructed with the new version.

Much of MicroBooNE’s Monte Carlo simulation was done off site on the Open Science Grid (OSG).

Fig. 2

Fig. 2 shows MicroBooNE’s CPU usage by OSG site; the peak in February corresponds to the start of the MCC7 simulation campaign.

MicroBooNE has faced a particular challenge in running its GEANT simulation, as these jobs required very large amounts of memory (up to 8 GB).

Running large-memory jobs on the grid requires either specially configured large-memory batch nodes or allocating multiple batch slots to a single batch job. A team from the Scientific Computing Division (SCD) studied the problem and was able to reduce the peak memory usage of MicroBooNE’s GEANT simulation from 8 GB to about 3.6 GB (Fig. 3).

Fig. 3

Fig. 3 shows the peak memory usage for MicroBooNE production jobs simulating neutrino interactions with GENIE along with cosmic rays from CORSIKA. Consultants from SCD helped reduce the peak usage from 8 GB to about 3.6 GB.
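As an aside on how this kind of peak usage can be tracked, the minimal sketch below wraps a job's command line and reports the peak resident set size of the child process using Python's standard resource module. The script name (peak_memory.py) and the example command in the comment are hypothetical illustrations, not MicroBooNE's actual production tooling.

    #!/usr/bin/env python
    # Minimal sketch: run a command as a child process and report its
    # peak resident set size (RSS), e.g. to verify a memory reduction
    # like the 8 GB -> 3.6 GB improvement described above.
    import resource
    import subprocess
    import sys

    def run_and_report(cmd):
        # Run the command to completion as a child process.
        status = subprocess.call(cmd)
        # ru_maxrss covers all waited-for children; on Linux it is
        # reported in kilobytes.
        peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
        print("peak RSS: %.2f GB" % (peak_kb / 1024.0 / 1024.0))
        return status

    if __name__ == "__main__":
        # Hypothetical usage: python peak_memory.py lar -c g4_sim.fcl
        sys.exit(run_and_report(sys.argv[1:]))

A wrapper like this runs on any Linux batch node with no extra dependencies, which makes it convenient for spot-checking memory behavior across grid sites.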

– Herbert Greenlee