Author Archive

With the Muon g-2 experiment now taking data, it’s important to optimize data collection for the physics at hand. The g-2 collaboration has written a GPU-based simulation of the muon precession that underlies the anomalous magnetic moment measurement. The simulation is used to test the Q-method, or charge integration, analysis of ωa.
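The idea behind the Q-method can be sketched in a few lines: rather than counting individual positron pulses above an energy threshold, one integrates the total charge deposited in each time bin, and the ωa oscillation survives in the integrated charge. The sketch below is purely illustrative (the frequency, lifetime, asymmetry, and energy scale are assumed round numbers, not experiment values):

```python
import numpy as np

# Hypothetical sketch of the Q-method (charge integration).
# All numbers here are illustrative, not g-2 experiment values.

rng = np.random.default_rng(0)
omega_a = 1.44   # rad/us, approximate anomalous precession frequency
tau = 64.4       # us, time-dilated muon lifetime
t = np.arange(0.0, 700.0, 0.149)  # calorimeter time bins (us)

# Decay-positron rate follows the familiar exponential "wiggle":
#   N(t) = N0 * exp(-t/tau) * (1 + A * cos(omega_a * t + phi))
rate = 1e4 * np.exp(-t / tau) * (1 + 0.35 * np.cos(omega_a * t + 0.5))
counts = rng.poisson(rate)

# Pulse counting would select hits above threshold (not shown).
# Q-method: integrate total charge per time bin instead, so no
# per-pulse reconstruction is needed; the omega_a oscillation
# remains visible in the binned charge.
mean_energy = 1.0  # arbitrary charge units per positron
charge = counts * mean_energy
```

Fitting `charge` versus `t` with the wiggle function above would then extract ωa from the integrated-charge spectrum.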

The Continuous Integration system continues to be improved, and new features are being added to meet users’ code-testing needs. Users have several ways to keep testing their code and make sure that changes integrate into the existing codebase without breaking anything.

Following a very successful discussion at the previous FIFE Workshop, the FIFE Group has decided to start holding semiannual Roadmap discussions with experiment Offline and Production Coordinators. The goal of the Roadmap discussion is to both inform experiments and gather feedback about strategic infrastructure changes and computing service modifications. These workshops will replace the annual… More »

Experiments need ever-increasing computing capabilities and this trend is expected to continue.  The HEPCloud project is dedicated to meeting these needs as efficiently and cost-effectively as possible. Recently, GPGrid and Fifebatch went through a transition to better align the computing cluster with HEPCloud’s efforts.

PNFS Dos and Don’ts

A while back a user, let’s call him “Ken”, was trying to get some work finished on a very compressed timescale. It involved running a script that would generate some job scripts and stage files to dCache, and then submit jobs that take about one hour each. It was a well-tested workflow that followed FIFE… More »

Earlier this year the OSG deployed its new grid accounting system, GRACC. Developed by Fermilab with OSG collaborators, GRACC aims to provide a more flexible and scalable (and faster!) accounting system than its predecessor, Gratia.

A well-designed database can be a strong workhorse for an experiment. However, if not built for the future, that workhorse will age and become a detriment to analysis. As experiments scale up production during their lifecycles, adding more and faster CPUs, they require the same level of performance from the database. But that database can… More »

For the fifth year, experimenters and members of Scientific Computing Division (SCD) gathered for the annual FIFE Workshop. The workshop focus was divided between discussions of the FIFE roadmap on the first day and extensive tutorials on the second day. The workshop had more than 65 attendees from across all Frontiers (Intensity, Cosmic, and Energy)… More »

Currently, the batch system is not properly reporting the CPU time used by jobs. As a result, efficiency metrics for jobs are unavailable. We will update this post as soon as the issue is resolved.
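For reference, job CPU efficiency is conventionally computed as CPU time divided by wall-clock time times allocated cores, which is why a missing CPU-time figure makes the metric impossible to report. A minimal sketch (the function name and argument units are our own, not a batch-system API):

```python
def cpu_efficiency(cpu_time_s, wall_time_s, cores=1):
    """Fraction of the allocated CPU a job actually used.

    cpu_time_s  -- total CPU seconds consumed by the job
    wall_time_s -- wall-clock seconds the job ran
    cores       -- number of cores allocated to the job
    """
    if wall_time_s <= 0 or cores <= 0:
        raise ValueError("wall time and cores must be positive")
    return cpu_time_s / (wall_time_s * cores)

# A job that used 3 CPU-hours over 4 wall-clock hours on one core
# is 75% efficient:
eff = cpu_efficiency(3 * 3600, 4 * 3600)  # -> 0.75
```

Without the CPU-time numerator from the batch system, this ratio cannot be computed, hence the gap in the metrics.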