Best in Class
The experiment with the most opportunistic hours on OSG between Oct. 1, 2017 and Dec. 1, 2017 was NOvA with 1.765 million hours.

Read more
The most efficient big non-production user on GPGrid who used more than 100,000 hours for successful jobs since Oct. 1, 2017 was Igor Tropin, with 99.7% efficiency.
Read more
The most efficient experiments on GPGrid that used more than 100,000 hours since Oct. 1, 2017 were MARS (98%) and MINOS (95%).
Read more
This newsletter is brought to you by:
- Shreyas Bhat
- Vito Di Benedetto
- Dave Dykstra
- Lisa Giacchetti
- Wes Gohn
- Ken Herner
- Mike Kirby
- Tanya Levshina
- Parag Mhashilkar
- Kevin Retzke
We welcome articles you might want to submit. Please email fife-support@fnal.gov.
Feature Articles
There was an interesting trend in the US nuclear power industry through the 1990s and 2000s: despite no new plants being built, net electricity generation increased almost continuously, gaining around 2% per year even as some plants were being shut down. Instead of building new plants, utilities were finding ways to generate more from the existing infrastructure. In much the same way, scientific computing is working to make better use of the existing grid computing resources we have, so even as budget constraints limit how much new physical hardware we can purchase, we can continue to expand our effective capacity and support the increasing demand for computing.
Read more
With the Muon g-2 experiment now taking data, it is important to optimize data collection based on the physics at hand. The g-2 team has written a GPU-based simulation of the muon precession component of the measurement of the muon anomalous magnetic moment. The simulation is used to test the Q-method, or charge-integration, analysis of the anomalous precession frequency ωa.
Read more
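To give a flavor of what a charge-integration analysis involves (this is an illustrative sketch, not the experiment's GPU code), the Python example below histograms total calorimeter energy per time bin and fits the standard five-parameter decay "wiggle" to extract ωa; the function names, bin width, and starting values are placeholder assumptions.

```python
# Illustrative sketch of the Q-method idea: integrate calorimeter energy per
# time bin (rather than counting hits above threshold) and fit the wiggle.
import numpy as np
from scipy.optimize import curve_fit

def wiggle(t, n0, tau, a, omega_a, phi):
    """Standard five-parameter muon-decay 'wiggle' function."""
    return n0 * np.exp(-t / tau) * (1.0 + a * np.cos(omega_a * t + phi))

def q_method_fit(times_us, energies, bin_width_us=0.149):
    """Histogram total energy vs. time, then fit for omega_a (all inputs in microseconds)."""
    edges = np.arange(0.0, times_us.max(), bin_width_us)
    charge, _ = np.histogram(times_us, bins=edges, weights=energies)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Rough starting values: dilated muon lifetime ~64 us, precession period ~4.37 us.
    p0 = [charge.max(), 64.4, 0.3, 2.0 * np.pi / 4.37, 0.0]
    popt, pcov = curve_fit(wiggle, centers, charge, p0=p0)
    return popt, pcov
```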
Because of the planned removal of Network Attached Storage (BlueArc) mounts from worker nodes, all experiments and projects will be expected to distribute their software to worker nodes with CVMFS. Many already do, but the remaining ones will now need to transition to CVMFS. This article is for them.
Read more
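As a hypothetical example of the kind of pre-flight check a job wrapper could perform after the transition, the Python sketch below tests whether a CVMFS repository is mounted and readable on a worker node; the repository name and subdirectory are illustrative, not a prescription.

```python
# Illustrative pre-flight check: confirm a CVMFS repository is reachable
# before the job tries to set up its software from it.
import os
import sys

def cvmfs_available(repo="fermilab.opensciencegrid.org", subdir="products"):
    """Return True if the repository is mounted and the given subdirectory exists."""
    mount_point = os.path.join("/cvmfs", repo)
    try:
        # Listing the directory forces autofs to mount the repository on demand.
        os.listdir(mount_point)
    except OSError:
        return False
    return os.path.isdir(os.path.join(mount_point, subdir))

if __name__ == "__main__":
    if not cvmfs_available():
        sys.exit("CVMFS repository not available on this worker node")
```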
The Continuous Integration system continues to be improved, with new features added to meet user needs for code testing. There are different ways for users to keep testing their code to make sure that changes integrate into the existing code without breaking anything.
Read more
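As an illustration of what "testing their code" can look like in practice, the snippet below is a minimal, self-contained Python unit test of the sort a CI job might run on every change; the function and test names are invented for the example.

```python
# Illustrative only: a small unit test that a CI build could run automatically
# to catch changes that break existing behavior.
import unittest

def passing_fraction(passed, total):
    """Fraction of events passing a selection; the function under test."""
    if total <= 0:
        raise ValueError("total must be positive")
    return passed / total

class TestPassingFraction(unittest.TestCase):
    def test_normal_case(self):
        self.assertAlmostEqual(passing_fraction(25, 100), 0.25)

    def test_rejects_empty_sample(self):
        with self.assertRaises(ValueError):
            passing_fraction(1, 0)

if __name__ == "__main__":
    unittest.main()
```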
The HEPCloud program had a very productive 2017, successfully delivering several milestones targeted for this year!
Since early this year, the team has been working on designing a new Decision Engine (DE) based on a framework architecture that can be extended to support future needs. The DE is an intelligent decision support system and is the heart of provisioning in HEPCloud. The DE uses information from several sources, such as job queues, monitoring systems like Graphite, finances, and allocations at NERSC, to make intelligent decisions about facility expansion. In November, the HEPCloud team successfully demonstrated the newly developed DE. As part of the demo, the DE expanded the HEPCloud facility by provisioning over 1,400 resources on AWS and at NERSC. Currently, we are focusing on adding more functionality to the DE and hardening it to run at production scale.
Read more
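To give a flavor of what a decision-support loop of this kind does (this is a toy sketch, not the actual Decision Engine framework), the Python example below combines facts from hypothetical information sources and returns a bounded provisioning request; all names, numbers, and the cost model are assumptions.

```python
# Toy sketch of the decision-engine idea: gather facts from several sources,
# then decide how many additional resources to request, bounded by demand,
# remaining budget, and a facility-wide cap.
from dataclasses import dataclass

@dataclass
class Facts:
    idle_jobs: int           # e.g. from the job queues
    running_slots: int       # e.g. from a monitoring system such as Graphite
    budget_remaining: float  # e.g. from financial or allocation accounting

def decide_expansion(facts, max_new_slots=1400, cost_per_slot_hour=0.05):
    """Return how many new slots to provision for the next cycle."""
    demand = max(facts.idle_jobs - facts.running_slots, 0)
    affordable = int(facts.budget_remaining / cost_per_slot_hour)
    return min(demand, affordable, max_new_slots)

# Example: 3000 idle jobs, 1800 slots already running, $500 left in the budget.
print(decide_expansion(Facts(idle_jobs=3000, running_slots=1800, budget_remaining=500.0)))
```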
About FIFE

FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.
FIFE takes the collective experience from current and past experiments to provide options for designing offline computing for experiments. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.
Please click HERE for detailed documentation on the FIFE services and tools.