FIFE Notes – April 2016

Best in Class

Experiment with the most opportunistic hours Feb. – March 2016

The experiment with the most opportunistic hours on OSG between Feb. 1, 2016 and March 31, 2016 was Mu2e with 4,804,996 hours.

Read more

Most efficient big non-production users Feb. – March 2016

The most efficient big non-production user on FermiGrid who used more than 100,000 hours since Feb. 1, 2016 was Willis K. Sakumoto with 100% efficiency.

Read more

Most efficient experiments Feb. – March 2016

The most efficient experiments on FermiGrid that used more than 100,000 hours since Feb. 1, 2016 were CDF (100%) and MU2E (85.75%).

Read more
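As a rough illustration (this is not the FermiGrid accounting code, and the figures above come from the accounting system itself), "efficiency" for grid jobs is conventionally the ratio of CPU time actually used to wall-clock time occupied:

```python
# Hypothetical sketch of how CPU efficiency percentages like those
# above are conventionally defined: CPU hours / wall-clock hours.
# Numbers in the example are made up for illustration.

def efficiency(cpu_hours, wall_hours):
    """Return CPU efficiency as a percentage of wall-clock hours."""
    if wall_hours <= 0:
        raise ValueError("wall_hours must be positive")
    return 100.0 * cpu_hours / wall_hours

print(round(efficiency(85750, 100000), 2))   # 85.75
print(round(efficiency(100000, 100000), 2))  # 100.0
```

A job that keeps its allocated CPU busy for the full duration of its slot scores 100%; time spent waiting on I/O or idle lowers the ratio.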

This newsletter is brought to you by:

  • Dave Dykstra
  • Gabriele Garzoglio
  • Burt Holzman
  • Bo Jayatilaka
  • Mike Kirby
  • Arthur Kreymer
  • Katherine Lato
  • Tanya Levshina

We welcome articles you might want to submit. Please email

The complete material (for viewing offline) is available in the following formats:

Feature Articles

2016 Open Science Grid all-hands meeting

Every spring, the entire Open Science Grid (OSG) community, consisting of resource owners and operators, users, and staff, gathers at the annual OSG all-hands meeting. The 2016 OSG all-hands meeting was held from Monday, March 14, through Thursday, March 17, at Clemson University in Clemson, SC, thanks in large part to Jim Bottum, the CIO and vice provost for technology at Clemson. As befits a vehicle for distributed high-throughput computing, the OSG is a highly distributed organization, with a community spread across the US. The all-hands meeting therefore offers one of the few opportunities for face-to-face interaction within this community. Highlights of the past year in the OSG were noted, including passing the threshold of 1 billion CPU hours per year of production and the OSG's role in providing part of the computational infrastructure for the LIGO experiment, which recently announced the observation of gravitational waves.
Read more

HEP Cloud: How to add thousands of computers to your data center in a day

Throughout any given year, the HEP community's demand for computing resources is not constant. It follows cycles of peaks and valleys driven by holiday schedules, conference dates and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks, such as potential over-provisioning. Grid federations like the Open Science Grid offer opportunistic access to excess capacity so that no cycle goes unused. However, as the appetite for computing grows, so does the need to maximize cost efficiency by developing a model that dynamically provisions resources only when they are needed. To address this issue, the HEP Cloud project was launched by the Scientific Computing Division in June 2015.
Read more

DCAFI moving forward

The Distributed Computing Access with Federated Identities (DCAFI) project is moving forward on schedule and should be ready to start migrating the first experiment in June. For those of you who are unfamiliar with it, these are the motivations for the project:
  • Dependency on Kerberos makes it difficult for non-Fermilab scientists to access our grid resources remotely, obstructing the lab's goal of being an international laboratory.
  • Fermilab's Kerberos Certificate Authority (KCA) server loses support in September 2016, forcing us to find a replacement Certificate Authority for grid access.
  • Asking users to manage their own certificates is a burden we avoided with KCA-based grid access, and we want to continue to avoid it.
Read more

AFS transition

We will be turning off the Fermilab AFS servers in early May this year because the sort of worldwide file sharing once unique to AFS is now provided by the Web. Click here for details of the migration shutdown.

Read more

Click here for an archive of past FIFE Notes.

About FIFE


Fife, Scotland: photo courtesy K. Lato

FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.

FIFE draws on the collective experience of current and past experiments to provide options for designing offline computing for experiments. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.

Please click HERE for detailed documentation on the FIFE services and tools.