FIFE Notes – June 2017

FIFE Notes on Vacation

FIFE Notes is on vacation. We’ll see you again in October!

Read more

Best in Class

Experiment with the most opportunistic hours April-June 2017

The experiment with the most opportunistic hours on OSG between April 1, 2017 and June 1, 2017 was NOvA with 3,158,273 hours.
Read more

Most efficient big non-production users April-June 2017

The most efficient big non-production user on GPGrid who used more than 100,000 hours for successful jobs since April 1, 2017 was Jacob Todd with 98.3% efficiency.
Read more

Most efficient experiments April-June 2017

The most efficient experiments on GPGrid that used more than 100,000 hours since April 1, 2017 were MARS (94%) and MINOS (93%).
Read more

This newsletter is brought to you by:

  • Shreyas Bhat
  • Lisa Giacchetti
  • Ken Herner
  • Bo Jayatilaka
  • Mike Kirby
  • Tanya Levshina
  • Andrew Norman
  • Margaret Votava

We welcome articles you might want to submit. Please email

Feature Articles

GPGrid Efficiency Policy

Efficiency Threshold Reference Table

  Role         Memory   CPU   Success rate
  Analysis     15%      35%   50%
  Production   15%      35%   50%
  POMS         15%      35%   50%
Job clusters with efficiency below these thresholds will be tagged as inefficient and the submitter will be contacted through email to diagnose and potentially modify their workflow. Total wall time for all jobs in a cluster must be greater than 500 hours to generate a warning email.
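The policy above can be sketched in a few lines of Python. This is only an illustrative sketch, not the actual monitoring code: the threshold values come from the reference table, while the function name and the cluster field names (`memory_eff`, `cpu_eff`, `success_rate`, `total_wall_hours`) are assumptions made for this example.

```python
# Hypothetical sketch of the GPGrid inefficiency-warning rule described above.
# Thresholds are taken from the reference table; all field names are assumed.
THRESHOLDS = {"memory": 0.15, "cpu": 0.35, "success_rate": 0.50}
MIN_WALL_HOURS = 500  # clusters at or below this total wall time never trigger a warning

def should_warn(cluster):
    """Return True if a job cluster would be tagged as inefficient
    and its submitter contacted by email."""
    if cluster["total_wall_hours"] <= MIN_WALL_HOURS:
        return False
    return (cluster["memory_eff"] < THRESHOLDS["memory"]
            or cluster["cpu_eff"] < THRESHOLDS["cpu"]
            or cluster["success_rate"] < THRESHOLDS["success_rate"])
```

For example, a 600-wall-hour cluster with 10% memory efficiency would be flagged, while the same cluster under 500 total wall hours would not, regardless of its efficiency.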
Read more

Reminder: Bluearc unmounting

Nearly all forms of scientific computing at Fermilab require some form of non-volatile storage. While the primary storage format for scientific data at Fermilab is tape-backed mass storage systems (MSS, consisting of Enstore and dCache), a variety of other storage solutions are available, depending on the type of scientific computing that needs to be accomplished. Network attached storage (NAS), which at Fermilab primarily means BlueArc systems, provides a good POSIX-compliant storage platform for interactive computing. It does not, however, provide a robust platform for large-scale parallel access from grid jobs. Furthermore, NAS space is not easily accessible from off-site computing, such as jobs run on remote grid sites and clouds via GPGrid or the HEPCloud interface.
Read more

DUNE Workshop Review

As membership of the DUNE collaboration approaches a thousand scientists from around the world, one of the challenges the experiment faces is how to simulate the DUNE and ProtoDUNE detectors, and how to analyze the data that these simulations will produce. But if you are a new student or postdoc who has just joined DUNE, where do you get started? The DUNE simulation code is daunting even for veteran scientists, let alone for students who have only a few hot summer months in Chicago to make a difference on the world's leading neutrino experiment before returning to their quiet university towns.
Read more

Singularity on the OSG

Have you heard about Singularity? You should probably wait until 2040 to see it, but meanwhile, the OSG and FIFE teams are working hard to introduce Singularity to improve users' experience on the grid. When running jobs on the grid, one issue users encounter is that their test environment, for example on an interactive node, and the grid environment may differ enough that jobs that worked in testing fail on the grid. Once these jobs are running, though, Fermilab and other sites currently use gLExec to make sure the jobs run as the user who submitted them. However, this can also cause numerous issues that put jobs into a held state.
Read more

FERRY – Frontier Experiments RegistRY

Have you ever wondered what happens when a new postdoc joins an experiment, or when someone you're collaborating with wants to run a production workflow? By now, you're probably used to accessing ServiceNow, navigating through pretty complicated choices, selecting an appropriate form, and submitting the request. Do you want to know what happens next? Probably not…
Read more

Click here for the archive of past FIFE Notes.

About FIFE


FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.

FIFE takes the collective experience from current and past experiments to provide options for designing offline computing for experiments. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.

Please click HERE for detailed documentation on the FIFE services and tools.