FIFE Notes – June 2016

Best in Class

Most efficient experiments April – May 2016

The most efficient experiments on GPGrid that used more than 100,000 hours since April 1, 2016 were CDMS (98.27%) and CDF (97.67%). 
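For readers new to these reports, the efficiency quoted here is essentially the ratio of CPU time used to wall-clock time consumed, expressed as a percentage. A minimal sketch of that arithmetic follows (the function name is ours, and the exact Gratia accounting definition used for the GPGrid reports may differ in detail):

    # Sketch of the efficiency arithmetic: CPU hours divided by wall-clock
    # hours, as a percentage. The exact accounting definition used for the
    # GPGrid reports may differ in detail.
    def efficiency(cpu_hours, wall_hours):
        """Return CPU efficiency as a percentage."""
        return 100.0 * cpu_hours / wall_hours

    # Example: 98,270 CPU hours over 100,000 wall-clock hours -> 98.27%
    print("%.2f%%" % efficiency(98270, 100000))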


Read more

Most efficient big non-production users April – May 2016

The most efficient big non-production user on GPGrid who used more than 100,000 hours since April 1, 2016 was Willis K. Sakumoto with 100% efficiency. The number of users with efficiency above 90% has doubled since March! Read more

Experiment with the most opportunistic hours April – May 2016

The experiment with the most opportunistic hours on OSG between April 1, 2016 and May 31, 2016 was NOvA with 1,362,980 hours.


Read more


This newsletter is brought to you by:

  • Paola Buitrago
  • Herbert Greenlee
  • Bo Jayatilaka
  • Mike Kirby
  • Arthur Kreymer
  • Katherine Lato
  • Tanya Levshina
  • Kevin Retzke

We welcome articles you might want to submit. Please email fife-support@fnal.gov.

Feature Articles

MINOS computing on the OSG

Computing in the MINOS/MINOS+ experiment has evolved greatly in the eleven years since the experiment started taking data in the NuMI beam (April 2005). The scale has increased from the 50-core FNALU batch system to the 15,000 cores of FermiGrid/GPGrid. As MINOS prepares to stop taking data at the 2016 Fermilab summer shutdown, another change will be the use of the Fermibatch jobsub tools for opportunistic use of the Open Science Grid offsite. Read more
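For those who have not used jobsub, a submission targeting offsite OSG resources looks roughly like the sketch below, wrapped in Python. This is a hedged illustration: the group name, usage_model value and script path are placeholders, and the options supported by your jobsub installation should be checked against its documentation.

    # Hedged sketch: submitting a job to offsite OSG resources with the
    # jobsub tools, wrapped in Python via subprocess. The group name,
    # usage_model value and script path are illustrative placeholders.
    import subprocess

    cmd = [
        "jobsub_submit",
        "-G", "minos",                              # experiment group (placeholder)
        "--resource-provides=usage_model=OFFSITE",  # request offsite OSG slots
        "file:///path/to/analysis_script.sh",       # executable to run (placeholder)
    ]
    subprocess.check_call(cmd)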

FIFE workshop focuses on services and tutorials

The annual FIFE Workshop will take place on June 20 and 21 this year with a focus on introducing new services and tutorials for current services. The talks on Monday are directed toward experiment Offline Coordinators and Production groups, and the talks on Tuesday are directed toward analyzers. The structure was chosen to allow attendees to more efficiently identify the talks they have the most interest in, but everyone is welcome to join and contribute to all parts of the workshop. Read more

Fifemon

"There's a dashboard for that" is the unofficial motto of Fifemon, and to that end we are constantly collecting more data and producing new dashboards. Since the last update, we have added nearly 20 new dashboards, including high-level computing summaries, dCache and SAM monitoring, and troubleshooting guides. In addition to these new dashboards, we have made many improvements to the existing ones. Read on for a look at upcoming changes, features and upgrades, to learn how Fifemon is making an impact beyond Fermilab, and to see how we are working with the wider scientific computing community. Read more
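For the curious, monitoring stacks like Fifemon typically feed their dashboards from a time-series backend such as Graphite. Below is a minimal sketch of pushing one data point over Graphite's plaintext protocol; the host and metric path are hypothetical, and Fifemon's actual probes and naming scheme may differ.

    # Minimal sketch: sending one data point to a Graphite time-series
    # backend over its plaintext protocol ("<path> <value> <timestamp>\n").
    # The host and metric path are hypothetical placeholders.
    import socket
    import time

    GRAPHITE_HOST = "graphite.example.com"  # placeholder host
    GRAPHITE_PORT = 2003                    # standard plaintext-protocol port

    def send_metric(path, value, timestamp=None):
        """Send a single metric data point to Graphite."""
        ts = int(timestamp or time.time())
        line = "%s %f %d\n" % (path, value, ts)
        sock = socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT))
        try:
            sock.sendall(line.encode("ascii"))
        finally:
            sock.close()

    # e.g. report a running-job count for a hypothetical experiment
    send_metric("fifemon.example_experiment.jobs.running", 1250)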

MicroBooNE data processing for Neutrino 2016

MicroBooNE began collecting Booster Neutrino Beam data on Oct. 15, 2015. The optical trigger system was commissioned on Feb. 10, 2016, and MicroBooNE has been collecting optically triggered data since then.

 Fig. 1

Fig. 1 shows the volume of data in SAM, with an increased rate of data storage in early April corresponding to the reprocessing campaign.

MicroBooNE has recently been engaged in various data processing campaigns for data reconstruction and Monte Carlo generation aimed at producing results for the Neutrino 2016 conference (July 4-9, 2016). Read more
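Data volumes like those in Fig. 1 can be queried directly from SAM. Here is a hedged sketch using the samweb Python client; the experiment name and dimension query are illustrative, and method names should be checked against the samweb_client version you have installed.

    # Hedged sketch: querying SAM for files matching a dimension query
    # via the samweb Python client. The experiment name and dimension
    # string are illustrative placeholders.
    import samweb_client

    samweb = samweb_client.SAMWebClient(experiment="uboone")

    # Count raw-data files declared to SAM (real queries would add
    # further constraints, e.g. run ranges or time windows)
    dims = "data_tier raw"
    print(samweb.countFiles(dims))

    # Show a few matching file names
    for name in list(samweb.listFiles(dims))[:5]:
        print(name)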

Docker on HPC

The use of containers, such as Docker, could substantially reduce the effort required to create and validate new software product releases, since one build could be suitable both for grid machines (FermiGrid and OSG) and for any machine capable of running the Docker container. Read more
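As a concrete illustration of the build-once, run-anywhere idea, the sketch below launches a release inside a container. The image name, paths and binary are hypothetical placeholders, not an actual experiment release.

    # Hedged sketch: running one validated software build inside a Docker
    # container, so the same image can run on FermiGrid, OSG or a laptop.
    # The image name and paths are hypothetical placeholders.
    import subprocess

    cmd = [
        "docker", "run", "--rm",
        "-v", "/data/input:/input:ro",    # mount input data read-only
        "-v", "/data/output:/output",     # mount a writable output area
        "experiment/release:v1.0",        # hypothetical release image
        "/opt/experiment/bin/reco",       # hypothetical reconstruction binary
        "/input", "/output",
    ]
    subprocess.check_call(cmd)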

Components in experiments' workflow management system infrastructure

Recently, a group in SCD identified and mapped the components typically found in the workflow management system (WMS) infrastructure of HEP experiments. The fact-finding exercise resulted in a document that can be found in the CD DocDB: http://cd-docdb.fnal.gov/cgi-bin/ShowDocument?docid=5742. Beyond its initial goal of establishing a common vocabulary, this document is also useful for identifying gaps in the functionality provided by the infrastructure and potential services that could be enhanced to supply new or missing functionality. Read more

Experience in production services

Huge amounts of computing resources are needed to process the data coming out of Intensity Frontier detectors. Although they address different physics questions, most experiments have similar workflows and computing needs. The OPOS team and the FIFE project capitalize on these similarities with a set of tools and practices that incorporate lessons learned from previous experiments. I will briefly describe some of what I have witnessed during my time at Fermilab. Read more


Click here for the archive of past FIFE Notes.


About FIFE

fife2

Fife, Scotland: photo courtesy K. Lato

FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.

FIFE takes the collective experience from current and past experiments to provide options for designing offline computing for experiments. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.

Please click here for detailed documentation on the FIFE services and tools.