FIFE Notes – October 2016

Best in Class

Most efficient experiments August – September 2016

The most efficient experiments on GPGrid that used more than 100,000 hours since August 1, 2016 were CDMS (99.15%) and LArIAT (98.53%).

[Image: efficientexperiment_1610]

Read more

Most efficient big non-production users August – September 2016

The most efficient big non-production user on GPGrid who used more than 100,000 hours since August 1, 2016 was Tommaso Pajero with 99.1% efficiency.

Read more

Experiment with the most opportunistic hours August – September 2016

The experiment with the most opportunistic hours on OSG between August 1, 2016 and September 30, 2016 was mu2e with 2,252,507 hours.

[Image: opphours_1610]

Read more

Fifemon tips

Interested in seeing what the batch jobs for your SAM project are doing? Go to the SAM Project Summary dashboard and select your project name from the dropdown. 

[Image: fifemon_sam_1610]

We recently introduced a User Overview dashboard, which shows you at a ... Read more


This newsletter is brought to you by:

  • Mine Altunay
  • Shreyas Bhat
  • Lisa Giacchetti
  • Ken Herner
  • Robert Illingworth
  • Mike Kirby
  • Art Kreymer
  • Tanya Levshina
  • Anna Mazzacane
  • Marc Mengel
  • Kevin Retzke
  • Andrew Romero
  • Jeny Teheran

We welcome articles you might want to submit. Please email fife-support@fnal.gov.

Feature Articles

DCAFI Phase I close-out and Phase II prospects

The first phase of the Distributed Computing Access with Federated Identities (DCAFI) Project was successfully completed in August 2016. All Fermilab users and experiments have been transitioned to the new certificate service provided by the CILogon Basic Certificate Authority (CA). Thanks to the hard work of FIFE support personnel and the DCAFI project team, all of the activities in Phase I were completed on schedule and with minimal impact on the VOs' scientific tasks.

Read more

Batch computing direct access to BlueArc ending

We discussed plans for unmounting the BlueArc data areas from Fermigrid worker nodes in the December 2015 issue of the FIFE Notes. As noted in that article, the overall data rates needed on Fermigrid exceed the capacity of the current BlueArc NFS servers. We are removing all access to the BlueArc /*/data and /*/ana areas from Fermigrid worker nodes. Both direct NFS mounts and access via GridFTP with ifdh cp will be removed. On request, we will retain the ifdh cp path for a limited, specified time during the transition. New experiments like DUNE are being deployed to Fermigrid without worker node BlueArc data access. MINOS and MicroBooNE are unmounting their areas in October, with MINERvA following in November, and other experiments soon to be scheduled.

Read more
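For experiments planning their transition, the change might look like the following sketch. The paths shown are hypothetical examples, not actual experiment areas; the dCache (/pnfs) destination stands in for whatever staging area your experiment adopts:

```
# Hypothetical sketch of the BlueArc-to-dCache transition for a grid job.
# Paths are illustrative placeholders, not real experiment areas.

# Going away: reading input from a BlueArc /*/data area on a worker node
ifdh cp /myexp/data/inputs/run1234.root ./run1234.root

# Replacement: stage the input from dCache scratch space instead
ifdh cp /pnfs/myexp/scratch/users/myuser/run1234.root ./run1234.root
```

The ifdh cp command itself is unchanged; only the source path moves off BlueArc, so most job scripts need only a path update.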

POMS: handing control over to experiments

The Production Operations Management System (POMS) was initially developed for the OPOS group to help them effectively manage job submissions for multiple experiments.

Read more

StashCache speeds up data access

StashCache is an OSG service that aims to provide more efficient access to certain types of data across the Grid. Most jobs copy their input files all the way from Fermilab every time they run, which can be slow and inefficient. In some cases the files are reused many times – one example is the flux files used as input to GENIE simulations, where each individual job uses a random sub-selection from the entire dataset. When these jobs run opportunistically on grid sites, they are more efficient if the data can be fetched from somewhere close by. The StashCache project aims to help with this.
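As a rough sketch of the idea, a job can fetch a shared input through the OSG stashcp client, which pulls from the nearest cache rather than the origin at Fermilab. The repository path and file names below are hypothetical examples:

```
# Hypothetical sketch: fetch a GENIE flux file through StashCache.
# stashcp resolves the nearest cache automatically; the path is illustrative.
stashcp /osgconnect/public/myuser/genie_flux/flux_0001.root ./flux_0001.root
```

The first request at a site populates the local cache; subsequent jobs at that site read the same file from nearby instead of transferring it over the wide area again.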

Read more

News from ICHEP and CHEP

This past August saw a record number of physicists in Chicago for the International Conference on High Energy Physics. The 38th installment of this biennial conference featured several presentations by SCD members in not only the Computing and Data Handling track, but also in the Astroparticle, Detector R&D, Higgs, and Neutrino tracks. FIFE was especially ... Read more


Click here for an archive of past FIFE Notes.


About FIFE

 

[Image: fife_logo_lowres]

FIFE provides collaborative scientific data processing solutions for Frontier Experiments.

FIFE takes the collective experience from current and past experiments to provide options for designing offline computing for experiments. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.

Please click HERE for detailed documentation on the FIFE services and tools.