Best in Class
The most efficient experiments on GPGrid that used more than 100,000 hours since August 1, 2016 were CDMS (99.15%) and LArIAT (98.53%).
The most efficient non-production user on GPGrid who used more than 100,000 hours since August 1, 2016 was Tommaso Pajero, with 99.1% efficiency.
The experiment with the most opportunistic hours on OSG between August 1, 2016 and September 30, 2016 was mu2e with 2,252,507 hours.
Interested in seeing what the batch jobs for your SAM project are doing? Go to the SAM Project Summary dashboard and select your project name from the dropdown.
We recently introduced ...
This newsletter is brought to you by:
- Mine Altunay
- Shreyas Bhat
- Lisa Giacchetti
- Ken Herner
- Robert Illingworth
- Mike Kirby
- Art Kreymer
- Tanya Levshina
- Anna Mazzacane
- Marc Mengel
- Kevin Retzke
- Andrew Romero
- Jeny Teheran
We welcome articles you might want to submit. Please email firstname.lastname@example.org.
The first phase of the Distributed Computing Access with Federated Identities (DCAFI) Project was successfully completed in August 2016. All Fermilab users and experiments have been transitioned to the new certificate service provided by the CILogon Basic Certificate Authority (CA). Thanks to the hard work of FIFE support personnel and the DCAFI project team, all of the activities in Phase 1 were completed on schedule and with minimal impact on the VOs' scientific tasks.
StashCache is an OSG service that aims to provide more efficient access to certain types of data across the Grid. Most jobs end up copying their input files all the way from Fermilab every time they run, which can be slow and inefficient. In some cases, the files get reused multiple times – an example of this is the flux files used as input to GENIE simulations, where each individual job uses a random sub-selection from the entire dataset. When these jobs run opportunistically on grid sites, they would be more efficient if the data could be fetched from somewhere close by. The StashCache project aims to help with this.
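As a rough sketch of how a job might fetch input through StashCache rather than copying it from Fermilab every time: the OSG-provided `stashcp` client locates a nearby cache automatically, and data published through StashCache can also be exposed read-only via CVMFS. The paths, username, and file names below are hypothetical, for illustration only.

```shell
# Hypothetical job script fragment: stage one flux file via StashCache.
# "/osgconnect/public/myuser/flux/..." is an illustrative stash path,
# not a real dataset location.

# Option 1: stashcp picks the closest available cache and falls back
# to the origin server if no cache has the file yet.
stashcp /osgconnect/public/myuser/flux/flux_file_001.root ./flux_file_001.root

# Option 2: the same namespace exported through CVMFS, readable like a
# local file on sites that mount stash.osgstorage.org.
cp /cvmfs/stash.osgstorage.org/osgconnect/public/myuser/flux/flux_file_001.root .
```

Either route avoids a wide-area transfer from Fermilab on every job, which is where the efficiency gain for repeatedly reused inputs (such as GENIE flux files) comes from.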
This past August saw a record number of physicists in Chicago for the International Conference on High Energy Physics. The 38th installment of this biennial conference featured several presentations by SCD members in not only the Computing and Data Handling track, but also in the Astroparticle, Detector R&D, Higgs, and Neutrino tracks. FIFE was especially ...
FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.
FIFE takes the collective experience from current and past experiments to provide options for designing offline computing for experiments. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.
Please click HERE for detailed documentation on the FIFE services and tools.