Best in Class
The most efficient experiments on GPGrid that used more than 100,000 hours since June 1, 2016 were LArIAT (98.70%) and MINOS (95.96%).
The most efficient non-production user on GPGrid with more than 100,000 hours since June 1, 2016 was Jacob R. Todd, at 99.6% efficiency.
The experiment with the most opportunistic hours on OSG between June 1, 2016 and July 31, 2016 was NOvA with 659,139 hours.
Did you know Grafana has two themes: light (white background) and dark (black background)? You can pick your default theme in your User Profile.
Overwhelmed by the number of jobs showing up in a table? Check out the list of filters in the drop-down above the table to help narrow it down.
This newsletter is brought to you by:
- Mine Altunay
- Shreyas Bhat
- Lisa Giacchetti
- Mike Kirby
- Katherine Lato
- Tanya Levshina
- Anna Mazzacane
- Parag Mhashilkar
- Kevin Retzke
We welcome articles you might want to submit. Please email email@example.com.
For the fourth year in a row, the FIFE Project hosted a two-day workshop dedicated to improving scientific computing for Intensity and Cosmic Frontier experiments at Fermilab. The first day focused on new tools, resources, and a future roadmap (including a new logo) for the FIFE project; the second day consisted mostly of tutorials and best-practice talks and concluded with one-on-one expert consultations. All presentations are publicly available at https://indico.fnal.gov/conferenceOtherViews.py?view=standard&confId=12120. With more than 60 attendees present, the discussion was lively and included ideas about access to high-performance computing, GPUs, and other new architectures.
Since the last FIFE Newsletter there have been two Fifemon updates, v3.2 and v3.3. Notable new features include SAM project monitoring, a Grafana update, batch history, and much more.
The Continuous Integration (CI) project aims to reduce the human effort needed to verify each code release, and thus to cut down on wasted computing and human resources. The CI system is a set of tools, applications, and machines that lets users execute their validation tests with minimal effort. It is built on the open-source Jenkins toolkit, a powerful automation tool for complex software, together with an associated database, testing interfaces, and web facilities. More information (including links to detailed instructions on how to run CI tests) can be found at https://cdcvs.fnal.gov/redmine/projects/lar-ci/wiki
Experiments and collaborations already on-boarded are uBooNE, DUNE (35T), LArIAT, ArgoNeut (from the existing LAr-CI), NOvA, and art.
Other experiments have expressed interest and will soon be on-boarded.
Every physics experiment plans several years in advance. Accurately understanding the needs and defining the computational requirements is fundamental to the discovery and success of the experiment. The planning process must account for several unknowns, project technological advances with a reasonable level of approximation, and incorporate those projections into the plan. The challenge is even greater for experiments like DUNE and protoDUNE, with their wide international collaborations, that are still far from the data-taking phase. Amir Farbin, along with his DUNE/protoDUNE colleagues, is working on defining the computational requirements for distributed data and workflow management for the DUNE/protoDUNE collaboration.
The Distributed Computing Access with Federated Identities (DCAFI) Project has been moving full steam ahead this summer. The short-term goal of the project is to move all Fermilab users from the Fermi KCA, which is planned to be shut down at the end of September, to the new certificate service provided by the CILogon Basic CA. The long-term goal is more ambitious than that: making access to Fermilab easy and convenient for all Fermilab users, even those without Fermilab accounts. Read more
Fife, Scotland: photo courtesy K. Lato
FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.
FIFE draws on the collective experience of current and past experiments to provide options for designing offline computing. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.
Please click HERE for detailed documentation on the FIFE services and tools.