Best in Class
The experiment with the most opportunistic hours on OSG between Feb. 1, 2016 and March 31, 2016 was Mu2e with 4,804,996 hours.
The most efficient big non-production user on FermiGrid who used more than 100,000 hours since Feb. 1, 2015 was Willis K. Sakumoto with 100% efficiency.
The most efficient experiments on FermiGrid that used more than 100,000 hours since Feb. 1, 2016 were CDF (100%) and Mu2e (85.75%).
This newsletter is brought to you by:
- Dave Dykstra
- Gabriele Garzoglio
- Burt Holzman
- Bo Jayatilaka
- Mike Kirby
- Arthur Kreymer
- Katherine Lato
- Tanya Levshina
We welcome article submissions. Please email email@example.com.
Every spring, the entire Open Science Grid (OSG) community, consisting of resource owners and operators, users, and staff, gathers at the annual OSG all-hands meeting. The 2016 OSG all-hands meeting was held from Monday, March 14 through Thursday, March 17 at Clemson University in Clemson, SC, thanks in large part to Jim Bottum, the CIO and vice provost for technology at Clemson. As befits a vehicle for distributed high-throughput computing, the OSG is a highly distributed organization, with a community spread across the US. The all-hands meeting therefore offers one of the few opportunities for face-to-face interaction within this community. Highlights of the past year in the OSG were noted, including surpassing the threshold of 1 billion CPU hours per year of production and the OSG's role in providing part of the computational infrastructure for the LIGO experiment, which recently announced the observation of gravitational waves.
Throughout any given year, the HEP community's demand for computing resources is not constant. It follows cycles of peaks and valleys driven by holiday schedules, conference dates and other factors. Because of this, the classical method of provisioning these resources at provider facilities has drawbacks, such as potential over-provisioning. Grid federations like the Open Science Grid offer opportunistic access to the excess capacity so that no cycle goes unused. However, as the appetite for computing increases, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when they are needed. To address this issue, the HEP Cloud project was launched by the Scientific Computing Division in June 2015.
- Dependency on Kerberos makes it difficult for non-Fermilab scientists to access our grid resources remotely, obstructing our lab's goal of being an international laboratory.
- Fermilab's Kerberos Certificate Authority (KCA) server loses support in September 2016, forcing us to find a replacement certificate authority for grid access.
- Asking users to manage their own certificates places a burden on them that KCA-based grid access avoided, and we want to continue to avoid it.
We will be turning off the Fermilab AFS servers in early May this year, because the sort of worldwide file sharing once unique to AFS is now provided by the Web. Click here for details of the migration and shutdown.
FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.
FIFE takes the collective experience from current and past experiments to provide options for designing offline computing for experiments. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.
Please click HERE for detailed documentation on the FIFE services and tools.