FIFE Notes – December 2016

Best in Class

Most efficient experiments October – December 2016

The most efficient experiments on GPGrid that used more than 100,000 hours since October 1, 2016 were LArIAT (96.11%) and MINOS (95.5%).

[Figure: experiment efficiency map, December 1, 2016]
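
For reference, the efficiency quoted here is the ratio of CPU time actually consumed by the jobs to the wall-clock time they occupied their slots. A minimal sketch of that calculation in Python (the record layout is illustrative, not the actual Fifemon schema):

    # Illustrative only: per-experiment efficiency as CPU hours / wall hours.
    # The record layout below is hypothetical, not the real Fifemon schema.
    from collections import defaultdict

    def efficiency_by_experiment(job_records):
        cpu = defaultdict(float)
        wall = defaultdict(float)
        for rec in job_records:
            cpu[rec["experiment"]] += rec["cpu_hours"]
            wall[rec["experiment"]] += rec["wall_hours"]
        return {exp: 100.0 * cpu[exp] / wall[exp] for exp in wall if wall[exp] > 0}

    jobs = [
        {"experiment": "lariat", "cpu_hours": 9611.0, "wall_hours": 10000.0},
        {"experiment": "minos", "cpu_hours": 9550.0, "wall_hours": 10000.0},
    ]
    print(efficiency_by_experiment(jobs))  # {'lariat': 96.11, 'minos': 95.5}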

Read more

Most efficient big non-production users October – December 2016

The most efficient big non-production user on GPGrid who used more than 100,000 hours since October 1, 2016 was Konstantinos Vellidis with 98.7% efficiency.

Read more

Experiment with the most opportunistic hours October – December 2016

The experiment with the most opportunistic hours on OSG between October 1, 2016 and November 30, 2016 was mu2e with 3,607,577 hours.

[Figure: opportunistic wall hours by experiment, December 1, 2016]

Read more

What’s new in Fifemon

There are some new dCache dashboards in Fifemon.

[Figure: new dCache dashboards listed in Fifemon, December 2, 2016]

Read more


This newsletter is brought to you by:

  • Shreyas Bhat
  • Joe Boyd
  • Vito Di Benedetto
  • Lisa Giacchetti
  • Ken Herner
  • Burt Holzman
  • Mike Kirby
  • Tanya Levshina
  • Anna Mazzacane
  • Marc Mengel
  • Kevin Retzke

We welcome article submissions; please email fife-support@fnal.gov.

Feature Articles

How to make the most of your holiday break

How to get a whole bunch of jobs going while everyone else is sipping eggnog

While everyone enjoys a break from work this time of year, one thing that won’t be taking a break is grid computing. GPGrid will run at full capacity at all times, as will many of the usual offsite computing clusters. We encourage users to continue to submit jobs so that they can run over the holidays.
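
If you want to queue up work before signing off, something along the following lines keeps jobs flowing over the break. The group name, job count, and script path are placeholders, and the resource options shown are the commonly documented jobsub ones, so check the jobsub documentation for what your experiment actually uses:

    # Illustrative sketch of queueing a batch of grid jobs before the break.
    # "myexperiment" and holiday_job.sh are placeholders; consult the jobsub
    # documentation for the exact options your experiment uses.
    import subprocess

    def submit_holiday_jobs(group="myexperiment", n_jobs=500,
                            script="file:///path/to/holiday_job.sh"):
        cmd = [
            "jobsub_submit",
            "-G", group,        # experiment / VO group
            "-N", str(n_jobs),  # number of job instances to queue
            "--resource-provides=usage_model=DEDICATED,OPPORTUNISTIC,OFFSITE",
            script,
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        submit_holiday_jobs()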

Read more

HEPCloud doubles the size of CMS computing

High-energy physics experiments have an ever-growing need for computing, but not all experiments need all the cycles all the time. The need is driven by machine performance, experiment and conference schedules, and even new physics ideas. Computing facilities are purchased with the intention of meeting peak workload rather than the average, which drives up the overall computing cost for the facility and the experiment.
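
As a back-of-the-envelope illustration of why sizing for the peak is costly (the numbers below are hypothetical, not actual CMS or facility figures):

    # Hypothetical numbers only, to illustrate peak-versus-average provisioning.
    avg_demand = 50_000    # cores needed on a typical day
    peak_demand = 150_000  # cores needed during a reprocessing/conference crunch
    peak_fraction = 0.10   # fraction of the year the peak demand actually occurs

    # A facility sized for the peak sits mostly idle the rest of the year.
    utilization = (peak_fraction * peak_demand
                   + (1 - peak_fraction) * avg_demand) / peak_demand
    print(f"Utilization of a peak-sized facility: {utilization:.0%}")  # 40%

    # Sizing for the average and renting the difference during peaks (the
    # HEPCloud approach) keeps the owned hardware busy.
    rented = peak_fraction * (peak_demand - avg_demand) / avg_demand
    print(f"Average rented capacity, relative to owned capacity: {rented:.0%}")  # 20%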

Read more

MINOS running on Stampede

The HEP computing model is constantly evolving, and one change currently taking place is increased use of High Performance Computing (HPC) resources. These include supercomputing sites such as NERSC, as well as the Extreme Science and Engineering Discovery Environment (XSEDE). XSEDE is itself a collection of several HPC resources, including the Stampede cluster at the Texas Advanced Computing Center.

Read more

New developments in Continuous Integration (CI)

Since the first article appeared in the August 2016 edition of FIFE Notes, the Continuous Integration (CI) project has been implementing new features and on-boarding new experiments and collaborations. DUNE, GlideinWMS, MINERvA and GENIE are ready to try it out. NOvA is using CI extensively for their production releases. Alex Himmel, the NOvA production coordinator, discussed their experience at the October CS liaison meeting. "Continuous integration has been a major benefit to NOvA -- it allows us to catch issues one-by-one as they happen instead of all at once during an official production campaign. In just the last few months it has already saved us from many headaches," Himmel said.

Read more

Fifemon monitoring of data transfers

Back in about 2012, when we were designing the IFDH layer to insulate experimenters' code from the gory details of data handling and operating on the grid, I drew a diagram in which the ifdh copy utility logged every copy to a central logging facility and an agent of the monitoring system scraped those logs to provide counts, transfer rates, and so on. That idea never really got off the ground in the early versions of Fifemon, but the current implementation, which uses Elasticsearch tools to collect statistics from the logging data, has finally made it a reality. There are now two dashboards in Fifemon that provide a view of those logged copies. These dashboards are currently in pre-production; when they move to production, this post will be updated with the links.

Read more
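
The idea behind those dashboards, in miniature: each ifdh copy emits a log record, and the monitoring side rolls those records up into counts and transfer rates. A toy sketch follows; the log line format here is invented for illustration, and the real records collected in Elasticsearch are structured differently:

    # Toy aggregation of copy logs into per-experiment counts and transfer rates,
    # in the spirit of the Fifemon transfer dashboards. The log line format is
    # invented for illustration; real ifdh records in Elasticsearch differ.
    import re
    from collections import defaultdict

    LINE = re.compile(r"(?P<exp>\w+) copied (?P<bytes>\d+) bytes in (?P<secs>[\d.]+) s")

    def summarize(lines):
        stats = defaultdict(lambda: {"copies": 0, "bytes": 0, "seconds": 0.0})
        for line in lines:
            m = LINE.search(line)
            if not m:
                continue
            s = stats[m.group("exp")]
            s["copies"] += 1
            s["bytes"] += int(m.group("bytes"))
            s["seconds"] += float(m.group("secs"))
        return {exp: dict(s, MB_per_s=s["bytes"] / s["seconds"] / 1e6)
                for exp, s in stats.items()}

    logs = [
        "nova copied 2000000000 bytes in 40.0 s",
        "nova copied 1000000000 bytes in 25.0 s",
    ]
    print(summarize(logs))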


Click here for an archive of past FIFE Notes.


About FIFE

[FIFE logo]

FIFE provides collaborative scientific-data processing solutions for Frontier Experiments.

FIFE takes the collective experience from current and past experiments to provide options for designing offline computing. It is not a mandate from the Scientific Computing Division about how an experiment should or should not design its offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop.

Please click here for detailed documentation on the FIFE services and tools.