ProtoDUNE WMS workshop

Every physics experiment plans several years in advance. Accurately understanding the needs and defining the computational requirements is fundamental to the experiment's discovery potential and success. This planning process needs to account for several unknowns, project technological advancements with a reasonable level of approximation, and incorporate those projections into the plan. The process is even more challenging for experiments like DUNE and protoDUNE, with their wide international collaborations, that are still far from the data-taking phase. Amir Farbin, along with his DUNE/protoDUNE colleagues, is working on defining the computational requirements for distributed data and workflow management for the DUNE/protoDUNE collaboration.

To help our DUNE/protoDUNE colleagues, Amir Farbin and Oliver Gutsche organized a mini workshop on “DUNE Workflow and Distributed Data Management” on July 28 and 29 at Fermilab. This workshop was intended to:

  • Provide a first assessment of how DUNE will store, process, and analyze data on a wide array of resources that are not limited to Fermilab or a single administrative boundary.
  • Understand the requirements for protoDUNE, which will likely include seamless integration of Fermilab resources with CERN, the GRID, and possibly HPC and opportunistic resources.

The workshop was well attended by Workflow Management (WM) and Data Management (DM) experts from CMS, ATLAS, FIFE and the Fermilab SCD. The DM and WM experts discussed the various tools and services used by their experiments and shared their experiences to help DUNE/protoDUNE define its requirements. The workshop also helped FIFE identify gaps in its own infrastructure and plan how to close them, which will be extremely useful if DUNE/protoDUNE decides to use the FIFE WM and DM tools and services across its collaboration.

— Parag Mhashilkar