For the fourth year in a row, the FIFE Project hosted a two-day workshop dedicated to improving scientific computing for Intensity and Cosmic Frontier experiments at Fermilab. The first day focused on new tools, resources and a roadmap (including a new logo) for the future of the FIFE project, while the second day consisted mostly of tutorials and best-practice talks and concluded with one-on-one expert consultations. All presentations are publicly available at https://indico.fnal.gov/conferenceOtherViews.py?view=standard&confId=12120. With more than 60 attendees present, the discussion was lively and included ideas about access to high-performance computing, GPUs and other new architectures.
Mike Kirby opened the workshop by presenting the FIFE roadmap, including a new logo that represents the plan for new tools, resources and capabilities. Margaret Votava followed with a report on the state of the facilities. Her main message was the need to make better use of offsite resources, as experiment CPU-hour requests continue to increase while onsite resources are maintained at current levels. Ken Herner discussed ideas on batch management and procedures for on-boarding experiments to new services. Kevin Retzke gave the highlight talk of the workshop, showing the impressive improvements in batch monitoring available at https://fifemon.fnal.gov/. Several important new endeavors from the Scientific Computing Division were introduced: the Production Offline Management Service (POMS), the Continuous Integration Project, and improved art profiling tools that allow users to monitor and optimize the memory usage and CPU time of their modules. The afternoon finished with a discussion of future services, including HEPCloud, the security roadmap and software architecture.
Talks on best practices and tutorials filled the agenda for the second day. Best-practice presentations on jobsub, dCache access, the IF data handling client, SAM4Users, the File Transfer Service, and how to select an Open Science Grid site opened the morning session. During these talks, users received guidance on how best to incorporate these tools into their workflows for efficient resource utilization. In the afternoon, tutorials demonstrated how to submit jobs to the OSG, monitor submitted jobs, and use SAM tools to transfer datasets to those jobs. Attendees reported that the tutorials were extremely useful for new and seasoned users alike.
While the workshop was very successful, we have gathered feedback to help improve it for next year. We are considering separating the roadmap and new-services session from the best practices and tutorials, so that offline coordinators can focus on long-term planning and computing models while analyzers focus on tutorials and best practices. We are also exploring the possibility of offering tutorial and best-practice talks more than once a year. If you have comments or suggestions, please feel free to contact the organizers, Tanya Levshina (firstname.lastname@example.org) and Mike Kirby (email@example.com).
— Mike Kirby