There was an interesting trend in the US nuclear power industry through the 1990s and 2000s: despite no new plants being built, net electricity generation increased almost continuously, averaging around 2% per year even as some plants were shut down. Instead of building new plants, utilities found ways to generate more from the existing infrastructure. In much the same way, scientific computing is working to make better use of the grid computing resources we already have, so even as budget constraints limit how much new physical hardware we can purchase, we can continue to expand our effective capacity and support the increasing demand for computing.
With the Muon g-2 experiment now taking data, it’s important to optimize data collection based on the physics at hand. The g-2 collaboration has written a GPU-based simulation of the muon precession component of the muon anomalous magnetic moment measurement. The simulation is used to test the Q-method, or charge integration, analysis of ωa, the anomalous precession frequency.
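To illustrate the idea behind the Q-method: rather than counting individual positron pulses above an energy threshold (the T-method), the analysis integrates the total calorimeter charge in each time bin. The toy trace and function names below are invented for illustration and are not taken from the g-2 simulation itself.

```python
# Illustrative contrast between Q-method (charge integration) and
# T-method (pulse counting) on a toy calorimeter ADC trace.
# All names and numbers here are hypothetical.

def q_method(samples, pedestal=0.0):
    """Integrate total charge above pedestal across the trace."""
    return sum(max(s - pedestal, 0.0) for s in samples)

def t_method(samples, threshold):
    """Count pulses whose rising edge crosses an energy threshold."""
    count = 0
    above = False
    for s in samples:
        if s >= threshold and not above:
            count += 1
        above = s >= threshold
    return count

# Toy ADC trace: two pulses sitting on a small pedestal.
trace = [1, 1, 9, 4, 1, 1, 7, 3, 1]

print(q_method(trace, pedestal=1.0))  # total integrated charge: 19.0
print(t_method(trace, threshold=5))   # pulses counted: 2
```

The Q-method has different systematic sensitivities than pulse counting (no per-pulse threshold or pileup-identification step), which is why simulation is needed to validate it.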
Because of the planned removal of Network Attached Storage (BlueArc) mounts from worker nodes, all experiments and projects will be expected to distribute their software to worker nodes with CVMFS. Many already do, but the remaining ones will now need to transition to CVMFS. This article is for them.
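For a sense of what the transition looks like from the worker-node side, here is a hypothetical job-script fragment; the experiment name, repository path, and product version below are placeholders, not an actual repository.

```shell
# Hypothetical worker-node setup fragment (names are illustrative).
# Before: software was read from a BlueArc NAS mount on the node.
# After: the same products are served read-only through CVMFS.
export PRODUCTS=/cvmfs/myexperiment.opensciencegrid.org/products
source "$PRODUCTS/setup"                 # set up the experiment environment
setup myexperimentcode v1_0_0 -q prof    # UPS-style product setup (illustrative)
```

Because CVMFS repositories are mounted identically on every grid site, the same script works wherever the job lands, with no site-specific NAS paths.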
The Continuous Integration system continues to be improved, with new features added to meet users’ code-testing needs. Users have several ways to keep testing their code and make sure that changes integrate into the existing code base without breaking anything.
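As a minimal example of the kind of test a CI workflow can run on every commit, consider a small unit test; the function under test and its calibration constant are invented for illustration.

```python
# A minimal unit test of the sort a CI system runs automatically on
# each commit. The function under test is hypothetical.
import unittest

def scale_energy(adc_counts, calib=0.5):
    """Convert raw ADC counts to energy (arbitrary units)."""
    if adc_counts < 0:
        raise ValueError("ADC counts cannot be negative")
    return adc_counts * calib

class TestScaleEnergy(unittest.TestCase):
    def test_scaling(self):
        self.assertEqual(scale_energy(10), 5.0)

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            scale_energy(-1)

if __name__ == "__main__":
    unittest.main()
```

A CI system simply runs such tests after every push; if any assertion fails, the change is flagged before it reaches the shared code base.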
The HEPCloud program had a very productive 2017, successfully delivering several milestones targeted for this year!
Since early this year, the team has been working on designing a new Decision Engine (DE) based on a framework architecture that can be extended to support future needs. The DE is an intelligent decision support system and is the heart of provisioning in HEPCloud. It uses information from several sources, such as job queues, monitoring systems like Graphite, finances, and allocations at NERSC, to make intelligent decisions about facility expansion. In November, the HEPCloud team successfully demonstrated the newly developed DE. As part of the demo, the DE expanded the HEPCloud facility by provisioning over 1,400 resources on AWS and at NERSC. Currently, we are focusing on adding more functionality to the DE and hardening it to run at production scale. Read more
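The flavor of such a decision loop can be sketched in a few lines: collect facts from the information sources, then decide how many resources to request. The rule and parameter names below are purely illustrative; the real Decision Engine is a pluggable framework with far richer inputs.

```python
# Toy sketch of a decision-engine-style provisioning rule: cover the
# idle-job backlog, capped by the remaining budget or allocation.
# The inputs and the simple rule are illustrative only.

def decide(idle_jobs, running_slots, budget_slots_remaining):
    """Return how many new slots to provision."""
    demand = max(idle_jobs - running_slots, 0)
    return min(demand, budget_slots_remaining)

# Example: 2000 idle jobs, 600 slots already running,
# allocation allows at most 1400 more slots.
print(decide(2000, 600, 1400))  # -> 1400
```

In practice each input (queue depth, monitoring metrics, spending rate) comes from a separate source module, and the decision channel weighs them together rather than applying a single fixed rule.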