Carefully tailoring your resource requests will increase your job throughput
With partitionable slots now the norm on GPGrid, it's important to have a good understanding of your resource requirements. The memory, disk, and maximum runtime available in free job slots change as users submit jobs. As a result, there may be free slots with fewer resources than the defaults, leftovers from the way the cluster was partitioned at a given moment. There's nothing inherently wrong with these slots, and any job that fits within their limits can run there without problem.
When you submit jobs via jobsub_submit, the default memory, disk, and job runtime requests are 2 GB, 35 GB, and 23 hours 40 minutes, respectively. You can override any or all of these with the --memory, --disk, and --expected-lifetime options; otherwise you get the defaults. But if a default is bigger than you need, your job can't take advantage of the smaller-than-usual free resources just described.
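As a concrete sketch, a submission that requests only what a lightweight job actually uses might look like the following. The experiment group, resource values, and script path here are placeholders, not a prescription; check `jobsub_submit --help` for the exact option syntax your jobsub version accepts.

```shell
# Hypothetical jobsub_submit invocation requesting 1 GB of memory,
# 10 GB of disk, and an 8-hour expected lifetime instead of the defaults.
# "yourexperiment" and the script URI are placeholders.
jobsub_submit -G yourexperiment \
  --memory=1GB \
  --disk=10GB \
  --expected-lifetime=8h \
  file:///path/to/your/analysis_script.sh
```

A job submitted this way can match free slots that a default-sized request would have to pass over.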
As an analogy, imagine it's lunchtime and you walk into a restaurant. If you ask for a table (the default), you may well be seated at a table for four. That is fine when the restaurant isn't busy. But if it's busy and you insist on a table, you may have to wait, unless you're willing to sit at the bar.
The same holds true in a computing environment. If you have a workflow that consistently uses fewer resources than the default requests, it makes sense to lower your requests so that your jobs can take advantage of small slots that a default-sized request can't use, just like sitting at the bar. By doing so, your jobs will start faster.
In addition to slots with a smaller-than-usual amount of memory and/or disk, there can be free resources on both GPGrid and OSG that don't satisfy the default run time request of nearly 24 hours. The vast majority of jobs submitted by the FIFE experiments take less than twelve hours. By lowering your expected runtime via the --expected-lifetime option to more closely match your actual runtime, you can again gain access to resources unavailable to jobs submitted with the default request. If you do have longer jobs that take closer to 24 hours or more, you can often still use resources with shorter allowed run times by dividing a large job into smaller pieces that run at the same time, then lowering your lifetime request to fit the resources available.
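For example, rather than one long job, you might submit several shorter jobs that each process a slice of the input. A hedged sketch, where the group, lifetime, job count, and script are placeholders:

```shell
# Hypothetical example: instead of one ~20-hour job with the default
# lifetime, submit 4 jobs of ~5 hours each with a 6-hour request.
# The -N option submits multiple copies of the job; each copy can
# inspect its process number at runtime to pick its slice of the input.
jobsub_submit -G yourexperiment \
  -N 4 \
  --expected-lifetime=6h \
  file:///path/to/your/split_job.sh
```

Each of these shorter jobs can land in slots that the single long job could never use.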
Consider the analogy of eight people going out to eat who don't care whether they all sit together. Asking for a table for eight when the restaurant is busy may result in a long wait; two tables of four, or a few seats at the bar, will get everyone eating much sooner.
Suppose you have an 8-core, 16 GB glidein running on a worker node with 23 hours of time remaining for jobs. It is running one job that requested one CPU and 2 GB of memory. Seven more such jobs could fit into this glidein, but jobs that request the default lifetime (nearly 24 hours) can't run there, because the glidein only has 23 hours remaining. This can lead to significant overall inefficiency, as the same thing could be playing out on many worker nodes at once.
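The accounting in this scenario can be sketched in a few lines (the numbers are taken from the example above):

```python
# Worked sketch of the glidein scenario above.
glidein_cpus, glidein_mem_gb, hours_left = 8, 16, 23
job_cpus, job_mem_gb = 1, 2
running = 1  # one such job is already occupying the glidein

free_cpus = glidein_cpus - running * job_cpus
free_mem_gb = glidein_mem_gb - running * job_mem_gb

# How many more identical jobs fit, limited by both CPU and memory?
extra_jobs = min(free_cpus // job_cpus, free_mem_gb // job_mem_gb)
print(extra_jobs)  # 7

# A job requesting the default ~24-hour lifetime cannot start here:
default_lifetime_h = 24
print(default_lifetime_h <= hours_left)  # False
```

Seven perfectly good slots sit idle purely because of an oversized lifetime request.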
Please contact the FIFE group at email@example.com if you need help determining what resources your job requires.
You shouldn’t ask for less than you need, but tailoring your request to match your resource needs as closely as possible will result in higher throughput for you and for your collaborators, as it will make the cluster as efficient as possible. Faster job completion enables faster analysis completion, faster publication, and more time to enjoy your lunch.
– Ken Herner & Katherine Lato