NBI-Slurm
generate tool-specific wrappers for bioinformatics tools and (c) an
energy-aware scheduling mode --- ``eco mode'' --- that automatically
defers flexible jobs to off-peak periods, helping research institutions
reduce their computational carbon footprint without requiring users to
manually plan submission times.
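The deferral logic behind such an eco mode can be sketched in a few lines: compute the next off-peak window and, if the job arrives during peak hours, translate that into SLURM's standard \texttt{--begin} option on \texttt{sbatch}. The window boundaries (22:00--06:00) and function names below are illustrative assumptions, not the tool's actual implementation.

```python
from datetime import datetime

# Assumed off-peak window for illustration only: 22:00 to 06:00.
OFFPEAK_START_HOUR = 22
OFFPEAK_END_HOUR = 6


def next_offpeak_start(now: datetime) -> datetime:
    """Return the earliest off-peak start time at or after `now`."""
    if now.hour >= OFFPEAK_START_HOUR or now.hour < OFFPEAK_END_HOUR:
        return now  # already off-peak: start immediately
    # Peak hours: wait until the off-peak window opens later today.
    return now.replace(hour=OFFPEAK_START_HOUR, minute=0,
                       second=0, microsecond=0)


def eco_sbatch_args(now: datetime) -> list[str]:
    """Extra sbatch arguments that defer a flexible job to off-peak hours.

    Uses SLURM's real `--begin=<time>` option; an empty list means the
    job can start right away.
    """
    start = next_offpeak_start(now)
    if start == now:
        return []
    return ["--begin=" + start.strftime("%Y-%m-%dT%H:%M:%S")]
```

For example, a job submitted at 13:00 would receive \texttt{--begin=...T22:00:00}, while a job submitted at 23:00 would be passed through unchanged.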
\section{Statement of Need}\label{statement-of-need}
HPC clusters are indispensable in modern research, particularly in the
life sciences where large-scale sequence analyses, genome assemblies,
and statistical models demand resources beyond a desktop workstation.
SLURM has become the dominant workload manager in this space
\citep{slurm_adoption}, yet its interface presents a steep learning
curve. Users must learn a verbose \texttt{sbatch} scripting syntax,
understand resource unit conventions (memory in megabytes, time in
\texttt{D-HH:MM:SS} format), manage job dependencies manually, and
repeat boilerplate directives across every submission script.
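A minimal submission script illustrates this boilerplate; the job name, partition, resource values, and the \texttt{my\_assembler} command below are placeholders, not prescribed values.

```shell
#!/bin/bash
#SBATCH --job-name=assembly          # directives repeated in every script
#SBATCH --partition=long             # assumed partition name
#SBATCH --mem=64000                  # memory requested in megabytes
#SBATCH --time=1-12:00:00            # walltime in D-HH:MM:SS format
#SBATCH --cpus-per-task=8
#SBATCH --output=assembly_%j.log     # %j expands to the job ID
#SBATCH --dependency=afterok:12345   # dependencies tracked by job ID, by hand

my_assembler --threads "$SLURM_CPUS_PER_TASK" input.fastq
```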
Workflow managers such as Snakemake \citep{molder2021snakemake} and
Nextflow \citep{di2017nextflow} address this at the pipeline level by
abstracting SLURM as an execution backend, but they require users to