PSA for my #HPC cluster operators out there. A new CVE was announced for #MUNGE, a popular authentication mechanism used in #Slurm
https://github.com/dun/munge/security/advisories/GHSA-r9cr-jf4v-75gh
Beep, Beep - I am your friendly #Snakemake release announcement bot.
There is a new release of the Snakemake executor for #SLURM on #HPC systems. Its version now is 2.2.0!
Give us some time, and you will automatically find the plugin on #Bioconda and #Pypi.
If you want to discuss the release, you will find the maintainers here on Mastodon!
@rupdecat and @johanneskoester
If you discover any issues, please report them on https://github.com/snakemake/snakemake-executor-plugin-slurm/issues.
See https://github.com/snakemake/snakemake-executor-plugin-slurm/releases/tag/v2.2.0 for details. Here is the header of the changelog:
𝑅𝑒𝑙𝑒𝑎𝑠𝑒 𝑁𝑜𝑡𝑒𝑠 (𝑝𝑜𝑠𝑠𝑖𝑏𝑙𝑦 𝑎𝑏𝑟𝑖𝑑𝑔𝑒𝑑):
𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬
* job name prefix: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/408
𝐁𝐮𝐠 𝐅𝐢𝐱𝐞𝐬
* cancel on multicluster: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/401
* decreasing job query verbosity: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/405
* extracting job id from convoluted output, if necessary: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/375
* quoting piped substring with shlex: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/402
The author built a VS Code extension for managing SLURM jobs, saving the time otherwise spent switching between terminals. The extension shows active jobs (CPU/GPU/SSD), job history, lets you cancel by jobId, pin jobs, access stdout/stderr, and submit new jobs. Feedback on improvements is welcome. #VSCode #SLURM #devtools #technology #smartmachines
Beep, Beep - I am your friendly #Snakemake release announcement bot.
There is a new release of the Snakemake executor for #SLURM on #HPC systems. Its version now is 2.1.0!
Give us some time, and you will automatically find the plugin on #Bioconda and #Pypi.
If you want to discuss the release, you will find the maintainers here on Mastodon!
@rupdecat and @johanneskoester
If you discover any issues, please report them on https://github.com/snakemake/snakemake-executor-plugin-slurm/issues.
See https://github.com/snakemake/snakemake-executor-plugin-slurm/releases/tag/v2.1.0 for details. Here is the header of the changelog:
𝑅𝑒𝑙𝑒𝑎𝑠𝑒 𝑁𝑜𝑡𝑒𝑠 (𝑝𝑜𝑠𝑠𝑖𝑏𝑙𝑦 𝑎𝑏𝑟𝑖𝑑𝑔𝑒𝑑):
𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬
* pass a shell script to sbatch rather than the command directly: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/380
𝐁𝐮𝐠 𝐅𝐢𝐱𝐞𝐬
* jobstep plugin with new functionality required: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/393
* quoting for SLURM partition: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/399
* status hang when no active jobs: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/396
* updated gpu model string - may contain a period, now: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/390
𝐃𝐨𝐜𝐮𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧
* updated for passing commands as script: https://github.com/snakemake/snakemake-executor-plugin-slurm/issues/392
🚀 Introducing slurm-quota — open source CPU & GPU time quota accounting and enforcement for #Slurm clusters.
✔️ User & account quotas
✔️ Clear usage visibility
✔️ Lightweight
✔️ MIT license
🔗 https://rackslab.io/en/blog/slurm-quota-announce/
#HPC #AI #OpenSource #ClusterComputing #GPUs #Accounting #Quota
What Does #Nvidia’s Acquisition of #SchedMD Mean for #Slurm?
Slurm was developed at LLNL in the early 2000s to replace commercial workload management software for #HPC clusters and #supercomputers.
Addison Snell, the CEO of Intersect360, says the acquisition of SchedMD makes sense considering the emerging focus on developing #AI models to accelerate scientific discovery and engineering, and the need to integrate traditional HPC workloads and new AI ones.
https://www.hpcwire.com/2026/01/06/what-does-nvidias-acquisition-of-schedmd-mean-for-slurm/
Does anyone here use the #Slurm `nss_slurm` extension?
I see in Slurm's documentation an example of how to enable the extension, but I can't find any examples of the referenced /etc/nss_slurm.conf file anywhere...
The source code of the extension seems to indicate that it is a very simple file, e.g.
```
NodeName=<nodename>
SlurmdSpoolDir=<dir>
```
but I just want an example to ensure that my assumptions are correct 😅
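For context, my full working assumption (node name and spool dir below are placeholders, and per the Slurm docs the module also has to be enabled in /etc/nsswitch.conf) looks like:
```
# /etc/nsswitch.conf — query nss_slurm before local files
passwd: slurm files
group:  slurm files

# /etc/nss_slurm.conf — point the module at this node's slurmd spool
NodeName=node001
SlurmdSpoolDir=/var/spool/slurmd
```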
Nvidia’s acquisition of SchedMD, the company behind Slurm, is a strategic move that goes far beyond GPUs.
Slurm (Simple Linux Utility for Resource Management) is the de facto open-source workload manager for large-scale GPU clusters, widely used in supercomputing centers, AI labs, hyperscalers, and cloud GPU operators. It plays a critical role in ...
#NVIDIA #AIInfrastructure #OpenSource #Slurm #HPC #GPUs #AITraining #CloudComputing #tech #DataCenters
#Nvidia pledges more openness as it slurps up #Slurm
The chip giant revealed yesterday that it had acquired #SchedMD, the key developer behind Slurm, which Nvidia described in a statement as "an #opensource workload management system for high-performance computing (#HPC) and #AI."
Nvidia insisted it will also support "a diverse hardware and software ecosystem, so customers can run heterogeneous clusters with the latest Slurm innovations."
https://www.theregister.com/2025/12/16/nvidia_slurm_nemotron/
Still makes me quite nervous!
RE: https://mstdn.social/@TechCrunch/115725841067428721
This is pretty awful. The reason for anti-monopoly regulations is to prevent one bully from taking over the schoolyard. The bully (#nvidia) buying #slurm is flatly bad. It will not improve code quality, and it will be turned into a paid product.
Another situation where forking into a public project (gurm? good slurm?) may be the best bet. It's only a matter of time.
Nvidia bulks up open source offerings with an acquisition and new open AI models
The sound you hear is coming from #HPC data centers as jaws drop and the research community gasps at the news that #NVIDIA acquired SchedMD - the #SLURM developer.
SLURM is very popular for job scheduling on high performance compute clusters. Let us hope this will at least keep #CoPilot from being bolted onto SLURM. After a recent experience with CoPilot on GitHub, I’m questioning some of my life choices.
RE: https://social.heise.de/@heiseonlineenglish/115725340534744039
In any kind of normal timeline this would have been stopped by regulators. Instead we let a single company become the sole gatekeeper for everything in #HPC .
SchedMD are the makers of #slurm , the (only) Open Source cluster scheduler, and used by a large majority of cluster computers. Nvidia bought Mellanox a few years ago, the maker of Infiniband, the main network technology for clusters.
Slurm-web and RacksDB now officially support SLES 15 & 16 and openSUSE, with packages and complete documentation.
🔗 Read more in the blog post: https://rackslab.io/en/blog/sles-opensuse-support/