#Slurm

2026-02-10

PSA for my #HPC cluster operators out there. A new CVE was announced for #MUNGE, a popular authentication mechanism used in #Slurm

github.com/dun/munge/security/

Snakemake Release Robot (snakemake@fediscience.org)
2026-02-09

Beep, Beep - I am your friendly #Snakemake release announcement bot.

There is a new release of the Snakemake executor for #SLURM on #HPC systems. Its version now is 2.2.0!

Give us some time, and you will automatically find the plugin on #Bioconda and #Pypi.

If you want to discuss the release, you will find the maintainers here on Mastodon!
@rupdecat and @johanneskoester

If you discover any issues, please report them on github.com/snakemake/snakemake.

See github.com/snakemake/snakemake for details. Here is the header of the changelog:
Release Notes (possibly abridged):
Features

* job name prefix: github.com/snakemake/snakemake

Bug Fixes

* cancel on multicluster: github.com/snakemake/snakemake
* decreasing job query verbosity: github.com/snakemake/snakemake
* extracting job id from convoluted output, if necessary: github.com/snakemake/snakemake
* quoting piped substring with shlex: github.com/snakemake/snakemake
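
To illustrate the shlex fix above in general terms, here is a minimal sketch of the quoting technique (illustrative only, not the plugin's actual code; the command string is made up): a piped sub-command is wrapped with shlex.quote() before being embedded in a larger shell invocation, so the outer shell does not interpret the pipe.

```python
import shlex

# Hypothetical piped sub-command that must be passed through as one argument.
inner = "zcat sample.fastq.gz | head -n 400"

# shlex.quote() wraps the string in safe single quotes, so the outer shell
# hands it to `bash -c` verbatim instead of splitting it at the pipe.
wrapped = f"bash -c {shlex.quote(inner)}"
print(wrapped)  # bash -c 'zcat sample.fastq.gz | head -n 400'
```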

[Image: Snakemake HPC logo for Mastodon]
2026-01-31

The author built a VS Code extension for managing SLURM jobs, saving the time otherwise spent switching between terminals. It shows active jobs (CPU/GPU/SSD), job history, and lets you cancel a jobId, pin jobs, access stdout/stderr, and submit new jobs. The author welcomes feedback on how to improve it. #VSCode #SLURM #devtools #côngnghệ #máythôngminh

reddit.com/r/SideProject/comme

Snakemake Release Robot (snakemake@fediscience.org)
2026-01-20

Beep, Beep - I am your friendly #Snakemake release announcement bot.

There is a new release of the Snakemake executor for #SLURM on #HPC systems. Its version now is 2.1.0!

Give us some time, and you will automatically find the plugin on #Bioconda and #Pypi.

If you want to discuss the release, you will find the maintainers here on Mastodon!
@rupdecat and @johanneskoester

If you discover any issues, please report them on github.com/snakemake/snakemake.

See github.com/snakemake/snakemake for details. Here is the header of the changelog:
Release Notes (possibly abridged):
Features

* pass a shell script to sbatch rather than the command directly (see the sketch below the changelog): github.com/snakemake/snakemake

Bug Fixes

* jobstep plugin with new functionality required: github.com/snakemake/snakemake
* quoting for SLURM partition: github.com/snakemake/snakemake
* status hang when no active jobs: github.com/snakemake/snakemake
* updated gpu model string - may contain a period, now: github.com/snakemake/snakemake

Documentation

* updated for passing commands as script: github.com/snakemake/snakemake
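
As a rough illustration of the "pass a shell script to sbatch" feature listed above (a sketch of the general pattern under assumed names, not the plugin's implementation): the job command is written into a small batch script and the script path is submitted, instead of passing the command inline via sbatch --wrap.

```python
import subprocess
import tempfile

# Hypothetical job command; a real workflow engine would generate this.
job_command = "srun my_analysis --input data/reads.fastq"

# Write the command into a temporary batch script ...
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/bin/bash\n")
    f.write(job_command + "\n")
    script_path = f.name

# ... and submit the script itself rather than using `sbatch --wrap="<command>"`.
subprocess.run(["sbatch", script_path], check=True)
```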

[Image: Snakemake HPC logo for Mastodon]
2026-01-08

🚀 Introducing slurm-quota — open source CPU & GPU time quota accounting and enforcement for #Slurm clusters.

✔️ User & account quotas
✔️ Clear usage visibility
✔️ Lightweight
✔️ MIT license

🔗 rackslab.io/en/blog/slurm-quot

#HPC #AI #OpenSource #ClusterComputing #GPUs #Accounting #Quota

Benjamin Carr, Ph.D. 👨🏻‍💻🧬 (BenjaminHCCarr@hachyderm.io)
2026-01-06

What Does #Nvidia’s Acquisition of #SchedMD Mean for #Slurm?
Slurm was developed at LLNL in the early 2000s to replace commercial workload management software for #HPC clusters and #supercomputers.
Addison Snell, the CEO of Intersect360, says the acquisition of SchedMD makes sense considering the emerging focus on developing #AI models to accelerate scientific discovery and engineering, and the need to integrate traditional HPC workloads and new AI ones.
hpcwire.com/2026/01/06/what-do

2025-12-19

Does anyone here use the #Slurm `nss_slurm` extension?

Slurm's documentation shows an example of how to enable the extension, but I can't find an example of the referenced /etc/nss_slurm.conf file itself anywhere...

The source code of the extension seems to indicate that it is a very simple file, e.g.

```
NodeName=<nodename>
SlurmdSpoolDir=<dir>
```

but I just want an example to ensure that my assumptions are correct 😅
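
For reference, here is what I am assuming a concrete /etc/nss_slurm.conf would look like, with made-up values (node name and spool directory are site-specific):

```
# Hypothetical example only; adjust to your site.
NodeName=node001
SlurmdSpoolDir=/var/spool/slurmd
```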

BuySellRam.com (jimbsr)
2025-12-18

reuters.com/business/nvidia-bu

Nvidia’s acquisition of SchedMD, the company behind Slurm, is a strategic move that goes far beyond GPUs.

Slurm (Simple Linux Utility for Resource Management) is the de facto open-source workload manager for large-scale GPU clusters, widely used in supercomputing centers, AI labs, hyperscalers, and cloud GPU operators. It plays a critical role in ...

2025-12-16

TODO: play with #SLURM before #Nvidia potentially ruins it. I doubt they will, but you never know.

Benjamin Carr, Ph.D. 👨🏻‍💻🧬 (BenjaminHCCarr@hachyderm.io)
2025-12-16

#Nvidia pledges more openness as it slurps up #Slurm
The chip giant revealed yesterday that it had acquired #SchedMD, the key developer behind Slurm, which Nvidia described in a statement as "an #opensource workload management system for high-performance computing (#HPC) and #AI."
Nvidia insisted it will also support "a diverse hardware and software ecosystem, so customers can run heterogeneous clusters with the latest Slurm innovations."
theregister.com/2025/12/16/nvi

Still makes me quite nervous!

WhisprNews
2025-12-16

🚀 Nvidia strengthens its ecosystem with the acquisition of SchedMD, the creator of Slurm. The move aims to boost open-source research and innovation.

Without any judgement: are there other open-source alternatives to #SLURM for #HPC?

blogs.nvidia.com/blog/nvidia-a

2025-12-16

@jannem
> "of #slurm , the (only) Open Source cluster scheduler"
The claim of uniqueness seems like an exaggeration to me.
Off the top of my head I could name Mesos/Marathon and Ganglia, which are truly FOSS.

Eli Roberson (he/him) (thatdnaguy@genomic.social)
2025-12-16

RE: mstdn.social/@TechCrunch/11572

This is pretty awful. The reason for anti-monopoly regulations is to prevent one bully from taking over the schoolyard. The bully (#nvidia) buying #slurm is flatly bad. It will not improve code quality, and Slurm will be turned into a paid product.

Another situation where forking into a public project (gurm? good slurm?) may be the best bet. It's only a matter of time.

TechCrunch | Startup and Technology News (techcrunch.com@web.brid.gy)
2025-12-15
Mark McCahill (mpm@mstdn.social)
2025-12-15

The sound you hear is coming from #HPC data centers as jaws drop and the research community gasps at the news that #NVIDIA acquired SchedMD - the #SLURM developer.

SLURM is very popular for job scheduling on high performance compute clusters. Let us hope this will at least keep #CoPilot from being bolted onto SLURM. After a recent experience with CoPilot on GitHub, I’m questioning some of my life choices.

blogs.nvidia.com/blog/nvidia-a

Drew Johnson :cascadia: (Possiblydrew@pnw.zone)
2025-12-15

#nvidia to purchase developer of #slurm

Arguably the de-facto open-source #HPC scheduler / queue / job management tool

blogs.nvidia.com/blog/nvidia-a

2025-12-15

RE: social.heise.de/@heiseonlineen

In any kind of normal timeline this would have been stopped by regulators. Instead, we let a single company become the sole gatekeeper for everything in #HPC.

SchedMD are the makers of #slurm, the (only) open-source cluster scheduler, which is used by the large majority of compute clusters. A few years ago Nvidia bought Mellanox, the maker of InfiniBand, the main network technology for clusters.

Thomas Schäfer (tschaefer@ipv6.social)
2025-12-15

Slurm-web and RacksDB now officially support SLES 15 & 16 and openSUSE, with packages and complete documentation.

🔗 Read more in the blog post: rackslab.io/en/blog/sles-opens

#SLES #openSUSE #HPC #Slurm #OpenSource
