#rocm

2026-02-10

Comrades, all of #ROCm/HIP 7.1 has just made it into #Guix proper, thanks to the hard work of David Elsing—we’re almost done with this milestone 👇
codeberg.org/guix/guix/milesto

Just 2 years after AMD contributed the initial packages to the Guix-HPC channel:
hpc.guix.info/blog/2024/01/hip

#HPC

Torsten Bronger @bronger
2026-02-08

The failure of the community to properly support GPUs is disappointing. It is slowly improving, but very slowly and from a poor starting point.

It has never been a good idea to cherish a monopoly.

Florian Haas @xahteiwi
2026-02-07

Do I upgrade my daily driver (currently on Noble) to a non-LTS version now so I can run containers with #ROCm, or do I wait until April when the next LTS comes out?

2026-02-06

Triton, Flash-Attention, Sage-Attention, and bitsandbytes with ROCm 7 on Windows

At the end of January 2026, triton-windows 3.6.0.post25 was released, which makes it possible to use flash-attention, sage-attention (v1), and other Triton-based libraries on AMD cards with rocWMMA support on Windows. Also, although the official bitsandbytes repository has not yet merged the PR for ROCm 7 support, it can still be built with a few small code changes, which I have already made in my fork. In this article I explain how to install all of this, and as a demonstration we run a couple of tests in ComfyUI, including with the fresh LTX-2, and train a QLoRA adapter for the Gemma 3 model.
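To sanity-check that Triton actually targets the AMD card before pulling in flash-attention, a minimal vector-add kernel is the usual smoke test: a sketch, assuming triton-windows and a ROCm-enabled PyTorch build are installed (on ROCm, torch's "cuda" device transparently maps to HIP).

```python
# Minimal Triton smoke test: a vector-add kernel.
# Assumes triton-windows and a ROCm-enabled PyTorch build are installed.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
print("Triton kernel ran on:", torch.cuda.get_device_name(0))
```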

habr.com/ru/articles/987672/

#triton #amd #rx7900 #sageattention #flashattention #bitsandbytes #rocm #rocm7 #comfyui #ltx2

2026-02-02

~~Bathing~~ Testing the “red” ~~horse~~ AMD GPU with the AI moniker: the RADEON AI PRO R9700

Testing NVIDIA cards one after another gets tedious, since in the latest generation the differences come down to the compute power of the Blackwell-family chips, memory capacity, and bus width. Looking at what the competition offers, especially when it is loudly branded “AI”, is far more interesting. We tested the Radeon AI PRO R9700 with 32 GB of memory on real workloads (LLMs, image and video generation, 3D rendering) and compared it with NVIDIA.

habr.com/ru/companies/hostkey/

#hostkey #Radeon_AI_PRO_R9700 #ROCm #LLM_inference #generative_AI #graphics_card #NVIDIA_RTX #Ollama #RDNA_4 #GDDR6

this was sort of interesting

The “Famous” Claude Code Has Managed to Port NVIDIA’s CUDA Backend to ROCm in Just 30 Minutes, and Folks Are Calling It the End of the CUDA Moat (wccftech)

one of these days it will actually get it letter-perfect, but I suspect it may be a year or two away for a more homogeneous AI and CUDA codebase #rocm

Rost Glukhov @rosgluk
2026-02-01
One year ago today I opened my Pixelfed profile 🎉

Time for a short retrospective of how it all began.

Late 2024, 2 a.m.: I was manually integrating peaks from chromatograms in Chromeleon when I thought: why can’t an AI do this?
The idea didn’t go anywhere, but I started exploring AI frameworks and ended up with image generation. ROCm on Debian, EasyDiffusion, and then Pixelfed.

Later Debian and ROCm drifted apart, so I posted some real-life photos. With an Ubuntu chroot, everything ran cleanly again, even AUTOMATIC1111. SD 1.5 was my standard for a long time. Early this year I tried FLUX in ComfyUI but had to drop it: RDNA2 + no FP8 + incomplete HIP → FLUX-VAE not practical. Mid-January I finally fixed the NaNs in SDXL VAE in A1111.

Now I’m fully on ComfyUI, can render 1024×1024, and 512+ px no longer OOMs.

End of 2025, I used Pixelfed for IT/FOSS networking: the FOSS Advent Calendar. Posts were seen thanks to ActivityPub, and I even started my own dev blog xD

Thanks 💜 to everyone who follows me, especially my regular viewers and those I really exchange with.
Pixelfed remains my place to share, experiment, and learn.

1 year on Pixelfed, and it all started with peaks at 2 a.m.

tl;dr: Thanks so much to everyone who follows me, especially my regular viewers and those I really exchange with, you are awesome (in Austrian slang: Ihr seid ur leiwand 💜)

#Pixelfed #Fediverse #OpenSource #FOSS #Anniversary #1Year #Celebration #Birthday #Milestone #BirthdayCake #Fireworks #Festive #Colorful #AI #AIArt #GenerativeArt #ComfyUI #SDXL #StableDiffusion #ROCm #Linux #ThankYou #AiCommunity
2026-01-28

🤬

#ROCm decided to fucking crash the computer and made me lose a whole afternoon of progress.

I'll never recommend AMD shit for , ever again, until they gift me a fucking server to vent out.

👉 It's either or for serious business.

2026-01-28

⚠️ llama.cpp build failure on the GFX1031 platform with ROCm: even with detailed commands, the LocalLLM and LMS communities have not found a solution yet. A tough problem for developers deploying AI/ML on AMD hardware!
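For context, the workaround most often cited for gfx1031 is not fixing the build itself but running gfx1030 binaries while the card reports itself as gfx1030 via HSA_OVERRIDE_GFX_VERSION=10.3.0. A minimal sketch to verify the override from Python, assuming a ROCm build of PyTorch is installed:

```python
# Community workaround (not an official fix): make the gfx1031 card report
# itself as gfx1030 so that gfx1030 kernel builds load. The variable must be
# set before the HIP runtime initializes, i.e. before importing torch.
import os
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

import torch  # noqa: E402  (imported after the env var on purpose)

props = torch.cuda.get_device_properties(0)
print("device:", props.name)
print("arch:  ", props.gcnArchName)  # gcnArchName is exposed by ROCm builds of PyTorch
```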
#ROCm #LLM #LocalLLM #LMS #MachineLearning #AI #VietAI #LocalLLaMA #ArtificialIntelligence

reddit.com/r/LocalLLaMA/commen

2026-01-26

So... I managed to use #ROCm on an 8845HS in #Flatpak using #Jan:

1. Download Jan.
2. Using Flatseal, enable "Graphics acceleration" and set the env variable HSA_OVERRIDE_GFX_VERSION=11.0.0.
3. Download the latest lemonade-sdk/llamacpp-rocm release for "Ubuntu gfx110X".
4. Go into Jan's backends directory and decompress the file there.
5. Ensure the path is `bXXXX/linux-rocm7-gfx110x-x64/build/bin` with all libraries present, and add the execute bit (chmod +x; see the helper sketch after this list).
6. Restart Jan, profit.
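A small helper for step 5, equivalent to running chmod +x over every file in the extracted backend directory. This is a sketch; pass the actual path you extracted (the `bXXXX` build number comes from the release you downloaded):

```python
# Usage: python mark_exec.py <path-to>/bXXXX/linux-rocm7-gfx110x-x64/build/bin
# Sets the execute bit on every regular file under the given directory,
# mirroring `chmod +x`.
import stat
import sys
from pathlib import Path

backend = Path(sys.argv[1])
for f in backend.rglob("*"):
    if f.is_file():
        f.chmod(f.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
print(f"execute bit set on files under {backend}")
```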

2026-01-26

**Vulkan vs ROCm on the RX 9070 XT (RDNA4): 9% faster, 50% less power!**

Benchmark results with Ollama 0.15.1 and qwen3-coder:30b:
- **Speed:** Vulkan 52.5 tokens/s vs ROCm 48.2 tokens/s (+8.9%)
- **Power:** Vulkan 68 W vs ROCm 149 W (-54%)
- **VRAM:** Vulkan 16.1 GB vs ROCm 15.8 GB (+2%)

🔥 **Conclusion:** Vulkan is not only faster but also saves a substantial amount of power on RDNA4!
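The percentages follow directly from the raw numbers; a quick check:

```python
# Recompute the relative differences from the raw benchmark numbers above.
def rel(vulkan, rocm):
    return (vulkan - rocm) / rocm * 100

print(f"speed: {rel(52.5, 48.2):+.1f}%")  # +8.9%
print(f"power: {rel(68, 149):+.1f}%")     # -54.4%
print(f"vram:  {rel(16.1, 15.8):+.1f}%")  # +1.9%
```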

#Vulkan #ROCm #RDNA4 #AMD #Ollama #Benchmark #TechNews #Technology #GPU

reddit.com/r/ollama/comments/1

2026-01-25

Towards Automated Kernel Generation in the Era of LLMs

#CUDA #Triton #ROCm #LLM

hgpu.org/?p=30511

Valentin Churavy :julia: @vchuravy@fosstodon.org
2026-01-25

#JuliaCon 2026 will have a minisymposium on "Julia, GPUs, and Accelerators", hosted by Julian Samaroo, Tim Besard, and @vchuravy. Find out more at pretalx.com/juliacon-2026/talk and submit your talks through the Call for Proposals (juliacon.org/2026/cfp/) until Feb 28th!

#julialang #cuda #rocm #opencl #gpu

The JuliaGPU community has been a strong presence at JuliaCon for many years and remains a strong foundation of Julia's overall computing ecosystem. In 2026, we propose to organize a minisymposium specifically focused on the usage and programming of GPUs (and other accelerators) in Julia. There is some overlap with an HPC minisymposium; however, we intend to focus our track on very GPU-specific content and the low-level details that make JuliaGPU tick. Additionally, material relating to non-GPU devices (such as TPUs, APUs, IPUs, etc.) is very welcome!
2026-01-24

AMD dropped ROCm support for Radeon VII (gfx906) at v6.2.4. I wanted to run local LLMs on it anyway.

Found a community-maintained image on r/LocalLLaMA that packages ROCm 7.1 with llama.cpp for gfx906. One docker pull later, I had llama.cpp + Ollama + Qdrant + Open WebUI running on "unsupported" hardware.

Docker Hub is a library of solved problems.
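Once the stack is up, everything speaks plain HTTP; for example, Ollama answers on its standard REST API and default port. A sketch (the model name is a placeholder, use whatever you pulled):

```python
# Query the Ollama container over its standard REST API (default port 11434).
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.2",  # placeholder: any model you've pulled
        "prompt": "Say hello from a Radeon VII.",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```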

Full story: bit.ly/4pTk3zf

#Docker #DockerCaptain #LocalLLM #AMD #ROCm #OpenSource #SelfHosted #MachineLearning

金のニワトリ (@gosrum)

ROCm 7.2 has been released (link included). An update to AMD's Radeon Open Compute stack; the documentation covers native Linux compatibility for Radeon/Ryzen, making it an important driver/platform update for GPU computing and machine-learning development environments.

x.com/gosrum/status/2014315158

#rocm #amd #gpu #ml

2026-01-21
If you can run FLUX and SDXL locally, you really don’t need cloud AI services anymore.
No subscriptions, no prompt logging, no rate limits, just full control.

And no, you don’t need an RTX 5090 or a Radeon AI PRO R9700 (I’d love to have one though xD).
Every image here was rendered on an AMD RX 6700 XT, a mid-range GPU from 2021, not even officially supported by ROCm.

Local models. Local compute. Local freedom.

#SDXL #StableDiffusion #LocalAI #GenerativeArt #OpenSource #FOSS #PrivacyFirst #NoCloud #AMD #RX6700XT #ROCm #Linux #AIArt #DiffusionModels #CreativeCoding #MidrangeGPU #SelfHosted #Decentralized
Alexey Skobkin @skobkin@gts.skobk.in
2026-01-21

✅ Got the full FLUX1.dev running without OOM on 1024x1024 images
✅ Also successfully ran quantized FLUX under ROCm, despite bitsandbytes being broken in Invoke's ROCm image
✅ Accepted that generating at a proper resolution with proper parameters is slooooow.

#log #GenAI #Invoke #FLUX #server #AMD #ROCm #troubleshooting #memory #OOM

2026-01-20

vLLM 0.13 runs successfully on Windows 11 with an RX 7900 XT via ROCm (TheRock), reaching ~3.4 tok/s with TRITON_ATTN. ROCM_ATTN crashes; TRITON_ATTN is more stable. Installation is still manual, and several components lack compiled builds, so Python fallbacks are used. Details in the log. #vLLM #ROCm #AI #Triton #LLM #WindowsML #vLLM0.13 #AIInference #LocalCompute #LanguageModels
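Backend selection in vLLM goes through the VLLM_ATTENTION_BACKEND environment variable, set before the engine initializes. A minimal sketch of the stable configuration described above (the model name is a placeholder, the backend value is taken from the post):

```python
# Pin vLLM to the Triton attention backend before the engine starts.
import os
os.environ["VLLM_ATTENTION_BACKEND"] = "TRITON_ATTN"  # value taken from the post

from vllm import LLM, SamplingParams  # noqa: E402

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")  # placeholder model
outputs = llm.generate(["Hello from ROCm on Windows!"],
                       SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```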

reddit.com/r/LocalLLaMA/commen

2026-01-19

New blog post: an Ubuntu chroot on Debian for full ROCm GPU acceleration in AI workflows, including Automatic1111, ComfyUI, and text generation:

404lifenotfound.freeddns.org/p

#Linux #ROCm #AMD #GPU #AI #StableDiffusion #ComfyUI #PyTorch #Virtualization #Chroot #Debian #Ubuntu #Developer #Tech
