#ROCm

2026-02-17
Urban skate sequence, rear view tracking shot.

Two variations of the same scene, followed by the final workflow setup in ComfyUI.

65 frames rendered locally, iterative prompt refinement, stabilized motion and clean landing.

The animated WebP was downscaled using ImageMagick before upload, as it otherwise stuttered when played in the browser.
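A minimal sketch of such a downscale (placeholder filenames; assumes ImageMagick 7 built with WebP support, not necessarily the exact invocation used here):

```sh
# -coalesce flattens inter-frame deltas so each frame resizes cleanly;
# 512x scales the animation to 512 px wide, keeping the aspect ratio.
magick input.webp -coalesce -resize 512x output_small.webp
```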

#Skateboarding #UrbanSkate #AIVideo #GenerativeArt #WAN21 #Diffusion #LocalAI #OpenSourceAI #ROCm #CreativeWorkflow #AIProcess #Pixelfed #ComfyUI #ImageMagick #WebP
2026-02-17
A cat jump is a much better “hello world” than a cyberpunk car. 🐈✨

Two jumps, generated locally with WAN 2.1 (1.3B, fp16) in ComfyUI.
Rendered on my AMD RX 6700 XT via ROCm.

Everything runs locally, no cloud processing, no external APIs.
Just privacy-friendly, open tools and feline physics. 💜

#cat #aicats #CatVideo #AIVideo #TextToVideo #ComfyUI #WAN21 #LocalAI #OpenSource #Privacy #ROCm #AIArt #Fediverse #Cute #linux #foss
2026-02-17
🎉 Now videos are alive! After finally getting WAN 2.1 running on my RX 6700 XT via ROCm and ComfyUI, even complex prompts can be turned into animated WebPs locally!

These animated WebPs were generated locally using ComfyUI and the WAN 2.1 T2V 1.3B (fp16) model.

Model Stack:
- wan2.1_t2v_1.3B_fp16
- umt5_xxl_fp8_e4m3fn_scaled (Text Encoder)
- wan_2.1_vae
- clip_vision_h

The prompt is first converted into embeddings by the UMT5 encoder.
The WAN video model then generates multiple frames using latent diffusion (noise → iterative refinement), ensuring temporal coherence between frames.
The VAE decodes the latent frames into images, exported as an animated WebP.

Prompt execution time depends on scene complexity: from 521.62 seconds (~8.7 minutes) up to 17 minutes 26 seconds for more complex prompts.

Rendered locally via ROCm on my AMD RX 6700 XT (12GB VRAM).
No cloud. Pure local inference.

#ComfyUI #WAN21 #ROCm #AMD #LocalAI #FOSS #VideoAI #AIvideo #AIGenerated #MachineLearning #DeepLearning #DiffusionModels #TextToVideo #AIArt #CreativeAI #LocalInference #VideoGeneration
2026-02-15

Execution-Centric Characterization of FP8 Matrix Cores, Asynchronous Execution, and Structured Sparsity on AMD MI300A

#AMD #HIP #ROCm

hgpu.org/?p=30572

2026-02-14

Day 2 of poking at DaVinci Resolve on the AMD Ryzen AI 9 HX 370 #Linux #ROCm

interfacinglinux.com/community

2026-02-10

Comrades, all of #ROCm/HIP 7.1 has just made it into #Guix proper, thanks to the hard work of David Elsing—we’re almost done with this milestone 👇
codeberg.org/guix/guix/milesto

Just 2 years after AMD contributed the initial packages to the Guix-HPC channel:
hpc.guix.info/blog/2024/01/hip

#HPC

Torsten Bronger (@bronger)
2026-02-08

The failure of the community to properly support GPUs is disappointing. It is slowly improving, but very slowly and from a poor starting point.

It has never been a good idea to cherish a monopoly.

Florian Haas (@xahteiwi)
2026-02-07

Do I upgrade my daily driver (which is currently on Noble) to a non-LTS version now so I can run containers with , or do I wait until April when the next LTS comes out?

2026-02-06

Triton, Flash-attention, Sage-attention, and bitsandbytes with ROCm 7 on Windows

At the end of January 2026, triton-windows 3.6.0.post25 was released, which makes it possible to use flash-attention, sage-attention (v1), and other Triton-based libraries on AMD cards with rocWMMA support on Windows. Also, even though the official bitsandbytes repository has not yet accepted the PR for ROCm 7 support, it can still be built with small code changes; I have already made these changes in my fork. In this article I explain how to install all of this, and as examples we run a couple of tests in ComfyUI, including with the fresh LTX-2, and build a QLoRA adapter for the Gemma 3 model.
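The Triton part of the setup boils down to something like this (a sketch, assuming the wheel is published on PyPI under exactly this name and version):

```sh
# Installs the Windows build of Triton; flash-attention and sage-attention
# then use it as their kernel compiler on rocWMMA-capable AMD cards.
pip install triton-windows==3.6.0.post25
```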

habr.com/ru/articles/987672/

#triton #amd #rx7900 #sageattention #flashattention #bitsandbytes #rocm #rocm7 #comfyui #ltx2

2026-02-02

~~Bathing~~ Testing a "red" ~~horse~~ AMD GPU with the "AI" label: the RADEON AI PRO R9700

Testing one NVIDIA card after another gets tedious, since the only differences in the latest generation are the Blackwell processor counts, memory capacity, and bus width. Looking at what the competition offers is more interesting, all the more so when it is loudly branded "AI". We put the Radeon AI PRO R9700 with 32 GB of memory through real workloads: LLMs, image and video generation, 3D rendering, and compared it with NVIDIA.

habr.com/ru/companies/hostkey/

#hostkey #Radeon_AI_PRO_R9700 #ROCm #инференс_LLM #генеративный_ИИ #видеокарта #NVIDIA_RTX #Ollama #RDNA_4 #GDDR6

this was sort of interesting

The “Famous” Claude Code Has Managed to Port NVIDIA’s CUDA Backend to ROCm in Just 30 Minutes, and Folks Are Calling It the End of the CUDA Moat (wccftech)

one of these days it will actually get it letter perfect, but I suspect a more homogeneous AI-and-CUDA codebase may be a year or two out #rocm

Rost Glukhov (@rosgluk)
2026-02-01
2026-01-28
One year ago today I opened my Pixelfed profile 🎉

Time for a short retrospective of how it all began.

Late 2024, 2 a.m.: I was manually integrating peaks from chromatograms in Chromeleon when I thought: why can’t an AI do this?
The idea didn’t go anywhere, but I started exploring AI frameworks and ended up with image generation. ROCm on Debian, EasyDiffusion, and then Pixelfed.

Later Debian and ROCm drifted apart, so I posted some real-life photos. With an Ubuntu chroot, everything ran cleanly again, even AUTOMATIC1111. SD 1.5 was my standard for a long time. Early this year I tried FLUX in ComfyUI but had to drop it: RDNA2 + no FP8 + incomplete HIP → FLUX-VAE not practical. Mid-January I finally fixed the NaNs in SDXL VAE in A1111.

Now I’m fully on ComfyUI, can render 1024×1024, and 512+ px no longer OOMs.

At the end of 2025, I used Pixelfed for IT/FOSS networking with the FOSS Advent Calendar. The posts got seen thanks to ActivityPub, and I even started my own dev blog xD

Thanks 💜 to everyone who follows me, especially my regular viewers and those I really exchange with.
Pixelfed remains my place to share, experiment, and learn.

1 year on Pixelfed, and it all started with peaks at 2 a.m.

tl;dr: Thanks so much to everyone who follows me, especially my regular viewers and those I really exchange with, you are awesome (in Austrian slang: Ihr seid ur leiwand 💜)

#Pixelfed #Fediverse #OpenSource #FOSS #Anniversary #1Year #Celebration #Birthday #Milestone #BirthdayCake #Fireworks #Festive #Colorful #AI #AIArt #GenerativeArt #ComfyUI #SDXL #StableDiffusion #ROCm #Linux #ThankYou #AiCommunity
2026-01-28

🤬

decided to fucking crash the computer and made me lose a whole afternoon of progress.

I'll never recommend AMD shit for , ever again, until they gift me a fucking server to vent out.

👉 It's either or for serious business.

2026-01-28

⚠️ llama.cpp fails to compile on the GFX1031 platform with ROCm: even with the detailed build command, the LocalLLM and LMS communities have not found a solution yet. A hard problem for developers deploying AI/ML on AMD hardware!
#ROCm #LLM #LocalLLM #LMS #MachineLearning #AI #VietAI #LocalLLaMA #HọcMáy #TríTuệNhânTạo

reddit.com/r/LocalLLaMA/commen

2026-01-26

So... I managed to use on an 8845HS in using :

1. Download JAN
2. Using Flatseal, enable "Graphics acceleration" and set the env variable "HSA_OVERRIDE_GFX_VERSION=11.0.0" (a CLI equivalent is sketched after this list).
3. Download the latest lemonade-sdk/llamacpp-rocm release for "Ubuntu gfx110X".
4. Go into Jan's backends directory and decompress the file.
5. Ensure the path is `bXXXX/linux-rocm7-gfx110x-x64/build/bin` with all libraries present, and add the execute bit (chmod +x).
6. Restart Jan, profit.
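A CLI sketch of steps 2 and 5 (assumptions: JAN_APP_ID stands in for Jan's actual Flatpak ID, and the bXXXX backend directory varies per release):

```sh
# Grant GPU access and spoof the iGPU (gfx1103 on the 8845HS) as gfx1100,
# which the prebuilt gfx110X binaries expect.
flatpak override --user --device=dri \
  --env=HSA_OVERRIDE_GFX_VERSION=11.0.0 JAN_APP_ID

# Make the decompressed backend binaries executable.
chmod +x bXXXX/linux-rocm7-gfx110x-x64/build/bin/*
```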

2026-01-26

**Vulkan vs ROCm on the RX 9070 XT (RDNA4): 9% faster, 50% less power!**

Benchmark results with Ollama 0.15.1 and qwen3-coder:30b:
- **Speed:** Vulkan 52.5 tokens/s vs ROCm 48.2 tokens/s (+8.9%)
- **Power:** Vulkan 68 W vs ROCm 149 W (-54%)
- **VRAM:** Vulkan 16.1 GB vs ROCm 15.8 GB (+2%)

🔥 **Conclusion:** Vulkan is not only faster but also saves a significant amount of power on RDNA4!
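For reference, a tokens/s figure like these can be reproduced with Ollama's built-in timing output (a sketch, not necessarily the method used in the post):

```sh
# --verbose prints prompt and eval token counts plus tokens/s
# after the response finishes.
ollama run qwen3-coder:30b --verbose "Write a quicksort in Python."
```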

#Vulkan #ROCm #RDNA4 #AMD #Ollama #Benchmark #TechNews #CôngNghệ #GPU

reddit.com/r/ollama/comments/1

2026-01-25

Towards Automated Kernel Generation in the Era of LLMs

#CUDA #Triton #ROCm #LLM

hgpu.org/?p=30511

Valentin Churavy (@vchuravy@fosstodon.org)
2026-01-25

#JuliaCon 2026 will have a minisymposium on "Julia, GPUs, and Accelerators" hosted by Julian Samaroo, Tim Besard, and @vchuravy. Find out more at pretalx.com/juliacon-2026/talk and submit your talks through the Call for Proposals (juliacon.org/2026/cfp/) by Feb 28th!

#julialang #cuda #rocm #opencl #gpu

The JuliaGPU community has been a strong presence at JuliaCon for many years, and continues to be a strong foundation of Julia's overall computing ecosystem. In 2026, we propose to organize a minisymposium specifically focused on the usage and programming of GPUs (and other accelerators) in Julia. There is some overlap with an HPC minisymposium; however, we intend to focus our track on very GPU-specific content or the low-level details that make JuliaGPU tick. Additionally, material relating to non-GPU devices (such as TPUs, APUs, IPUs, etc.) is very welcome!
2026-01-24

AMD dropped ROCm support for Radeon VII (gfx906) at v6.2.4. I wanted to run local LLMs on it anyway.

Found a community-maintained image on r/LocalLLaMA that packages ROCm 7.1 with llama.cpp for gfx906. One docker pull later, I had llama.cpp + Ollama + Qdrant + Open WebUI running on "unsupported" hardware.
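The shape of that workflow, as a sketch (IMAGE stands in for the community image, which isn't named here):

```sh
# Standard ROCm passthrough: /dev/kfd is the compute interface,
# /dev/dri exposes the render nodes.
docker pull IMAGE
docker run --rm -it --device=/dev/kfd --device=/dev/dri IMAGE
```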

Docker Hub is a library of solved problems.

Full story: bit.ly/4pTk3zf

#Docker #DockerCaptain #LocalLLM #AMD #ROCm #OpenSource #SelfHosted #MachineLearning
