#RAIDZ

Hong Hua Data Recovery 鴻華資料救援 數據恢復 (honghuadatarecover@misskey.jayhsustudio.com)
2025-11-05

💾 QNAP QuTS hero officially supports ZFS RAID! Stronger data protection than Synology Btrfs? 🔒

In NAS storage, data integrity has always been the top concern for enterprise and professional users.
QNAP's QuTS hero operating system is built on the enterprise-grade ZFS file system and fully supports ZFS RAID-Z, giving NAS users much stronger data protection.
⚙️

But is ZFS really stronger than Synology's Btrfs?
What are the strengths and weaknesses of each?
This article breaks down the technical differences between QNAP QuTS hero ZFS RAID and Synology Btrfs, which scenarios each suits, and how to choose the NAS solution that fits you best.
🚀

👉 Read the full analysis:
🔗 https://2025.data-recover.com.tw/news/QNAP-QuTS-hero-ZFS-RAID%E6%94%AF%E6%8F%B4-vs-Synology-Btrfs%E5%AE%8C%E6%95%B4%E5%88%86%E6%9E%90

#QNAP #QuTShero #ZFS #RAIDZ #Synology #Btrfs #NAS
#資料保護 #儲存系統 #伺服器 #備份 #企業儲存 #ZFS檔案系統 #SynologyNAS #QNAPNAS
#DataProtection #Storage #Server #Backup #EnterpriseStorage #FileSystem #ZFSvsBtrfs
#データ保護 #ストレージ #サーバー #バックアップ #企業向けNAS #ZFSファイルシステム #シノロジー #キューナップ
#台灣 #科技 #自架伺服器 #網路技術 #聯邦宇宙 #Fediverse #技術分享 #オープンソース #news

Lord Caramac the Clueless, KSC (LordCaramac@discordian.social)
2025-09-01

As soon as I finish reorganising my workspace so I can reach that unused ATX power supply, I'm going to build my new home server/living-room PC for my media collection. This will be my first time using ZFS for a pool with RAID-Z, wish me luck.
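For anyone doing the same for the first time, a minimal sketch of creating such a pool (assuming four disks; the pool name and device paths below are placeholders, and /dev/disk/by-id paths are generally the safer choice):

    # one single-parity RAID-Z vdev across four example disks
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # confirm the layout and health of the new pool
    zpool status tank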
#linux #zfs #raid #raidz

2025-08-07
A few weeks after I had to replace a failed drive in my 4x 4TB ZFS raidz, today's monthly scrub found 4 read errors on another drive in the array. The errors could be repaired without the pool going into a degraded state, and the scrub finished successfully.

But this is another warning sign. I cleared the errors on the raidz; let's see how the next scrub goes. I guess it's time to order another spare drive just in case. Maybe I need to replace *all* the drives one after the other now :(

I use 4x Seagate IronWolf 4TB NAS drives, but unfortunately they don't seem to be as reliable as I thought.

Because I only have one parity drive, there's a risk of permanent data loss if another drive fails during the resilver while I'm replacing one. I still have an offline backup drive to which I manually send all my snapshots once a week. I know it would be safer to send the snapshots automatically every night, so I wouldn't lose too much data if the whole array failed. I have to think about a solution for that.
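A hedged sketch of one way to automate that, assuming a source pool called tank, a backup pool called backup, and at least one earlier snapshot already on the backup side (all names are placeholders, not the poster's actual setup):

    #!/bin/sh
    # nightly-zfs-backup.sh - snapshot the pool and send everything new to the backup pool
    SRC=tank
    DST=backup/tank
    # newest existing snapshot of the source dataset (sorted by creation time)
    LAST=$(zfs list -H -t snapshot -o name -s creation "$SRC" | tail -n 1)
    NEW="$SRC@auto-$(date +%Y-%m-%d)"
    zfs snapshot -r "$NEW"
    # -R replicates child datasets, -I includes every snapshot between LAST and NEW
    zfs send -R -I "$LAST" "$NEW" | zfs receive -Fdu "$DST"

Run from cron (e.g. nightly at 03:00), that keeps the backup at most a day behind instead of a week.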

#ZFS #raid #raidz
cslinuxboy
2025-08-01

Just migrated my LVM-based to based. So much easier setup. I don't think I'm ever going back.

2025-07-11

ZFS on Linux: installing Ubuntu with a ZFS root, RAID, and encryption

Hi! My name is Vanya, I'm a system administrator. [Continue the "enlightenment"]

habr.com/ru/companies/selectel

#selectel #zfs #системное_администрирование #серверное_администрирование #ubuntu #raidz

2025-03-07

@motoridersd

Let us know how it goes... Tomorrow!

LoL

#ZFS #RAIDZ

Benjamin Carr, Ph.D. 👨🏻‍💻🧬 (BenjaminHCCarr@hachyderm.io)
2025-02-15

#TrueNAS 25.04 "Fangtooth" Beta Unifies #Linux SCALE & #FreeBSD CORE Efforts
It was released on Thursday as another step toward unifying the FreeBSD-derived #TrueNASCORE and the Linux-based #TrueNASSCALE.
TrueNAS 25.04 "Fangtooth" is an upgrade for TrueNAS SCALE 24.10 and TrueNAS CORE 13.x. The TrueNAS 25.04 beta is powered by the Linux 6.12 LTS kernel, makes use of OpenZFS 2.3, and brings much faster #OpenZFS #RAIDZ expansion, new instances support, and a number of other features.
phoronix.com/news/TrueNAS-25.0

Gea-Suan Lin (gslin@abpe.org)
2025-01-21

A new ZFS feature: adding a drive to an existing RAIDZ

Writing up something I saw on Hacker News Daily: you can now add a drive directly to an existing RAIDZ group in ZFS: "ZFS 2.3 released with ZFS raidz expan

blog.gslin.org/archives/2025/0
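For the curious, the new operation is a plain zpool attach aimed at the raidz vdev rather than the pool as a whole; a minimal sketch, assuming a pool called tank whose vdev is named raidz1-0 and an example new disk:

    # add a disk to the existing raidz1 vdev; the expansion then runs in the background
    zpool attach tank raidz1-0 /dev/sde
    # watch expansion progress and the new capacity
    zpool status tank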

#Computer #Hardware #Murmuring #Software #disk #drive #field #finite #galois #hard #openzfs #raidz #space #zfs

Benjamin Carr, Ph.D. 👨🏻‍💻🧬 (BenjaminHCCarr@hachyderm.io)
2025-01-14

#OpenZFS 2.3 Released With #RAIDZ Expansion, Fast Dedup, Direct I/O & Other Great Improvements
Supports kernels from #Linux 4.18 up to the latest Linux 6.12 LTS
The really nice new feature is the ability to add new devices to an existing RAIDZ pool to increase the storage capacity without downtime!!
phoronix.com/news/OpenZFS-2.3-

Exotime
2025-01-14

OpenZFS 2.3.0:

- RAIDZ Expansion: Add new devices to an existing pool, increasing storage capacity without downtime.
- Fast Dedup: A major performance upgrade to the original OpenZFS deduplication functionality.
- Direct IO: Allows bypassing the ARC for reads/writes, improving performance in scenarios like NVMe devices where caching may hinder efficiency.
- Long names: Support for file and directory names up to 1023 characters.

Holy crap!

github.com/openzfs/zfs/release
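The Direct IO item, as I read the release notes, is exposed as a per-dataset property in 2.3, so trying it on an NVMe-backed dataset might look roughly like this (pool/dataset names are placeholders, and the property name and values are my understanding of the release notes rather than something tested here):

    # force direct I/O (bypass the ARC) for all reads/writes on this dataset
    zfs set direct=always nvmepool/scratch
    # back to the default: honour only explicit O_DIRECT requests
    zfs set direct=standard nvmepool/scratch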

2024-11-10

#ZFS crowd, what are you most excited about for 2.3.0?

I'm unable to decide between Direct IO for NVMes and #JSON output for most commands, which is something ... like an #API?

Well, #RAIDZ expansion will have its uses for many, and long names sound exciting for large, deeply nested music collections.
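On the JSON side, the release candidates add a -j flag to the most-used commands (flag name as given in the RC notes; the exact output shape is an assumption on my part):

    # machine-readable pool health, handy for monitoring scripts
    zpool status -j | jq .
    # pool listings as JSON too
    zpool list -j | jq .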

zfs-2.3.0-rc3

Repository: openzfs/zfs · Tag: zfs-2.3.0-rc3 · Commit: 1a54b13 · Released by: behlendorf

We are excited to announce the third release candidate (RC3) of OpenZFS 2.3.0.
Key Features in OpenZFS 2.3.0 RC3:

    RAIDZ Expansion (#15022): Add new devices to an existing RAIDZ pool, increasing storage capacity without downtime.
    Fast Dedup (#15896): A major performance upgrade to the original OpenZFS deduplication functionality.
    Direct IO (#10018): Allows bypassing the ARC for reads/writes, improving performance in scenarios like NVMe devices where caching may hinder efficiency.
    JSON (#16217): Optional JSON output for the most used commands.
    Long names (#15921): Support for file and directory names up to 1023 characters.
    Bug Fixes: A series of critical bug fixes addressing issues reported in previous versions.
Supported Platforms:
    Linux kernels 4.18 - 6.11,
    FreeBSD releases 13.3, 14.0, and 14.1.

2024-10-04

[Translation] Why are my ZFS disks so noisy?

Johnny Cash has a 1976 song, "One Piece at a Time". It tells the story of a car mechanic who builds his own Cadillac from parts he pilfered, one at a time, from the General Motors assembly line over 25 years. Some time ago, a Practical ZFS user asked a deceptively simple question: "I have a Proxmox pool of three RAIDz1 vdevs (virtual devices) of 4 disks each. The problem is that while the VMs are running, all twelve disks make a loud noise at least once a second, all day long. What could be the cause, and how do I fix it?"

habr.com/ru/companies/ruvds/ar

#ruvds_перевод #raidz #хранилища_данных #zfs #снижение_шума_приводов #proxmox

2024-04-24

Questions you ask yourself in the middle of the night before going to sleep - a #ZFS data pool for VMs/LXCs on (for now) five (soon six) equally sized SSDs: #RAID10, #RAIDZ, or #RAIDZ2? But I'll probably only have to (or get to) decide that tomorrow (as in, after sleeping). #HomeLab
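For comparison, the two extremes of that choice are created roughly like this (pool and device names are placeholders): striped mirrors usually win on random IOPS for VM/LXC workloads, while raidz2 gives more usable capacity and survives any two disks failing.

    # option A: one raidz2 vdev over six SSDs
    zpool create vmpool raidz2 sda sdb sdc sdd sde sdf
    # option B: three mirrored pairs ("RAID10"-style), typically better random I/O for VMs
    zpool create vmpool mirror sda sdb mirror sdc sdd mirror sde sdf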

Borjan Tchakaloff (docbibi@freiburg.social)
2024-04-05

I have been thinking about building a simple #NAS whose key feature would be belonging to a distributed private mesh (e.g. family and friends). The idea would be to split the data into chunks and store them in multiple places. Kind of a distributed #RAID / #RAIDZ. Then, if a node (storage) fails, the data is still persisted across the other nodes and can be reassembled on a fresh node.

I wonder if something already exists off-the-shelf?

A Grantler (agrantler)
2023-11-21

@zirias Not sure if raidz with just 4 spinning disks is a good idea in general; it can be a performance breaker.

Felix Palmen 📯 (zirias@techhub.social)
2023-11-21

So #FreeBSD 14 is finally announced 🥳 – I had already made up my mind not to jump on it immediately, because I couldn't see any "killer feature" for me while 13.2 is working just fine. Upgrading to 13.0-RELEASE back then, I ran into several surprising issues. I could find workarounds for all of them, but it was still a bit annoying...

But now, looking at the official announcement, this bullet point caught my attention:

"ZFS has been upgraded to OpenZFS release 2.2, providing significant performance improvements."

Performance of my #ZFS pool degrades badly under heavy I/O-load (a parallel poudriere build with lots of smaller ports and lots of ccache hits). The pool is backed by 4 spinning disks in a #raidz configuration.

Could I expect 14.0 to improve performance in that specific scenario? 🤔
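Independent of the OpenZFS 2.2 question, one way to see whether the pool itself is the choke point while such a build is running (standard tooling; the pool name is a placeholder):

    # per-vdev throughput and average latencies, refreshed every 5 seconds
    zpool iostat -vl tank 5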

Benjamin Carr, Ph.D. 👨🏻‍💻🧬 (BenjaminHCCarr@hachyderm.io)
2023-11-09

#OpenZFS Lands Exciting #RAIDZ Expansion Feature
This RAID-Z expansion functionality has been in the works for years: the #FreeBSD Foundation originally sponsored work on this going back to 2017. There's also been sponsorship by iXsystems to get this work completed along with vStack. This feature allows disks to be added one at a time to a RAID-Z group, expanding its capacity incrementally.
phoronix.com/news/OpenZFS-RAID #ZFS #Linux

2023-10-15

#ZFS on #Linux observations:

1. ZFS on #Solaris is awesome.
2. My experience with ZFS on Linux has been terrible.

I'm using a Dell #R720 configured as a NAS server, with a Dell PERC H310 controller that natively supports JBOD, running Gentoo Linux. The Dell replaced a succession of two SunFire X4540s, both of which were absolutely rock-solid as NAS servers (until their system controller boards failed) and never once had a ZFS error reported except when a drive physically failed. With the R720, I get hot and cold running errors reported. I'm using all Samsung 870 Evo solid-state drives, in two #RAIDZ arrays, one of eight drives and one of six. I am at this very moment in the process of cleaning up the arrays ... again.

What I can't figure out is why.
— Is ZFS on Linux really that terrible?
— Does ZFS on Linux just somehow not work well with SSDs?
— Does the PERC controller in the R720 not work well with SSDs?

I wasn't originally running SSDs in this array; my first attempt was using 2.5" spinning rust drives. I rapidly discovered two things:
1. As far as I can determine, all 2.5" mechanical hard drives 2TB or larger on the market are SMR drives;
2. OH MY GOD, SMR DRIVES (especially, I am told, in ZFS) ARE UTTERLY FUCKING HORRIBLE except on WORM (write once, read many) applications in which you don't really care how slow the original write is. RAIDZ write performance on the Dell on brand new 2.5" SMR drives was four to six times slower than RAIDZ write performance on the X4540 with older and slower CMR drives on older and slower SCSI/SAS controllers. Despite newer, "faster" drives on a newer, faster controller, the SMR array was utterly unusable.

Now, I'm not experiencing any problems with SSDs in any of my other systems, Windows or Linux, INCLUDING the R720, except with ZFS. The boot drives on the R720 are an mdraid mirror formatted XFS and have never thrown a single error.

So this is really leading me to wonder a crucial question:

Is there something I don't know about #ZFSonLinux that causes it to not work well with #SSD drives? Do I need to just forget about running ZFS on my NAS and let the PERC controller create hardware RAID5 volumes?
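One way to narrow that down before giving up on ZFS: compare what ZFS blames per device with the drives' own SMART counters and the kernel log, which usually separates media problems from cabling/HBA/firmware problems (the device name below is a placeholder):

    # per-device read/write/checksum error counters as ZFS sees them
    zpool status -v
    # the drive's own view: reallocated sectors, CRC errors, wear level
    smartctl -a /dev/sda
    # kernel-level I/O errors tend to point at the HBA or cabling rather than the filesystem
    dmesg | grep -iE 'ata|sd[a-z]|i/o error'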

(And if anyone wonders "why don't you just run a commercial NAS appliance?", well, I tried that route. I tried one of the very latest generation QNAP servers that run ZFS storage on a Linux OS. Oh my god, I can't even begin to speak to how horribly bastardized it was. QNAP may well be a good NAS choice if you only care about Windows and SMB and never ever want to look under the hood or try to accomplish anything except through the web front-end, and don't already have an existing backup solution that you want to continue using.)
