Message-ID: <CAPnZJGB5izL9LzLVFHOGt5rNg+V0ZvVghebXS41U7HiGwXoEUg@mail.gmail.com>
Date: Mon, 15 May 2023 14:15:48 +0300
From: Askar Safin <safinaskar@...il.com>
To: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-block@...r.kernel.org, linux-mm@...ck.org,
linux-bcachefs@...r.kernel.org
Subject: Re: [PATCH 00/32] bcachefs - a new COW filesystem
Kent, please make sure you have dealt with the problems specific to another
fs, btrfs: https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/
In particular, I dislike these btrfs problems mentioned in the
article:
- "Yes, you read that correctly—you mount the array using the name of
any given disk in the array. No, it doesn't matter which one"
- "Even though our array is technically "redundant," it refuses to
mount with /dev/vdc missing... In the worst-case scenario—a root
filesystem that itself is stored "redundantly" on btrfs-raid1 or
btrfs-raid10—the entire system refuses to boot... If you're thinking,
"Well, the obvious step here is just to always mount degraded," the
btrfs devs would like to have a word with you... If you lose a drive
from a conventional RAID array, or an mdraid array, or a ZFS zpool,
that array keeps on trucking without needing any special flags to
mount it. If you then add the failed drive back to the array, your
RAID manager will similarly automatically begin "resilvering" or
"rebuilding" the array... That, unfortunately, is not the case with
btrfs-native RAID"
I suggest reading the article in full, at least from the section "Btrfs
RAID array management is a mess" to the end.
Please ensure that bcachefs has none of these problems! These problems
scare me away from btrfs.
Please CC me when answering.
--
Askar Safin