Message-Id: <c608d2fd-15aa-4d61-92d8-1e9e79d10891@fnnas.com>
Date: Sat, 8 Nov 2025 18:15:24 +0800
From: "Yu Kuai" <yukuai@...as.com>
To: <linan666@...weicloud.com>, <song@...nel.org>, <neil@...wn.name>,
<namhyung@...il.com>
Cc: <linux-raid@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<xni@...hat.com>, <k@...l.me>, <yangerkun@...wei.com>,
<yi.zhang@...wei.com>, <yukuai@...as.com>
Subject: Re: [PATCH v2 05/11] md: mark rdev Faulty when badblocks setting fails

On 2025/11/6 19:59, linan666@...weicloud.com wrote:
> Currently, when a sync read fails and setting badblocks also fails
> (the 512-entry limit is exceeded), the rdev is not immediately marked
> Faulty. Instead, 'recovery_disabled' is set and non-In_sync rdevs are
> removed later. This preserves array availability as long as the bad
> regions are not read, but users may read bad sectors before the rdev
> is removed, because resync/recovery_offset is incorrectly updated to
> include these bad sectors.
>
> When badblocks exceed the 512-entry limit, keeping the disk provides
> little benefit while adding complexity; prompt disk replacement matters
> more. Therefore, when setting badblocks fails, call md_error() directly
> to mark the rdev Faulty immediately, preventing potential access to bad
> data.
>
> After this change, cleanup of offset update logic and 'recovery_disabled'
> handling will follow.
>
> Fixes: 5e5702898e93 ("md/raid10: Handle read errors during recovery better.")
> Fixes: 3a9f28a5117e ("md/raid1: improve handling of read failure during recovery.")
> Signed-off-by: Li Nan <linan122@...wei.com>
> ---
>  drivers/md/md.c     |  8 +++++++-
>  drivers/md/raid1.c  | 20 +++++++++-----------
>  drivers/md/raid10.c | 35 +++++++++++++++--------------------
>  drivers/md/raid5.c  | 22 +++++++++-------------
>  4 files changed, 40 insertions(+), 45 deletions(-)
LGTM
Reviewed-by: Yu Kuai <yukuai@...as.com>