Message-ID: <d00be167-741a-4569-a51e-38b36325826e@huaweicloud.com>
Date: Tue, 6 Jan 2026 10:44:38 +0800
From: Zheng Qixing <zhengqixing@...weicloud.com>
To: Roman Mamedov <rm@...anrm.net>
Cc: song@...nel.org, yukuai@...as.com, linux-raid@...r.kernel.org,
 linux-kernel@...r.kernel.org, yi.zhang@...wei.com, yangerkun@...wei.com,
 houtao1@...wei.com, linan122@...artners.com, zhengqixing@...wei.com
Subject: Re: [RFC PATCH 0/5] md/raid1: introduce a new sync action to repair
 badblocks

Hi,

On 2025/12/31 19:11, Roman Mamedov wrote:
> On Wed, 31 Dec 2025 15:09:47 +0800
> Zheng Qixing <zhengqixing@...weicloud.com> wrote:
>
>> From: Zheng Qixing <zhengqixing@...wei.com>
>>
>> In RAID1, some sectors may be marked as bad blocks due to I/O errors.
>> In certain scenarios, these bad blocks might not be permanent, and
>> issuing I/Os again could succeed.
>>
>> To address this situation, a new sync action ('rectify') is introduced
>> into RAID1, allowing users to actively trigger the repair of existing
>> bad blocks and clear them from the sysfs bad_blocks record.
>>
>> When 'rectify' is echoed into /sys/block/md*/md/sync_action, a healthy
>> disk is selected from the array; data is read from it and written to the
>> disk where the bad block is located. If the write request succeeds, the
>> bad block record can be cleared.
> Could you also check here that it reads back successfully, and only then clear?
>
> Otherwise there are cases when the block won't read even after rewriting it.

Thanks for your suggestions.

I'm a bit worried that reading the data again before clearing the bad
blocks might affect the performance of the bad block repair process.
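
For concreteness, here is a rough sketch of what that verification step
could look like, assuming the repair path can reuse the existing
sync_page_io() and badblocks_clear() helpers. repair_one_range() and the
good/bad rdev selection are made up for illustration; this is not the
patch code:

/*
 * Illustrative only: repair one bad-block range on a RAID1 member,
 * with the read-back verification suggested above. Assumes 'page'
 * covers 'sectors' worth of data and both rdevs are in-sync members.
 */
static int repair_one_range(struct md_rdev *good, struct md_rdev *bad,
			    sector_t sector, int sectors, struct page *page)
{
	int size = sectors << 9;

	/* 1. Read known-good data from the healthy mirror. */
	if (!sync_page_io(good, sector, size, page, REQ_OP_READ, false))
		return -EIO;

	/* 2. Rewrite the range on the disk holding the bad blocks. */
	if (!sync_page_io(bad, sector, size, page, REQ_OP_WRITE, false))
		return -EIO;

	/*
	 * 3. Read back before clearing the record; this extra read is
	 *    the performance cost I am worried about.
	 */
	if (!sync_page_io(bad, sector, size, page, REQ_OP_READ, false))
		return -EIO;

	badblocks_clear(&bad->badblocks, sector, sectors);
	return 0;
}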


> Side note, on some hardware it might be necessary to rewrite a larger area
> around the problematic block, to finally trigger a remap. Not 512B, but at
> least the native sector size, which is often 4K.


Are you referring to the case where we have logical 512B sectors but
physical 4K sectors?

I'm not entirely clear on one aspect: can a physical 4K block be
partially recovered (e.g., one 512B sector succeeding while the other
seven fail)?
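
(For checking that case on a given drive: a small hypothetical userspace
tool using the standard BLKSSZGET/BLKPBSZGET ioctls, which report the
logical and physical sector sizes; 512e drives report 512/4096.)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Print logical vs. physical sector size of a block device. */
int main(int argc, char **argv)
{
	int fd, logical = 0;
	unsigned int physical = 0;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || ioctl(fd, BLKSSZGET, &logical) < 0 ||
	    ioctl(fd, BLKPBSZGET, &physical) < 0) {
		perror(argv[1]);
		return 1;
	}
	printf("logical %dB, physical %uB\n", logical, physical);
	close(fd);
	return 0;
}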


Thanks,

Qixing


