Message-ID: <08b80caa-d726-b9f3-7ce0-f486b8080ec5@deltatee.com>
Date: Wed, 8 Jun 2022 12:21:46 -0600
From: Logan Gunthorpe <logang@...tatee.com>
To: Song Liu <song@...nel.org>
Cc: open list <linux-kernel@...r.kernel.org>,
linux-raid <linux-raid@...r.kernel.org>,
Christoph Hellwig <hch@...radead.org>,
Donald Buczek <buczek@...gen.mpg.de>,
Guoqing Jiang <guoqing.jiang@...ux.dev>,
Xiao Ni <xni@...hat.com>, Stephen Bates <sbates@...thlin.com>,
Martin Oliveira <Martin.Oliveira@...eticom.com>,
David Sloan <David.Sloan@...eticom.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v4 04/11] md/raid5: Ensure array is suspended for calls to
log_exit()

On 2022-06-08 11:59, Song Liu wrote:
> On Wed, Jun 8, 2022 at 9:28 AM Logan Gunthorpe <logang@...tatee.com> wrote:
>>
>> The raid5-cache code relies on there being no IO in flight when
>> log_exit() is called. There are two places where this is not
>> guaranteed, so add mddev_suspend() and mddev_resume() calls to these
>> sites.
>>
>> The site in raid5_remove_disk() has a comment saying that it is
>> called in raid5d and thus cannot wait for pending writes; however, that
>> no longer appears to be correct (if it ever was), as
>> raid5_remove_disk() is called from hot_remove_disk(), which only
>> appears to be called from md_ioctl(). Thus, the comment and the racy
>> check are removed and replaced with calls to suspend/resume.
>>
>> The site in raid5_change_consistency_policy() is in the error path,
>> and another similar call site already has suspend/resume calls just
>> below it, so it should be equally safe to make that change here.
>>
>> Signed-off-by: Logan Gunthorpe <logang@...tatee.com>
>> Reviewed-by: Christoph Hellwig <hch@....de>
>> ---
>> drivers/md/raid5.c | 18 ++++++------------
>> 1 file changed, 6 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 5d09256d7f81..3ad37dd4c5cd 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -7938,18 +7938,9 @@ static int raid5_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
>>
>> print_raid5_conf(conf);
>> if (test_bit(Journal, &rdev->flags) && conf->log) {
>> - /*
>> - * we can't wait pending write here, as this is called in
>> - * raid5d, wait will deadlock.
>> - * neilb: there is no locking about new writes here,
>> - * so this cannot be safe.
>> - */
>> - if (atomic_read(&conf->active_stripes) ||
>> - atomic_read(&conf->r5c_cached_full_stripes) ||
>> - atomic_read(&conf->r5c_cached_partial_stripes)) {
>> - return -EBUSY;
>> - }
>> + mddev_suspend(mddev);
>
> Unfortunately, the comment about deadlock is still true, and we cannot call
> mddev_suspend here. To trigger it:

Ah, yes. What a tangle. I think we can just drop this patch. Now that
we are removing RCU, it isn't actually necessary to fix the bug I was
seeing. It's still probably broken, as the comment notes, though.
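
For reference, the shape of what this patch was trying to do is roughly
the following. This is only a simplified sketch of the intent, not the
actual raid5.c change, and shutdown_log() is just an illustrative name:

	/*
	 * Quiesce the array so no IO is in flight while the
	 * raid5-cache/PPL log is torn down, then let IO resume.
	 */
	static void shutdown_log(struct mddev *mddev, struct r5conf *conf)
	{
		mddev_suspend(mddev);	/* drain and block array IO */
		log_exit(conf);		/* tear down conf->log / PPL state */
		mddev_resume(mddev);	/* allow IO again */
	}

As the old comment in raid5_remove_disk() notes, that pattern can't be
used from raid5d context: mddev_suspend() has to wait for pending
writes, and raid5d is the thread that would have to service them, so
it deadlocks.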
Thanks,
Logan