Message-ID: <0de7efeb-6d4a-2fa5-ed14-e2c0bec0257b@huaweicloud.com>
Date: Mon, 12 May 2025 16:23:55 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Christoph Hellwig <hch@....de>, Yu Kuai <yukuai1@...weicloud.com>
Cc: xni@...hat.com, colyli@...nel.org, agk@...hat.com, snitzer@...nel.org,
mpatocka@...hat.com, song@...nel.org, linux-kernel@...r.kernel.org,
dm-devel@...ts.linux.dev, linux-raid@...r.kernel.org, yi.zhang@...wei.com,
yangerkun@...wei.com, johnny.chenyi@...wei.com,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH RFC md-6.16 v3 15/19] md/md-llbitmap: implement APIs to
dirty bits and clear bits
Hi,
On 2025/05/12 13:17, Christoph Hellwig wrote:
> On Mon, May 12, 2025 at 09:19:23AM +0800, Yu Kuai wrote:
>> +static void llbitmap_unplug(struct mddev *mddev, bool sync)
>> +{
>> + DECLARE_COMPLETION_ONSTACK(done);
>> + struct llbitmap *llbitmap = mddev->bitmap;
>> + struct llbitmap_unplug_work unplug_work = {
>> + .llbitmap = llbitmap,
>> + .done = &done,
>> + };
>> +
>> + if (!llbitmap_dirty(llbitmap))
>> + return;
>> +
>> + INIT_WORK_ONSTACK(&unplug_work.work, llbitmap_unplug_fn);
>> + queue_work(md_llbitmap_unplug_wq, &unplug_work.work);
>> + wait_for_completion(&done);
>> + destroy_work_on_stack(&unplug_work.work);
>
> Why is this deferring the work to a workqueue, but then synchronously
> waits on it?
This is the same as the old bitmap: issuing new IO and waiting for that
IO to complete from submit_bio() context will deadlock, because:
1) the bitmap bio must complete before this bio can be issued;
2) the bitmap bio is added to current->bio_list, which is not processed
until this bio is issued.
Do you have a better solution to this problem?
Thanks,
Kuai