Message-ID: <Y6HcWiJbaWjN3jlt@slm.duckdns.org>
Date: Tue, 20 Dec 2022 06:01:30 -1000
From: Tejun Heo <tj@...nel.org>
To: Yu Kuai <yukuai1@...weicloud.com>
Cc: hch@...radead.org, josef@...icpanda.com, axboe@...nel.dk,
cgroups@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH -next 0/4] blk-cgroup: synchronize del_gendisk() with
configuring cgroup policy
Hello,
On Tue, Dec 20, 2022 at 05:19:12PM +0800, Yu Kuai wrote:
> Yes, that sounds good. BTW, queue_lock is also used to protect
> pd_alloc_fn/pd_init_fn, and we found that blkcg_activate_policy() is
> problematic:
>
> blkcg_activate_policy
>   spin_lock_irq(&q->queue_lock);
>   list_for_each_entry_reverse(blkg, &q->blkg_list)
>     pd_alloc_fn(GFP_NOWAIT | __GFP_NOWARN, ...) -> failed
>
>   spin_unlock_irq(&q->queue_lock);
>   // Releasing queue_lock here is problematic: it can lead to
>   // pd_offline_fn() being called without pd_init_fn() ever having run.
>   pd_alloc_fn(__GFP_NOWARN, ...)
So, if a blkg is destroyed while a policy is being activated, right?
> If we are using a mutex to protect the rq_qos ops, it seems right to
> also use the mutex to protect the blkcg_policy ops, and this problem
> can be fixed because the mutex can be held while allocating memory
> with GFP_KERNEL. What do you think?
One worry is that switching to a mutex can be more of a headache due to
destroy-path synchronization. Another approach would be using a per-blkg
flag to track whether a blkg has been initialized.
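For illustration, such a per-blkg flag approach might look roughly like the
sketch below. This is a hypothetical, non-compilable fragment, not actual
kernel code: the `initialized` field and the two helper functions are
invented names, and only the existing identifiers (`blkg_policy_data`,
`blkcg_gq`, `blkcg_policy`, `pd_init_fn`, `pd_offline_fn`) come from the
discussion above.

```c
/* Hypothetical sketch: track per-pd init state so the destroy path can
 * skip pd_offline_fn() for pds whose pd_init_fn() never ran.  The
 * "initialized" field and both helpers are made up for illustration.
 */
struct blkg_policy_data {
	/* ... existing fields ... */
	bool initialized;	/* set once pd_init_fn() has run */
};

/* Activation path: called once allocation has finally succeeded. */
static void blkg_init_pd(struct blkcg_gq *blkg, struct blkcg_policy *pol)
{
	struct blkg_policy_data *pd = blkg->pd[pol->plid];

	if (pol->pd_init_fn)
		pol->pd_init_fn(pd);
	pd->initialized = true;
}

/* Destroy path: only offline pds that were actually initialized, so a
 * blkg torn down in the window where queue_lock was dropped is skipped.
 */
static void blkg_offline_pd(struct blkcg_gq *blkg, struct blkcg_policy *pol)
{
	struct blkg_policy_data *pd = blkg->pd[pol->plid];

	if (pd && pd->initialized && pol->pd_offline_fn)
		pol->pd_offline_fn(pd);
}
```

The point of the flag is that it keeps queue_lock as the only lock
involved while making the offline path tolerant of partially activated
policies.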
Thanks.
--
tejun