Message-ID: <gq45vl55n2gucipjc5jk5e5kp7ups3nw672ua6nvksooycezv5@lfr62hy5br4f>
Date: Tue, 3 Feb 2026 10:06:38 +0100
From: Michal Koutný <mkoutny@...e.com>
To: Yu Kuai <yukuai@...as.com>
Cc: tj@...nel.org, josef@...icpanda.com, axboe@...nel.dk,
cgroups@...r.kernel.org, linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
zhengqixing@...wei.com, hch@...radead.org, ming.lei@...hat.com, nilay@...ux.ibm.com
Subject: Re: [PATCH v2 6/7] blk-cgroup: allocate pds before freezing queue in
blkcg_activate_policy()
On Tue, Feb 03, 2026 at 04:06:01PM +0800, Yu Kuai <yukuai@...as.com> wrote:
> Some policies, such as iocost and iolatency, perform percpu allocation
> in pd_alloc_fn(). Doing percpu allocation while the queue is frozen can
> deadlock: the allocation may trigger memory reclaim, which can issue IO
> that the frozen queue is unable to complete.
>
> Now that q->blkg_list is protected by blkcg_mutex,
With this ^^^
...
> restructure
> blkcg_activate_policy() to allocate all pds before freezing the queue:
> 1. Allocate all pds with GFP_KERNEL before freezing the queue
> 2. Freeze the queue
> 3. Initialize and online all pds
>
> Note: future work is to remove queue freezing from
> blkcg_activate_policy() entirely, fixing these deadlocks thoroughly.
>
> Signed-off-by: Yu Kuai <yukuai@...as.com>
> ---
> block/blk-cgroup.c | 90 +++++++++++++++++-----------------------------
> 1 file changed, 32 insertions(+), 58 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 0206050f81ea..7fcb216917d0 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -1606,8 +1606,7 @@ static void blkcg_policy_teardown_pds(struct request_queue *q,
> int blkcg_activate_policy(struct gendisk *disk, const struct blkcg_policy *pol)
> {
> struct request_queue *q = disk->queue;
> - struct blkg_policy_data *pd_prealloc = NULL;
> - struct blkcg_gq *blkg, *pinned_blkg = NULL;
> + struct blkcg_gq *blkg;
> unsigned int memflags;
> int ret;
>
> @@ -1622,90 +1621,65 @@ int blkcg_activate_policy(struct gendisk *disk, const struct blkcg_policy *pol)
...
> + /* Now freeze queue and initialize/online all pds */
> + if (queue_is_mq(q))
> + memflags = blk_mq_freeze_queue(q);
> +
> + spin_lock_irq(&q->queue_lock);
> + list_for_each_entry_reverse(blkg, &q->blkg_list, q_node) {
> + struct blkg_policy_data *pd = blkg->pd[pol->plid];
> +
> + /* Skip dying blkg */
> + if (hlist_unhashed(&blkg->blkcg_node))
> + continue;
> +
> + spin_lock(&blkg->blkcg->lock);
> if (pol->pd_init_fn)
> pol->pd_init_fn(pd);
> -
> if (pol->pd_online_fn)
> pol->pd_online_fn(pd);
> pd->online = true;
> -
> spin_unlock(&blkg->blkcg->lock);
> }
>
> __set_bit(pol->plid, q->blkcg_pols);
> - ret = 0;
> -
> spin_unlock_irq(&q->queue_lock);
> -out:
> - mutex_unlock(&q->blkcg_mutex);
> +
> if (queue_is_mq(q))
> blk_mq_unfreeze_queue(q, memflags);
> - if (pinned_blkg)
> - blkg_put(pinned_blkg);
> - if (pd_prealloc)
> - pol->pd_free_fn(pd_prealloc);
> - return ret;
> + mutex_unlock(&q->blkcg_mutex);
> + return 0;
Why is q->queue_lock still needed here?
Thanks,
Michal