Message-ID: <aO4GPKKpLbj7kMoz@fedora>
Date: Tue, 14 Oct 2025 16:13:48 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Yu Kuai <yukuai3@...wei.com>
Cc: nilay@...ux.ibm.com, tj@...nel.org, josef@...icpanda.com,
axboe@...nel.dk, cgroups@...r.kernel.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
yukuai1@...weicloud.com, yi.zhang@...wei.com, yangerkun@...wei.com,
johnny.chenyi@...wei.com
Subject: Re: [PATCH 3/4] blk-rq-qos: fix possible deadlock
On Tue, Oct 14, 2025 at 10:21:48AM +0800, Yu Kuai wrote:
> Currently rq-qos debugfs entries are created from rq_qos_add(), while
> rq_qos_add() requires the queue to be frozen. This can deadlock because
> creating new entries can trigger fs reclaim.
>
> Fix this problem by delaying creation of the rq-qos debugfs entries
> until the policy's initialization is complete.
>
> - For wbt, it can be initialized by default or via blk-sysfs; fix it by
> calling blk_mq_debugfs_register_rq_qos() after wbt_init();
> - Other policies can only be initialized through blkg configuration;
> fix them by calling blk_mq_debugfs_register_rq_qos() from
> blkg_conf_end().
>
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> ---
> block/blk-cgroup.c | 6 ++++++
> block/blk-rq-qos.c | 7 -------
> block/blk-sysfs.c | 4 ++++
> block/blk-wbt.c | 7 ++++++-
> 4 files changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index d93654334854..e4ccabf132c0 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -33,6 +33,7 @@
> #include "blk-cgroup.h"
> #include "blk-ioprio.h"
> #include "blk-throttle.h"
> +#include "blk-mq-debugfs.h"
>
> static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu);
>
> @@ -746,6 +747,11 @@ void blkg_conf_end(struct blkg_conf_ctx *ctx)
> mutex_unlock(&q->elevator_lock);
> blk_mq_unfreeze_queue(q, ctx->memflags);
> blkdev_put_no_open(ctx->bdev);
> +
> + mutex_lock(&q->debugfs_mutex);
> + blk_mq_debugfs_register_rq_qos(q);
> + mutex_unlock(&q->debugfs_mutex);
> +
> }
> EXPORT_SYMBOL_GPL(blkg_conf_end);
>
> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
> index 654478dfbc20..d7ce99ce2e80 100644
> --- a/block/blk-rq-qos.c
> +++ b/block/blk-rq-qos.c
> @@ -347,13 +347,6 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
> blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
>
> blk_mq_unfreeze_queue(q, memflags);
> -
> - if (rqos->ops->debugfs_attrs) {
> - mutex_lock(&q->debugfs_mutex);
> - blk_mq_debugfs_register_rqos(rqos);
> - mutex_unlock(&q->debugfs_mutex);
> - }
> -
> return 0;
> ebusy:
> blk_mq_unfreeze_queue(q, memflags);
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 76c47fe9b8d6..52bb4db25cf5 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -688,6 +688,10 @@ static ssize_t queue_wb_lat_store(struct gendisk *disk, const char *page,
> mutex_unlock(&disk->rqos_state_mutex);
>
> blk_mq_unquiesce_queue(q);
> +
> + mutex_lock(&q->debugfs_mutex);
> + blk_mq_debugfs_register_rq_qos(q);
> + mutex_unlock(&q->debugfs_mutex);
> out:
> blk_mq_unfreeze_queue(q, memflags);
>
> diff --git a/block/blk-wbt.c b/block/blk-wbt.c
> index eb8037bae0bd..a120b5ba54db 100644
> --- a/block/blk-wbt.c
> +++ b/block/blk-wbt.c
> @@ -724,8 +724,13 @@ void wbt_enable_default(struct gendisk *disk)
> if (!blk_queue_registered(q))
> return;
>
> - if (queue_is_mq(q) && enable)
> + if (queue_is_mq(q) && enable) {
> wbt_init(disk);
> +
> + mutex_lock(&q->debugfs_mutex);
> + blk_mq_debugfs_register_rq_qos(q);
> + mutex_unlock(&q->debugfs_mutex);
> + }
->debugfs_mutex alone may not be enough, because blk_mq_debugfs_register_rq_qos()
has to traverse the rq_qos singly linked list; you may have to grab
q->rq_qos_mutex to protect the list.
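
Something along these lines, just a sketch, assuming
blk_mq_debugfs_register_rq_qos() walks q->rq_qos and reuses the existing
per-rqos helper blk_mq_debugfs_register_rqos() (not taken from this
series):

	void blk_mq_debugfs_register_rq_qos(struct request_queue *q)
	{
		struct rq_qos *rqos;

		/* callers already hold q->debugfs_mutex */
		lockdep_assert_held(&q->debugfs_mutex);

		/* hold rq_qos_mutex while walking the singly linked list */
		mutex_lock(&q->rq_qos_mutex);
		for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
			if (rqos->ops->debugfs_attrs)
				blk_mq_debugfs_register_rqos(rqos);
		}
		mutex_unlock(&q->rq_qos_mutex);
	}

The lock order (debugfs_mutex outside rq_qos_mutex) would need to be
checked against the other rq_qos_mutex users.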
Thanks,
Ming