Message-ID: <cb9d65fe-47b9-4539-a8d0-9863e8ebf49f@kernel.dk>
Date: Fri, 18 Oct 2024 08:21:17 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Tero Kristo <tero.kristo@...ux.intel.com>
Cc: hch@....de, linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCHv2 2/2] blk-mq: add support for CPU latency limits
On 10/18/24 1:30 AM, Tero Kristo wrote:
> @@ -2700,11 +2701,62 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
>  static void __blk_mq_flush_plug_list(struct request_queue *q,
>  				     struct blk_plug *plug)
>  {
> +	struct request *req, *next;
> +	struct blk_mq_hw_ctx *hctx;
> +	int cpu;
> +
>  	if (blk_queue_quiesced(q))
>  		return;
> +
> +	rq_list_for_each_safe(&plug->mq_list, req, next) {
> +		hctx = req->mq_hctx;
> +
> +		if (next && next->mq_hctx == hctx)
> +			continue;
> +
> +		if (q->disk->cpu_lat_limit < 0)
> +			continue;
> +
> +		hctx->last_active = jiffies + msecs_to_jiffies(q->disk->cpu_lat_timeout);
> +
> +		if (!hctx->cpu_lat_limit_active) {
> +			hctx->cpu_lat_limit_active = true;
> +			for_each_cpu(cpu, hctx->cpumask) {
> +				struct dev_pm_qos_request *qos;
> +
> +				qos = per_cpu_ptr(hctx->cpu_lat_qos, cpu);
> +				dev_pm_qos_add_request(get_cpu_device(cpu), qos,
> +						       DEV_PM_QOS_RESUME_LATENCY,
> +						       q->disk->cpu_lat_limit);
> +			}
> +			schedule_delayed_work(&hctx->cpu_latency_work,
> +					      msecs_to_jiffies(q->disk->cpu_lat_timeout));
> +		}
> +	}
> +
This is, quite literally, an insane amount of cycles to add to the hot
issue path. You're iterating each request in the list, and then each CPU
in the mask of the hardware context for each request.
This just won't fly, not at all. As with the previous feedback, please
figure out a way to make this cheaper. That means not iterating over
requests and CPU masks in the issue path.
Outside of that, there are lots of styling issues here too, but none of
that really matters until the base mechanism is at least halfway sane.
--
Jens Axboe