Message-ID: <0dab5bd2-4f19-0b04-fa8c-6ed68b70c20e@acm.org>
Date: Fri, 8 Apr 2022 07:24:47 -0700
From: Bart Van Assche <bvanassche@....org>
To: Yu Kuai <yukuai3@...wei.com>, axboe@...nel.dk,
andriy.shevchenko@...ux.intel.com, john.garry@...wei.com,
ming.lei@...hat.com
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
yi.zhang@...wei.com
Subject: Re: [PATCH -next RFC v2 4/8] blk-mq: don't preempt tag under heavy
load
On 4/8/22 00:39, Yu Kuai wrote:
> The idle way to disable tag preemption is to track how many tags are
idle -> ideal?
> available, and wait directly in blk_mq_get_tag() if free tags are
> very few. However, this is unrealistic in practice because the fast
> path would be affected.
>
> As 'ws_active' is only updated in the slow path, this patch disables
> tag preemption if 'ws_active' is greater than 8, which means that
> many threads are already waiting for tags.
>
> Once tag preemption is disabled, there is a situation that can cause
> performance degration(or io hung in extreme scenarios): the waitqueue
degration -> degradation?
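
For reference, a minimal sketch of the gate described above, assuming the
threshold of 8 from the patch description and the existing 'ws_active'
counter in struct sbitmap_queue; the helper name is made up for
illustration and is not part of the patch:

	/*
	 * Hypothetical helper, not the actual patch: allow tag preemption
	 * only while few threads are waiting. 'ws_active' is only updated
	 * in the slow path, so reading it here keeps the tag allocation
	 * fast path cheap.
	 */
	static inline bool blk_mq_tag_preempt_allowed(struct sbitmap_queue *bt)
	{
		return atomic_read(&bt->ws_active) <= 8;
	}
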
> diff --git a/block/blk-mq.h b/block/blk-mq.h
> index 2615bd58bad3..b49b20e11350 100644
> --- a/block/blk-mq.h
> +++ b/block/blk-mq.h
> @@ -156,6 +156,7 @@ struct blk_mq_alloc_data {
>
> /* allocate multiple requests/tags in one go */
> unsigned int nr_tags;
> + bool preemption;
> struct request **cached_rq;
>
Please change "preemption" into "preempt".
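
With that rename applied, the hunk above would read (illustration only,
not an actual respin of the patch):

	/* allocate multiple requests/tags in one go */
	unsigned int nr_tags;
	bool preempt;
	struct request **cached_rq;
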
Thanks,
Bart.