Message-ID: <20221018111240.22612-3-shikemeng@huawei.com>
Date: Tue, 18 Oct 2022 19:12:39 +0800
From: Kemeng Shi <shikemeng@...wei.com>
To: <tj@...nel.org>, <josef@...icpanda.com>, <axboe@...nel.dk>
CC: <cgroups@...r.kernel.org>, <linux-block@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <shikemeng@...wei.com>
Subject: [PATCH v2 2/3] block: Correct comment for scale_cookie_change
The default queue depth of an iolatency_grp is unlimited, so scale_change()
scales it down quickly (halving it each time) instead of subtracting 1/16th
of it. Remove the "subtract 1/16th" part of the comment, which does not
match the code, and describe how we actually scale down.
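For context, here is a minimal sketch of the policy the corrected comment
describes. The function name scale_change_sketch() is hypothetical and the
parameters are simplified for illustration; the real logic lives in
scale_change() and scale_amount() in block/blk-iolatency.c:

	/*
	 * Sketch only: creep the queue depth up by 1/16th of the device
	 * queue depth at a time, but halve it on the way down so we leave
	 * the unlimited default quickly.
	 */
	static void scale_change_sketch(unsigned long *max_depth,
					unsigned long qd, bool up)
	{
		unsigned long scale = qd >> 4 ? qd >> 4 : 1; /* 1/16th, min 1 */
		unsigned long old = *max_depth > qd ? qd : *max_depth;

		if (up) {
			if (old + scale > qd)
				return;		/* already near full depth */
			*max_depth = old + scale;
		} else {
			old >>= 1;		/* halve when scaling down */
			*max_depth = old ? old : 1;
		}
	}

Halving reaches any target from the unlimited default in a logarithmic
number of steps, which is why the old "subtract 1/16th" wording was
misleading for the scale-down direction.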
Signed-off-by: Kemeng Shi <shikemeng@...wei.com>
---
block/blk-iolatency.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index b24d7b788ba3..2c574f98c8d1 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -364,9 +364,11 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
 }
 
 /*
- * Change the queue depth of the iolatency_grp. We add/subtract 1/16th of the
+ * Change the queue depth of the iolatency_grp. We add 1/16th of the
  * queue depth at a time so we don't get wild swings and hopefully dial in to
- * fairer distribution of the overall queue depth.
+ * fairer distribution of the overall queue depth. We halve the queue depth
+ * each time we scale down so we can quickly go from the unlimited default
+ * to the target.
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {
--
2.30.0