Message-Id: <20220609073450.98975-2-zhouchengming@bytedance.com>
Date: Thu, 9 Jun 2022 15:34:50 +0800
From: Chengming Zhou <zhouchengming@bytedance.com>
To: tj@kernel.org, axboe@kernel.dk
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	Chengming Zhou <zhouchengming@bytedance.com>
Subject: [PATCH 2/2] blk-iocost: fix vtime loss calculation when iocg deactivate

Commit ac33e91e2dac ("blk-iocost: implement vtime loss compensation")
accumulates the vtime loss of iocgs over each period, to compensate
for the vtime loss we would otherwise throw away, which is good for
device utilization.

But the per-iocg vtime loss calculation is wrong when some iocgs are
deactivated, because the hweights used come from different
hweight_gen generations:

ioc_check_iocgs()
	...
	} else if (iocg_is_idle(iocg)) {
		ioc->vtime_err -= div64_u64(excess * old_hwi,	--> old_hwi_1
					    WEIGHT_ONE);
	}
	commit_weights(ioc);				--> hweight_gen increases

hweight_after_donation()
	...
	ioc->vtime_err -= div64_u64(excess * old_hwi,	--> old_hwi_2
				    WEIGHT_ONE);

The old_hwi_2 used for the still-active iocgs is in fact not from the
same hweight_gen as the old_hwi_1 used for the idle iocgs. After the
idle iocgs are deactivated and commit_weights() increases hweight_gen,
old_hwi_2 becomes larger than it should be, which makes the calculated
vtime_err larger than it should be.
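
To make the error concrete, here is a sketch with made-up numbers
(two iocgs of equal weight; the values are illustrative only, not
taken from a trace):

	/* before the period: iocg A and iocg B are both active */
	hweight(A) = hweight(B) = WEIGHT_ONE / 2	/* 50% each */

	ioc_check_iocgs():
		B is idle, charged with old_hwi_1 = 50%
		commit_weights(ioc)			/* hweight_gen++ */

	hweight_after_donation() for A:
		current_hweight() recomputes under the new generation
		old_hwi_2 = 100%			/* B's share moved to A */

A's excess accrued while its share was 50%, but it is charged at
100%, so its contribution to vtime_err is doubled.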

I found this problem after noticing abnormal vtime loss compensation
when some cgroups issue intermittent IO.

Since we already record the usage_delta_us of each iocg, and
usage_us_sum is their sum, the vtime loss of the whole period can be
calculated from them directly, which is simpler and more accurate.

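As a rough numeric sketch of the new calculation (all values are
invented for illustration):

	period_vtime    = 50,000,000	/* vtime the device could serve */
	usage_us_sum    = 45,000 us	/* sum of usage_delta_us */
	vtime_base_rate = 1,000		/* vtime per usec */

	used vtime = usage_us_sum * vtime_base_rate = 45,000,000
	loss       = period_vtime - used vtime      =  5,000,000

Since period_vtime is larger than the vtime actually used, the
5,000,000 loss is subtracted from ioc->vtime_err in one place,
independent of any per-iocg hweight generation.
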
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 block/blk-iocost.c | 21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 3cda08224d51..6c55c69d4aad 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -1730,7 +1730,6 @@ static u32 hweight_after_donation(struct ioc_gq *iocg, u32 old_hwi, u32 hwm,
 		atomic64_add(excess, &iocg->vtime);
 		atomic64_add(excess, &iocg->done_vtime);
 		vtime += excess;
-		ioc->vtime_err -= div64_u64(excess * old_hwi, WEIGHT_ONE);
 	}
 
 	/*
@@ -2168,22 +2167,6 @@ static int ioc_check_iocgs(struct ioc *ioc, struct ioc_now *now)
 		} else if (iocg_is_idle(iocg)) {
 			/* no waiter and idle, deactivate */
 			u64 vtime = atomic64_read(&iocg->vtime);
-			s64 excess;
-
-			/*
-			 * @iocg has been inactive for a full duration and will
-			 * have a high budget. Account anything above target as
-			 * error and throw away. On reactivation, it'll start
-			 * with the target budget.
-			 */
-			excess = now->vnow - vtime - ioc->margins.target;
-			if (excess > 0) {
-				u32 old_hwi;
-
-				current_hweight(iocg, NULL, &old_hwi);
-				ioc->vtime_err -= div64_u64(excess * old_hwi,
-							    WEIGHT_ONE);
-			}
 
 			TRACE_IOCG_PATH(iocg_idle, iocg, now,
 					atomic64_read(&iocg->active_period),
@@ -2348,6 +2331,10 @@ static void ioc_timer_fn(struct timer_list *timer)
 	list_for_each_entry_safe(iocg, tiocg, &surpluses, surplus_list)
 		list_del_init(&iocg->surplus_list);
 
+	/* calculate vtime loss in this period */
+	if (period_vtime > usage_us_sum * ioc->vtime_base_rate)
+		ioc->vtime_err -= period_vtime - usage_us_sum * ioc->vtime_base_rate;
+
 	/*
 	 * If q is getting clogged or we're missing too much, we're issuing
 	 * too much IO and should lower vtime rate. If we're not missing
--
2.36.1