Message-ID: <20211208145038.64738-1-wanghonglei@didichuxing.com>
Date: Wed, 8 Dec 2021 22:50:38 +0800
From: Honglei Wang <wanghonglei@...ichuxing.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
"Mel Gorman" <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
<linux-kernel@...r.kernel.org>
CC: Huaixin Chang <changhuaixin@...ux.alibaba.com>,
Honglei Wang <jameshongleiwang@....com>
Subject: [PATCH v2 2/3] sched/fair: prevent cpu burst from lasting too many periods

Tasks might get more cpu than their quota for many consecutive periods due to
the cpu burst introduced by commit f4183717b370 ("sched/fair: Introduce the
burstable CFS controller"). For example, consider a task group whose quota is
100ms per period, which is allowed a 100ms burst, and whose average
utilization is around 105ms per period. Once this group gets a free period
that leaves enough unused runtime, it can obtain more computing power than
its quota for 10 periods or more with a common bandwidth configuration (say,
a 100ms period). This means tasks can 'steal' the bursted power for their
ordinary daily work, because they only need to be scheduled out or sleep
occasionally to give the group a free period.
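
To make the numbers concrete, below is a minimal userspace sketch of the
current refill rule (runtime = min(runtime + quota, quota + burst) each
period). It is an illustration only, with an assumed steady per-period
demand, not kernel code:

/* illustration only: model of the current refill rule */
#include <stdio.h>

static long long min_ll(long long a, long long b)
{
	return a < b ? a : b;
}

int main(void)
{
	const long long quota = 100, burst = 100, demand = 105;	/* ms */
	long long runtime = quota + burst;	/* full burst banked after a free period */
	int over_quota = 0;

	for (int period = 1; period <= 40; period++) {
		long long used = min_ll(demand, runtime);

		runtime -= used;
		if (used > quota)
			over_quota++;

		/* current refill rule at the start of the next period */
		runtime = min_ll(runtime + quota, quota + burst);
	}

	printf("periods above quota: %d\n", over_quota);
	return 0;
}

With these numbers it reports 20 periods above quota before the banked burst
is drained, consistent with the '10 periods or more' mentioned above.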

I believe the purpose of cpu burst is to help handle bursty workloads. But if
a task group can get more computing power than its quota for many consecutive
periods even though there is no bursty workload, that defeats the purpose.

This patch limits the burst to 2 consecutive periods so that the quota limit
is not exceeded for long. Permitting 2 periods still helps in the scenario
where the period refresh lands in the middle of a bursty workload. With this,
we can give a task group more cpu burst power to handle real bursty workloads
without worrying about the 'stealing'.
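
For comparison, the same sketch with the refill rule proposed below, which
clamps the pool back to plain quota after two periods that consumed burst
(again a simplified userspace model, not the kernel change itself):

/* illustration only: model of the proposed refill rule */
#include <stdio.h>

static long long min_ll(long long a, long long b)
{
	return a < b ? a : b;
}

int main(void)
{
	const long long quota = 100, burst = 100, demand = 105;	/* ms */
	long long runtime = quota + burst;
	long long runtime_snap = runtime;
	int burst_periods = 0, over_quota = 0;

	for (int period = 1; period <= 40; period++) {
		long long used = min_ll(demand, runtime);

		runtime -= used;
		if (used > quota)
			over_quota++;

		/* count periods that consumed burst, clamp after the second */
		if (runtime_snap - runtime - quota > 0)
			burst_periods++;

		if (burst_periods > 1) {
			runtime = quota;
			burst_periods = 0;
		} else {
			runtime = min_ll(runtime + quota, quota + burst);
		}
		runtime_snap = runtime;
	}

	printf("periods above quota: %d\n", over_quota);
	return 0;
}

With the same numbers this reports 2 periods above quota: the banked burst
still covers a period boundary, but the group can no longer run above its
quota indefinitely.
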
Signed-off-by: Honglei Wang <wanghonglei@...ichuxing.com>
---
 kernel/sched/fair.c  | 13 ++++++++++---
 kernel/sched/sched.h |  1 +
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2cd626c22912..4e04cb4269ba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4645,14 +4645,21 @@ void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b)
 		return;
 	}
 
-	cfs_b->runtime += cfs_b->quota;
-	runtime = cfs_b->runtime_snap - cfs_b->runtime;
+	runtime = cfs_b->runtime_snap - cfs_b->runtime - cfs_b->quota;
 	if (runtime > 0) {
 		cfs_b->burst_time += runtime;
 		cfs_b->nr_burst++;
+		cfs_b->burst_periods++;
+	}
+
+	if (cfs_b->burst_periods > 1) {
+		cfs_b->runtime = cfs_b->quota;
+		cfs_b->burst_periods = 0;
+	} else {
+		cfs_b->runtime += cfs_b->quota;
+		cfs_b->runtime = min(cfs_b->runtime, cfs_b->quota + cfs_b->burst);
 	}
 
-	cfs_b->runtime = min(cfs_b->runtime, cfs_b->quota + cfs_b->burst);
 	cfs_b->runtime_snap = cfs_b->runtime;
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0e66749486e7..f42280bca3b2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -370,6 +370,7 @@ struct cfs_bandwidth {
 	u64			burst;
 	u64			runtime_snap;
 	s64			hierarchical_quota;
+	u8			burst_periods;
 
 	u8			idle;
 	u8			period_active;
--
2.14.1