Message-ID: <20230328110354.562078801@infradead.org>
Date: Tue, 28 Mar 2023 11:26:36 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...nel.org, vincent.guittot@...aro.org
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org,
juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, corbet@....net, qyousef@...alina.io,
chris.hyser@...cle.com, patrick.bellasi@...bug.net, pjt@...gle.com,
pavel@....cz, qperret@...gle.com, tim.c.chen@...ux.intel.com,
joshdon@...gle.com, timj@....org, kprateek.nayak@....com,
yu.c.chen@...el.com, youssefesmat@...omium.org,
joel@...lfernandes.org, efault@....de
Subject: [PATCH 14/17] sched/eevdf: Better handle mixed slice length
In the case where (due to latency-nice) there are different request
sizes in the tree, the smaller requests tend to be dominated by the
larger. Also note how the EEVDF lag limits are based on r_max.
Therefore; add a heuristic that, for the mixed request size case, moves
smaller requests to placement strategy #2, which ensures they're
immediately eligible and, due to their smaller (virtual) deadline,
will cause preemption.
NOTE: this relies on update_entity_lag() to impose lag limits above
a single slice.
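
For illustration (values purely hypothetical; the kernel tracks slices in
ns, ms is used here for readability): with two runnable entities of equal
weight w = 1024 and slices of 1ms and 4ms, the sums maintained below are

	avg_slice = 1024*1 + 1024*4 = 5120
	avg_load  = 1024   + 1024   = 2048

The size part of the heuristic, avg_slice > se->slice * avg_load, then
holds for the 1ms entity (5120 > 1*2048), which is moved to placement
strategy #2, but not for the 4ms entity (5120 < 4*2048), which keeps the
regular lag-preserving placement.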
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/sched/fair.c | 14 ++++++++++++++
kernel/sched/features.h | 1 +
kernel/sched/sched.h | 1 +
3 files changed, 16 insertions(+)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -616,6 +616,7 @@ avg_vruntime_add(struct cfs_rq *cfs_rq,
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime += key * weight;
+	cfs_rq->avg_slice += se->slice * weight;
 	cfs_rq->avg_load += weight;
 }
@@ -626,6 +627,7 @@ avg_vruntime_sub(struct cfs_rq *cfs_rq,
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime -= key * weight;
+	cfs_rq->avg_slice -= se->slice * weight;
 	cfs_rq->avg_load -= weight;
 }
@@ -4832,6 +4834,18 @@ place_entity(struct cfs_rq *cfs_rq, stru
 		lag = se->vlag;
 
 		/*
+		 * For latency sensitive tasks; those that have a shorter than
+		 * average slice and do not fully consume the slice, transition
+		 * to EEVDF placement strategy #2.
+		 */
+		if (sched_feat(PLACE_FUDGE) &&
+		    cfs_rq->avg_slice > se->slice * cfs_rq->avg_load) {
+			lag += vslice;
+			if (lag > 0)
+				lag = 0;
+		}
+
+		/*
 		 * If we want to place a task and preserve lag, we have to
 		 * consider the effect of the new entity on the weighted
 		 * average and compensate for this, otherwise lag can quickly
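
To make the new clamp concrete, with purely hypothetical numbers: say
vslice = 2 in virtual time units. A short-slice task waking with
lag = -1.5 gets -1.5 + 2 = +0.5, clamped to 0, i.e. it is placed right at
the average vruntime and is immediately eligible (strategy #2); one waking
with lag = -3 keeps -3 + 2 = -1, so debts larger than one request survive,
which is what the NOTE about update_entity_lag() refers to. Positive lag
is likewise clamped to 0 rather than carried over.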
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -5,6 +5,7 @@
  * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled.
  */
 SCHED_FEAT(PLACE_LAG, true)
+SCHED_FEAT(PLACE_FUDGE, true)
 SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)
 
 /*
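
With CONFIG_SCHED_DEBUG the heuristic can be toggled at runtime like any
other scheduler feature, e.g. by writing NO_PLACE_FUDGE to
/sys/kernel/debug/sched/features (path as found in recent kernels).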
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -559,6 +559,7 @@ struct cfs_rq {
 	unsigned int		idle_h_nr_running; /* SCHED_IDLE */
 
 	s64			avg_vruntime;
+	u64			avg_slice;
 	u64			avg_load;
 
 	u64			exec_clock;