Date:   Wed,  7 Jul 2021 14:34:02 +0200
From:   Christian Borntraeger <borntraeger@...ibm.com>
To:     peterz@...radead.org
Cc:     borntraeger@...ibm.com, bristot@...hat.com, bsegall@...gle.com,
        dietmar.eggemann@....com, joshdon@...gle.com,
        juri.lelli@...hat.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org,
        linux@...musvillemoes.dk, mgorman@...e.de, mingo@...nel.org,
        rostedt@...dmis.org, valentin.schneider@....com,
        vincent.guittot@...aro.org
Subject: [PATCH 1/1] sched/fair: improve yield_to vs fairness

After some debugging of situations where smaller sched_latency_ns and
sched_migration_cost settings helped on a KVM host, I was able to come
up with a reduced testcase.
This testcase has 2 vcpus working on a shared memory location, each
waiting until mem % 2 == cpu number and then doing an add on the shared
memory.
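
As an illustration, each vcpu essentially runs a loop like the
following (a hypothetical sketch; the function name, types and the
iteration count are made up and not taken from the actual test source):

	/* mem points to the guest memory shared by both vcpus, cpu is 0 or 1 */
	static void vcpu_loop(volatile unsigned long *mem, unsigned int cpu)
	{
		unsigned long i;

		for (i = 0; i < 1000000; i++) {
			/* wait until mem % 2 == cpu number ... */
			while (*mem % 2 != cpu)
				;	/* busy wait */
			/* ... then do an add on the shared memory */
			(*mem)++;
		}
	}
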
To start simple, I pinned all vcpus to one host CPU. Without the
yield_to in KVM the testcase was horribly slow. This is expected, as
each vcpu will spin for a whole time slice. With the yield_to from KVM
things are much better, but I was still seeing yields being ignored.
In the end pick_next_entity decided to keep the current process running
for fairness reasons. On this path we really know that there is no
point in continuing current, so let us make things a bit less fair to
current.
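
For context, the fairness check in question is, roughly, the
wakeup-granularity test that pick_next_entity() applies before
honouring the "next" buddy hint that yield_to sets via set_next_buddy()
(paraphrased and simplified here, not quoted verbatim from fair.c):

	/* left is the leftmost (lowest vruntime) entity in the cfs_rq tree */
	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1)
		se = cfs_rq->next;	/* run the buddy only if not too unfair */

wakeup_preempt_entity() returns 1 once the buddy's vruntime is more
than one wakeup granularity ahead of left's, so the yield_to hint is
dropped and current keeps running.
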
This makes the reduced testcase noticeably faster. It improved a more
realistic test case (many guests on some host CPUs with overcommitment)
even more.
In the end this is similar to the old compat_sched_yield approach, with
an important difference: instead of doing it for all yields, we now only
do it for yield_to, a place where we really know that current is waiting
for the target.

Possible alternative implementations for this patch:
- do the same as the old compat_sched_yield (see the sketch after this list):
  current->vruntime = rightmost->vruntime + 1
- provide a new tunable sched_ns_yield_penalty: how much vruntime to add
  (could be per architecture)
- also fiddle with the vruntime of the target
  e.g. subtract from the target what we add to the source
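
For illustration, the first alternative would look roughly like the
following (a hypothetical, untested sketch; it assumes the rightmost
entity can still be looked up the way the old sched_compat_yield code
did it, via __pick_last_entity()):

	struct cfs_rq *cfs_rq = cfs_rq_of(curr);
	struct sched_entity *rightmost = __pick_last_entity(cfs_rq);

	/* jump current past the rightmost entity instead of adding a slice */
	if (rightmost && entity_before(curr, rightmost))
		curr->vruntime = rightmost->vruntime + 1;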

Signed-off-by: Christian Borntraeger <borntraeger@...ibm.com>
---
 kernel/sched/fair.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 23663318fb81..4f661a9ed3ba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7337,6 +7337,7 @@ static void yield_task_fair(struct rq *rq)
 static bool yield_to_task_fair(struct rq *rq, struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
+	struct sched_entity *curr = &rq->curr->se;
 
 	/* throttled hierarchies are not runnable */
 	if (!se->on_rq || throttled_hierarchy(cfs_rq_of(se)))
@@ -7347,6 +7348,16 @@ static bool yield_to_task_fair(struct rq *rq, struct task_struct *p)
 
 	yield_task_fair(rq);
 
+	/*
+	 * This path is special and only called from KVM. In contrast to yield,
+	 * in yield_to we really know that current is spinning and we know
+	 * in yield_to we really know that current is spinning and we either
+	 * know (s390) or have good heuristics about whom we are waiting for.
+	 * There is absolutely no point in continuing the current task, even
+	 * if this means becoming less fair. Let us give the current process
+	 * some "fake" penalty.
+	curr->vruntime += sched_slice(cfs_rq_of(curr), curr);
+
 	return true;
 }
 
-- 
2.31.1
