Date: Tue, 14 May 2024 17:07:11 +0100
From: Qais Yousef <qyousef@...alina.io>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: John Stultz <jstultz@...gle.com>, Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Daniel Bristot de Oliveira <bristot@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Joel Fernandes <joel@...lfernandes.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	linux-kernel@...r.kernel.org, Yabin Cui <yabinc@...gle.com>
Subject: Re: [PATCH] sched/pi: Reweight fair_policy() tasks when inheriting
 prio

On 04/10/24 16:47, Qais Yousef wrote:
> On 04/10/24 11:13, Vincent Guittot wrote:
> 
> > > > Without cgroups, the solution could be straightforward, but android
> > > > uses cgroups extensively AFAICT, and update_cfs_group() makes it
> > > > impossible to track the top cfs waiter and its "prio"
> > >
> > > :(
> > >
> > > IIUC the issue is that we can't easily come up with a single
> > > 'effective prio' number for an N-level hierarchy and compare it with
> > > another M-level hierarchy..
> > 
> > And then how do you apply it to the hierarchy?
> 
> (I am not disagreeing with you, just trying to state the reasons more
> explicitly)
> 
> I think applying it is easy: attach to the leaf cfs_rq? Which, IIUC, is
> what should happen with proxy execution, but by consuming the context of
> the donor directly without having to explicitly move the lock owner.
> 
> Finding out which hierarchy actually has the highest effective share is
> not straightforward, I agree. And if we add to that the possibility of
> any waiting task being moved to a different hierarchy at any time, this
> gets even more complex.
> 
> I need to go and find more info, but it seems Windows has some kind of
> boost mechanism to help the lock owner release the lock faster. I wonder
> if something like that could help as an interim solution. What we could
> do is move the task to the root group as a boost, with the simple
> reweight operation proposed here applied. As soon as it releases the
> lock we should restore it.
> 
> From what I heard, in Windows this boost happens randomly (don't quote
> me on this). I am not sure what our trigger mechanism could be. We sure
> don't want to do this unconditionally, otherwise we break fairness.
> 
> Maybe there are easier ways to introduce such a simple boost mechanism..

FWIW, I tried to find the top fair waiter and that wasn't as trivial. I
needed to refactor a fair bit of the code that expects the top waiter to
be the leftmost..

And I haven't looked at that temporary boost mechanism for cfs yet. Maybe
I'll try that if I get a chance. For the time being, I have this bandaid
if anybody is interested in a temporary 'solution':

--->8---

From 7169519792f11a73f861a41dd7d5c9151dc44dd7 Mon Sep 17 00:00:00 2001
From: Qais Yousef <qyousef@...alina.io>
Date: Mon, 1 Apr 2024 03:04:00 +0100
Subject: [PATCH] sched/pi: Reweight fair_policy() tasks when inheriting prio

For fair tasks, inheriting the priority (nice) without reweighting is
a NOP, as the task's share won't change.

This is visible when running with PTHREAD_PRIO_INHERIT, where fair tasks
with low priority values are susceptible to starvation, leading to
priority-inversion-like impact on lock contention.

The logic in rt_mutex will reset these low priority fair tasks to nice
0, but without the additional reweight operation to actually update the
weights, it doesn't have the desired impact of boosting them to allow
them to run sooner/longer to release the lock.

Apply the reweight for fair_policy() tasks to achieve the desired boost
for those low nice value tasks. Note that boost here means resetting
their nice to 0, as this is what the current logic does for fair tasks.

We need to reinstate ordering fair tasks by their priority on the waiter
tree to ensure we inherit from the top waiter properly.

Handling of idle_policy() requires more code refactoring and is not done
yet. idle_policy() tasks are treated specially: they only run when the
CPU is idle and get a hardcoded low weight value. Changing weights won't
be enough without first promoting them to SCHED_OTHER.

Tested with a test program that creates three threads (a sketch of the
program appears after the patch):

	1. main thread that spawns a high prio and a low prio task and
	   busy loops

	2. low priority thread that holds a pthread mutex with the
	   PTHREAD_PRIO_INHERIT protocol. Runs at nice +10. Busy loops
	   after taking the lock.

	3. high priority thread that takes the same pthread mutex with
	   PTHREAD_PRIO_INHERIT, but made to start after the low priority
	   thread. Runs at nice 0. Should remain blocked by the low
	   priority thread.

All tasks are pinned to CPU0.

Without the patch I can see the low priority thread running only for
~10% of the time, which is what's expected without it being boosted.

With the patch the low priority thread runs for ~50% of the time, which
is what's expected if it gets boosted to nice 0.

I modified the test program logic afterwards to ensure that after
releasing the lock the low priority thread goes back to running for 10%
of the time, and it does.
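
These numbers line up with the default CFS weight table
(sched_prio_to_weight), assuming the main thread competes at nice 0
(weight 1024) while the high priority thread stays blocked: the nice +10
thread (weight 110) gets 110 / (1024 + 110) ~= 10% of the CPU, and once
boosted to nice 0 it gets 1024 / (1024 + 1024) = 50%.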

Reported-by: Yabin Cui <yabinc@...gle.com>
Signed-off-by: Qais Yousef <qyousef@...alina.io>
---
 kernel/locking/rtmutex.c | 7 +------
 kernel/sched/core.c      | 4 +++-
 2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 88d08eeb8bc0..4e155862aba1 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -345,12 +345,7 @@ static __always_inline bool unlock_rt_mutex_safe(struct rt_mutex_base *lock,
 
 static __always_inline int __waiter_prio(struct task_struct *task)
 {
-	int prio = task->prio;
-
-	if (!rt_prio(prio))
-		return DEFAULT_PRIO;
-
-	return prio;
+	return task->prio;
 }
 
 /*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1a914388144a..f22db270b0d9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7242,8 +7242,10 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 	} else {
 		if (dl_prio(oldprio))
 			p->dl.pi_se = &p->dl;
-		if (rt_prio(oldprio))
+		else if (rt_prio(oldprio))
 			p->rt.timeout = 0;
+		else if (!task_has_idle_policy(p))
+			reweight_task(p, prio - MAX_RT_PRIO);
 	}
 
 	__setscheduler_prio(p, prio);
-- 
2.34.1
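
For completeness, here is a minimal sketch of the kind of test program
described in the commit message. This is a reconstruction, not the
actual program used: the durations, the 1 second startup delay for the
high priority thread, and reading the per-thread CPU shares externally
(e.g. with top -H or pidstat -t) are all assumptions.

/*
 * Sketch of the PI reweight test: three threads pinned to CPU0.
 * Reconstructed from the commit message above; details are assumed.
 * Build: gcc -O2 -pthread pi_test.c -o pi_test
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <sys/resource.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t lock;

static void busy_loop(int seconds)
{
	time_t end = time(NULL) + seconds;

	while (time(NULL) < end)
		;		/* burn CPU */
}

static void *low_prio_thread(void *unused)
{
	/* On Linux, setpriority(PRIO_PROCESS, 0, ...) affects this thread. */
	setpriority(PRIO_PROCESS, 0, 10);	/* nice +10 */

	pthread_mutex_lock(&lock);
	busy_loop(10);				/* hold the lock while spinning */
	pthread_mutex_unlock(&lock);

	busy_loop(10);		/* should fall back to ~10% after unlock */
	return NULL;
}

static void *high_prio_thread(void *unused)
{
	sleep(1);		/* let the low prio thread take the lock first */

	/* Blocks here; nice 0 should be inherited by the lock owner. */
	pthread_mutex_lock(&lock);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_mutexattr_t attr;
	pthread_t low, high;
	cpu_set_t set;

	/* Pin to CPU0 before spawning; the threads inherit the affinity. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	pthread_mutex_init(&lock, &attr);

	pthread_create(&low, NULL, low_prio_thread, NULL);
	pthread_create(&high, NULL, high_prio_thread, NULL);

	busy_loop(20);		/* main thread competes at nice 0 */

	pthread_join(low, NULL);
	pthread_join(high, NULL);
	return 0;
}

While it runs, watch the low priority thread's CPU share with top -H:
without the patch it should hover around ~10% even while the high
priority thread is blocked on the lock; with the patch it should jump to
~50% until it unlocks.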

