Message-ID: <20240814055330.GA22686@noisy.programming.kicks-ass.net>
Date: Wed, 14 Aug 2024 07:53:30 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Valentin Schneider <vschneid@...hat.com>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
	mgorman@...e.de, linux-kernel@...r.kernel.org,
	kprateek.nayak@....com, wuyun.abel@...edance.com,
	youssefesmat@...omium.org, tglx@...utronix.de, efault@....de
Subject: Re: [PATCH 12/24] sched/fair: Prepare exit/cleanup paths for
 delayed_dequeue

On Wed, Aug 14, 2024 at 12:07:57AM +0200, Peter Zijlstra wrote:
> On Tue, Aug 13, 2024 at 11:54:21PM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 13, 2024 at 02:43:47PM +0200, Valentin Schneider wrote:
> > > On 27/07/24 12:27, Peter Zijlstra wrote:
> > > > @@ -12817,10 +12830,26 @@ static void attach_task_cfs_rq(struct ta
> > > >  static void switched_from_fair(struct rq *rq, struct task_struct *p)
> > > >  {
> > > >       detach_task_cfs_rq(p);
> > > > +	/*
> > > > +	 * Since this is called after changing class, this isn't quite right.
> > > > +	 * Specifically, this causes the task to get queued in the target class
> > > > +	 * and experience a 'spurious' wakeup.
> > > > +	 *
> > > > +	 * However, since 'spurious' wakeups are harmless, this shouldn't be a
> > > > +	 * problem.
> > > > +	 */
> > > > +	p->se.sched_delayed = 0;
> > > > +	/*
> > > > +	 * While here, also clear the vlag, it makes little sense to carry that
> > > > +	 * over the excursion into the new class.
> > > > +	 */
> > > > +	p->se.vlag = 0;
> > > 
> > > RQ lock is held, the task can't be current if it's ->sched_delayed; is a
> > > dequeue_task() not possible at this point?  Or just not worth it?
> > 
> > Hurmph, I really can't remember why I did it like this :-(
> 
> Obviously I remember it right after hitting send...
> 
> We've just done:
> 
> 	dequeue_task();
> 	p->sched_class = some_other_class;
> 	enqueue_task();
> 
> IOW, we're enqueued as some other class at this point. There is no way
> we can fix it up at this point.

With just a little more sleep than last night, perhaps you're right
after all. Yes, we're on a different class, but we can *still* dequeue
it again.


That is, something like the below ... I'll stick it on and see if
anything falls over.

---
 kernel/sched/fair.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 714826d97ef2..53c8f3ccfd0c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13105,20 +13105,16 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
 	/*
-	 * Since this is called after changing class, this isn't quite right.
-	 * Specifically, this causes the task to get queued in the target class
-	 * and experience a 'spurious' wakeup.
-	 *
-	 * However, since 'spurious' wakeups are harmless, this shouldn't be a
-	 * problem.
-	 */
-	p->se.sched_delayed = 0;
-	/*
-	 * While here, also clear the vlag, it makes little sense to carry that
-	 * over the excursion into the new class.
+	 * Since this is called after changing class, this is a little weird
+	 * and we cannot use DEQUEUE_DELAYED.
 	 */
-	p->se.vlag = 0;
-	p->se.rel_deadline = 0;
+	if (p->se.sched_delayed) {
+		dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
+		p->se.sched_delayed = 0;
+		p->se.rel_deadline = 0;
+		if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
+			p->se.vlag = 0;
+	}
 }
 
 static void switched_to_fair(struct rq *rq, struct task_struct *p)
