Message-ID: <20201117092936.GA3121406@hirez.programming.kicks-ass.net>
Date:   Tue, 17 Nov 2020 10:29:36 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Will Deacon <will@...nel.org>
Cc:     Mel Gorman <mgorman@...hsingularity.net>,
        Davidlohr Bueso <dave@...olabs.net>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: Fix data-race in wakeup

On Tue, Nov 17, 2020 at 09:15:46AM +0000, Will Deacon wrote:
> On Tue, Nov 17, 2020 at 09:30:16AM +0100, Peter Zijlstra wrote:

> > Subject: sched: Fix data-race in wakeup
> > From: Peter Zijlstra <peterz@...radead.org>
> > Date: Tue Nov 17 09:08:41 CET 2020
> > 
> > Mel reported that on some ARM64 platforms loadavg goes bananas and
> > tracked it down to the following data race:
> > 
> >   CPU0					CPU1
> > 
> >   schedule()
> >     prev->sched_contributes_to_load = X;
> >     deactivate_task(prev);
> > 
> > 					try_to_wake_up()
> > 					  if (p->on_rq && ...) // false
> > 					  if (smp_load_acquire(&p->on_cpu) && // true
> > 					      ttwu_queue_wakelist())
> > 					        p->sched_remote_wakeup = Y;
> > 
> >     smp_store_release(&prev->on_cpu, 0);
> 
> (nit: I suggested this race over at [1] ;)

Ah, I'll amend and get you a Debugged-by line or something ;-)

> > where both p->sched_contributes_to_load and p->sched_remote_wakeup are
> > in the same word, and thus the stores X and Y race (and can clobber
> > one another's data).
> > 
> > Prior to commit c6e7bd7afaeb ("sched/core: Optimize ttwu()
> > spinning on p->on_cpu") the p->on_cpu handoff serialized access to
> > p->sched_remote_wakeup (just as it still does with
> > p->sched_contributes_to_load); that commit broke this by calling
> > ttwu_queue_wakelist() with p->on_cpu != 0.
> > 
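(To make that old serialization concrete, here is a userspace C11
sketch of the handoff -- illustrative only, the names merely echo the
kernel fields and this is not the kernel's code:

  #include <stdatomic.h>

  struct task {
  	unsigned contributes_to_load:1;	/* written by the task itself */
  	unsigned remote_wakeup:1;	/* written by the waker */
  	atomic_int on_cpu;
  };

  void schedule_out(struct task *p)	/* the CPU the task runs on */
  {
  	p->contributes_to_load = 1;	/* plain RMW of the shared word */
  	/* the release-store publishes every store above it */
  	atomic_store_explicit(&p->on_cpu, 0, memory_order_release);
  }

  void wake_up(struct task *p)		/* the waking CPU */
  {
  	/* spin until the task is really off the CPU */
  	while (atomic_load_explicit(&p->on_cpu, memory_order_acquire))
  		;
  	/*
  	 * The acquire pairs with the release above, so this RMW of the
  	 * same word can no longer race with schedule_out().
  	 */
  	p->remote_wakeup = 1;
  }

Compile with -std=c11 and drive the two functions from two threads to
exercise it. After c6e7bd7afaeb the wakelist path stores precisely
while on_cpu is still 1, hence the race.)
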
> > However, due to
> > 
> >   p->XXX			ttwu()
> >   schedule()			  if (p->on_rq && ...) // false
> >     smp_mb__after_spinlock()	  if (smp_load_acquire(&p->on_cpu) &&
> >     deactivate_task()		      ttwu_queue_wakelist())
> >       p->on_rq = 0;		        p->sched_remote_wakeup = X;
> > 
> > we can be sure any 'current' store is complete and 'current' is
> > guaranteed asleep. Therefore we can move p->sched_remote_wakeup into
> > the 'current' flags word.
> > 
> > Note: while the observed failure was loadavg accounting gone wrong due
> > to ttwu() clobbering p->sched_contributes_to_load, the reverse problem
> > is also possible where schedule() clobbers p->sched_remote_wakeup;
> > this could result in enqueue_entity() wrecking ->vruntime and causing
> > scheduling artifacts.
> > 
> > Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
> > Reported-by: Mel Gorman <mgorman@...hsingularity.net>
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> > ---
> >  include/linux/sched.h |   13 ++++++++++++-
> >  1 file changed, 12 insertions(+), 1 deletion(-)
> > 
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -775,7 +775,6 @@ struct task_struct {
> >  	unsigned			sched_reset_on_fork:1;
> >  	unsigned			sched_contributes_to_load:1;
> >  	unsigned			sched_migrated:1;
> > -	unsigned			sched_remote_wakeup:1;
> >  #ifdef CONFIG_PSI
> >  	unsigned			sched_psi_wake_requeue:1;
> >  #endif
> > @@ -785,6 +784,18 @@ struct task_struct {
> >  
> >  	/* Unserialized, strictly 'current' */
> >  
> > +	/*
> > +	 * p->in_iowait = 1;		ttwu()
> > +	 * schedule()			  if (p->on_rq && ...) // false
> > +	 *   smp_mb__after_spinlock();	  if (smp_load_acquire(&p->on_cpu) && // true
> > +	 *   deactivate_task()		      ttwu_queue_wakelist())
> > +	 *     p->on_rq = 0;			p->sched_remote_wakeup = X;
> > +	 *
> > +	 * Guarantees all stores of 'current' are visible before
> > +	 * ->sched_remote_wakeup gets used.
> 
> I'm still not sure this is particularly clear -- don't we want to highlight
> that the store of p->on_rq is unordered wrt the update to
> p->sched_contributes_to_load in deactivate_task()?

I can explicitly call that out I suppose.
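
(And for the "can clobber one another's data" part above, a minimal
runnable userspace demo -- invented names, obviously not kernel code:
two threads storing to adjacent bitfields with no handoff at all, where
one bit can come out 0:

  #include <pthread.h>
  #include <stdio.h>

  struct task {
  	unsigned contributes_to_load:1;
  	unsigned remote_wakeup:1;
  };

  static struct task t;

  static void *sched_side(void *arg)
  {
  	t.contributes_to_load = 1;	/* load word, set bit0, store word */
  	return NULL;
  }

  static void *ttwu_side(void *arg)
  {
  	t.remote_wakeup = 1;		/* load word, set bit1, store word */
  	return NULL;
  }

  int main(void)
  {
  	pthread_t a, b;

  	pthread_create(&a, NULL, sched_side, NULL);
  	pthread_create(&b, NULL, ttwu_side, NULL);
  	pthread_join(a, NULL);
  	pthread_join(b, NULL);
  	/*
  	 * Unlucky interleaving: both threads load the word as 0 and
  	 * each stores back only its own bit -> one update is lost.
  	 */
  	printf("%u %u\n", t.contributes_to_load, t.remote_wakeup);
  	return 0;
  }

The window is tiny, so it may take a loop around it to actually
observe a lost bit.)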

> I dislike bitfields with a passion, but the fix looks good:

I don't particularly hate them; in this case they're just a flags field
with named bits.
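
That is, a bitfield store is just sugar for roughly this (macro and
function names invented for the example):

  #define CONTRIBUTES_TO_LOAD	(1u << 0)
  #define REMOTE_WAKEUP		(1u << 1)

  static unsigned sched_flags;

  static void set_remote_wakeup(void)
  {
  	sched_flags |= REMOTE_WAKEUP;	/* load the word, OR in the bit, store it back */
  }

  static void clear_remote_wakeup(void)
  {
  	sched_flags &= ~REMOTE_WAKEUP;	/* clearing is a read-modify-write too */
  }

which is exactly why two CPUs storing to different "fields" of the same
word can stomp on each other.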

> Acked-by: Will Deacon <will@...nel.org>

Thanks!

> Now the million dollar question is why KCSAN hasn't run into this. Hrmph.

kernel/sched/Makefile:KCSAN_SANITIZE := n

might have something to do with that, I suppose.
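
(FWIW, the userspace demo above is the sort of thing TSan does catch;
something like

  gcc -O1 -g -pthread -fsanitize=thread demo.c && ./a.out

should report the two unsynchronized stores to the same word. On the
kernel side, presumably dropping that KCSAN_SANITIZE line and building
with CONFIG_KCSAN=y would have had a chance of spotting it.)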
