Message-ID: <20201116125803.GB3121429@hirez.programming.kicks-ass.net>
Date:   Mon, 16 Nov 2020 13:58:03 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Will Deacon <will@...nel.org>, Davidlohr Bueso <dave@...olabs.net>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: Loadavg accounting error on arm64

On Mon, Nov 16, 2020 at 01:53:55PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 16, 2020 at 11:49:38AM +0000, Mel Gorman wrote:
> > On Mon, Nov 16, 2020 at 09:10:54AM +0000, Mel Gorman wrote:
> > > I'll be looking again today to see whether I can find a mistake in
> > > the ordering of how sched_contributes_to_load is handled, but again,
> > > my lack of knowledge of the arm64 memory model means I'm a bit stuck,
> > > and a second set of eyes would be nice :(
> > > 
> > 
> > This morning, it's not particularly clear what orders the visibility
> > of sched_contributes_to_load in the same way as the other task fields
> > in the schedule vs try_to_wake_up paths. I thought the rq lock would
> > have ordered them, but something is clearly off or loadavg would not
> > be getting screwed up. It could be done with an rmb and wmb (tested,
> > and it hasn't blown up so far), but that's far too heavy.
> > smp_load_acquire/smp_store_release might be sufficient, although it's
> > less clear whether arm64 gives the necessary guarantees.
> > 
> > (This is still at the chucking-out-ideas stage, as I haven't
> > context-switched all the memory barrier rules back in.)
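
For reference, the explicit-barrier variant Mel mentions would look
roughly like the below; placement is illustrative only, not an actual
patch, and the acquire/release variant is spelled out further down:

	schedule():
		prev->sched_contributes_to_load = X;
		smp_wmb();
		WRITE_ONCE(prev->on_cpu, 0);

	try_to_wake_up():
		while (READ_ONCE(p->on_cpu))
			cpu_relax();
		smp_rmb();
		if (p->sched_contributes_to_load)
			...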
> 
> IIRC it should be so ordered by ->on_cpu.
> 
> We have:
> 
> 	schedule()
> 		prev->sched_contributes_to_load = X;
> 		smp_store_release(&prev->on_cpu, 0);
> 
> on the one hand, and:

Ah, my bad, ttwu() itself will of course wait for !p->on_cpu before we
even get here.
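
Concretely, the wait in question in try_to_wake_up() is (modulo
surrounding context in current kernels):

	smp_cond_load_acquire(&p->on_cpu, !VAL);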

> 	sched_ttwu_pending()
> 		if (WARN_ON_ONCE(p->on_cpu))
> 			smp_cond_load_acquire(&p->on_cpu, !VAL);
> 
> 		ttwu_do_activate()
> 			if (p->sched_contributes_to_load)
> 				...
> 
> on the other (for the remote case, which is the only 'interesting' one).
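
To see the RELEASE/ACQUIRE pairing in isolation, here is a minimal
user-space sketch using C11 atomics; the names mirror the fields above,
but none of this is kernel code:

	#include <stdatomic.h>
	#include <stdio.h>
	#include <threads.h>

	/* stand-ins for the task_struct fields discussed above */
	static int sched_contributes_to_load;
	static atomic_int on_cpu = 1;

	static int prev_task(void *arg)
	{
		(void)arg;
		sched_contributes_to_load = 1;	/* plain store */
		/* RELEASE: publishes the plain store above */
		atomic_store_explicit(&on_cpu, 0, memory_order_release);
		return 0;
	}

	static int waker(void *arg)
	{
		(void)arg;
		/* poor man's smp_cond_load_acquire(&p->on_cpu, !VAL) */
		while (atomic_load_explicit(&on_cpu, memory_order_acquire))
			;
		/* the acquire pairs with the release: must print 1 */
		printf("sched_contributes_to_load = %d\n",
		       sched_contributes_to_load);
		return 0;
	}

	int main(void)
	{
		thrd_t a, b;
		thrd_create(&b, waker, NULL);
		thrd_create(&a, prev_task, NULL);
		thrd_join(a, NULL);
		thrd_join(b, NULL);
		return 0;
	}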

Also see the "Notes on Program-Order guarantees on SMP systems."
comment in kernel/sched/core.c.
