Message-ID: <20201116164928.GF3121392@hirez.programming.kicks-ass.net>
Date:   Mon, 16 Nov 2020 17:49:28 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Will Deacon <will@...nel.org>, Davidlohr Bueso <dave@...olabs.net>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: Loadavg accounting error on arm64

On Mon, Nov 16, 2020 at 03:29:46PM +0000, Mel Gorman wrote:
> On Mon, Nov 16, 2020 at 01:58:03PM +0100, Peter Zijlstra wrote:

> > > 	sched_ttwu_pending()
> > > 		if (WARN_ON_ONCE(p->on_cpu))
> > > 			smp_cond_load_acquire(&p->on_cpu)
> > > 
> > > 		ttwu_do_activate()
> > > 			if (p->sched_contributes_to_load)
> > > 				...
> > > 
> > > on the other (for the remote case, which is the only 'interesting' one).
> > 
> 
> But this side is interesting because I'm having trouble convincing
> myself it's 100% correct for sched_contributes_to_load. The write of
> prev->sched_contributes_to_load in the schedule() path has a big gap
> before it hits the smp_store_release(prev->on_cpu).
> 
> On the ttwu path, we have
> 
>         if (smp_load_acquire(&p->on_cpu) &&
>             ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
>                 goto unlock;
> 
> 	ttwu_queue_wakelist() queues the task on the wakelist and sends an IPI;
> 	on the receiving side, sched_ttwu_pending() calls ttwu_do_activate(),
> 	which reads sched_contributes_to_load
> 
> sched_ttwu_pending() is not necessarily using the same rq lock, so there is
> no protection there. The smp_load_acquire() has just been hit, but that still
> leaves a gap between when sched_contributes_to_load is written on one CPU and
> a parallel read of sched_contributes_to_load on the other.
> 
> So while we might be able to avoid a smp_rmb() before the read of
> sched_contributes_to_load and rely on p->on_cpu ordering there,
> we may still need a smp_wmb() after the rq->nr_uninterruptible increment
> instead of waiting until the smp_store_release() is hit while a task
> is scheduling. That would be a real memory barrier on arm64 and a plain
> compiler barrier on x86-64.
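A minimal user-space sketch of the release/acquire pairing in question, for the
case where the waker's acquire observes p->on_cpu == 0: smp_store_release() and
smp_cond_load_acquire() are modelled with C11 atomics here, and all the names
are analogues of the kernel fields rather than the kernel code itself.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* User-space analogues of the fields discussed above; not kernel code. */
static int sched_contributes_to_load;	/* plain write in __schedule()      */
static int nr_uninterruptible;		/* plain increment under the rq lock */
static atomic_int on_cpu = 1;		/* cleared by smp_store_release()    */

/* The CPU where the task blocks in __schedule(). */
static void *schedule_side(void *arg)
{
	(void)arg;
	sched_contributes_to_load = 1;	/* the early plain store           */
	nr_uninterruptible++;		/* rq->nr_uninterruptible++        */
	/* ... the "big gap" of context-switch work ... */
	atomic_store_explicit(&on_cpu, 0, memory_order_release);
	return NULL;
}

/* The waking CPU, standing in for the ttwu() side. */
static void *ttwu_side(void *arg)
{
	(void)arg;
	/* Analogue of smp_cond_load_acquire(&p->on_cpu, !VAL). */
	while (atomic_load_explicit(&on_cpu, memory_order_acquire))
		;
	/*
	 * The acquire pairs with the release store, so both plain stores
	 * made before it are visible here without extra barriers.
	 */
	printf("contributes=%d nr_uninterruptible=%d\n",
	       sched_contributes_to_load, nr_uninterruptible);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, ttwu_side, NULL);
	pthread_create(&b, NULL, schedule_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}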

I'm mighty confused by your words here, and by the patch below. What actual
scenario are you worried about?

If we take the WF_ON_CPU path, we IPI the CPU the task is ->on_cpu on.
So the IPI lands after the schedule() that clears ->on_cpu on the very
same CPU.
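
A rough single-threaded model of that argument: the sched_ttwu_pending()
analogue below only runs after the schedule() analogue has completed on the
same CPU, so its read of sched_contributes_to_load is program-ordered after
the write. Function and variable names are illustrative stand-ins for the
kernel ones, under the assumption that the IPI is only serviced once the
context switch has finished.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel state; not kernel code. */
static int sched_contributes_to_load;
static int on_cpu = 1;
static bool ipi_pending;

/* Analogue of the __schedule() that switches the task out. */
static void schedule_out(void)
{
	sched_contributes_to_load = 1;	/* early plain write               */
	/* ... rq->nr_uninterruptible++, context switch ... */
	on_cpu = 0;			/* smp_store_release() in the kernel */
}

/* Analogue of sched_ttwu_pending() run from the IPI. */
static void handle_ipi(void)
{
	/* Same CPU: program order alone makes the write visible here. */
	printf("on_cpu=%d contributes=%d\n", on_cpu, sched_contributes_to_load);
}

int main(void)
{
	ipi_pending = true;	/* waker saw ->on_cpu == 1 and sent the IPI */
	schedule_out();		/* IPI is not handled before this completes */
	if (ipi_pending)
		handle_ipi();
	return 0;
}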

> 
> > Also see the "Notes on Program-Order guarantees on SMP systems."
> > comment.
> 
> I did; it was the on_cpu ordering for the blocking case that had me
> looking at the smp_store_release() and smp_cond_load_acquire()
> implementations on arm64 in the first place, thinking that something in
> there must be breaking the on_cpu ordering. I'm re-reading it every so
> often while trying to figure out where the gap is or whether I'm
> imagining things.
> 
> Not fully tested, but it did not instantly break either.
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d2003a7d5ab5..877eaeba45ac 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4459,14 +4459,26 @@ static void __sched notrace __schedule(bool preempt)
>  		if (signal_pending_state(prev_state, prev)) {
>  			prev->state = TASK_RUNNING;
>  		} else {
> -			prev->sched_contributes_to_load =
> +			int acct_load =
>  				(prev_state & TASK_UNINTERRUPTIBLE) &&
>  				!(prev_state & TASK_NOLOAD) &&
>  				!(prev->flags & PF_FROZEN);
>  
> -			if (prev->sched_contributes_to_load)
> +			prev->sched_contributes_to_load = acct_load;
> +			if (acct_load) {
>  				rq->nr_uninterruptible++;
>  
> +				/*
> +				 * Pairs with the p->on_cpu ordering, either a
> +				 * smp_load_acquire() or smp_cond_load_acquire()
> +				 * in the ttwu path before ttwu_do_activate()
> +				 * reads p->sched_contributes_to_load. It is only
> +				 * after the nr_uninterruptible update happens
> +				 * that the ordering is critical.
> +				 */
> +				smp_wmb();
> +			}

Sorry, I can't follow, at all.
