Message-ID: <20151006160450.GS3604@twins.programming.kicks-ass.net>
Date:	Tue, 6 Oct 2015 18:04:50 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Boqun Feng <boqun.feng@...il.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
	Jonathan Corbet <corbet@....net>,
	Michal Hocko <mhocko@...nel.org>,
	David Howells <dhowells@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Will Deacon <will.deacon@....com>
Subject: Re: [PATCH] Documentation: Remove misleading examples of the
 barriers in wake_*()

On Mon, Sep 21, 2015 at 07:46:11PM +0200, Oleg Nesterov wrote:
> On 09/18, Peter Zijlstra wrote:
> >
> > the text is correct, right?
> 
> Yes, it looks good to me and helpful.
> 
> But damn. I forgot why exactly try_to_wake_up() needs rmb() after
> ->on_cpu check... It looks reasonable in any case, but I do not
> see any strong reason immediately.

I read it like the smp_rmb() we have for
acquire__after_spin_is_unlocked. Except, as you note below, we seem to
need an smp_read_barrier_depends() for control barriers as well...

(I'm starting to think we have more control deps than we were
thinking...)
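
FWIW, the pattern I have in mind is something like this (just a sketch;
x and lock stand in for any prior store and any lock):

	CPU 0				CPU 1
	-----				-----
	WRITE_ONCE(x, 1);		while (spin_is_locked(&lock))
	spin_unlock(&lock);			cpu_relax();
					smp_rmb();
					r1 = READ_ONCE(x); /* must be 1 */

The spin-until-unlocked loop plus the smp_rmb() order the load of the
lock word against the later load of x, much like a load-acquire would.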

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1947,7 +1947,13 @@ try_to_wake_up(struct task_struct *p, un
 	while (p->on_cpu)
 		cpu_relax();
 	/*
-	 * Pairs with the smp_wmb() in finish_lock_switch().
+	 * Combined with the control dependency above, we have an effective
+	 * smp_load_acquire() without the need for full barriers.
+	 *
+	 * Pairs with the smp_store_release() in finish_lock_switch().
+	 *
+	 * This ensures that tasks getting woken will be fully ordered against
+	 * their previous state and preserve Program Order.
 	 */
 	smp_rmb();
 
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1073,6 +1073,9 @@ static inline void finish_lock_switch(st
 	 * We must ensure this doesn't happen until the switch is completely
 	 * finished.
 	 *
+	 * In particular, the load of prev->state in finish_task_switch() must
+	 * happen before this.
+	 *
 	 * Pairs with the control dependency and rmb in try_to_wake_up().
 	 */
 	smp_store_release(&prev->on_cpu, 0);


The patch above updates the comments to clarify the release/acquire
pair on p->on_cpu.

> Say,
> 
> 	p->sched_contributes_to_load = !!task_contributes_to_load(p);
> 	p->state = TASK_WAKING;
> 
> we can actually do this before "while (p->on_cpu)", afaics. However
> we must not do this before the previous p->on_rq check.

No, we must not touch the task before p->on_cpu is cleared; up until
that point the task is owned by the 'previous' CPU.
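
Pictorially (a sketch, following the patch above):

	finish_lock_switch():
		/* prev still owned by the 'previous' CPU here */
		smp_store_release(&prev->on_cpu, 0);

	try_to_wake_up():
		while (p->on_cpu)
			cpu_relax();
		smp_rmb();
		/* only now may we touch the task */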

> So perhaps this rmb() helps to ensure task_contributes_to_load() can't
> happen before p->on_rq check...
> 
> As for "p->state = TASK_WAKING" we have the control dependency in both
> cases. But the modern fashion suggests to use _CTRL().

Yes, but I'm not sure we should go write:

	while (READ_ONCE_CTRL(p->on_cpu))
		cpu_relax();

Or:

	while (p->on_cpu)
		cpu_relax();

	smp_read_barrier_depends();

It seems to me that doing the smp_mb() (for Alpha) inside the loop might
be sub-optimal.
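
For reference, READ_ONCE_CTRL() is (roughly, from memory):

	#define READ_ONCE_CTRL(x)						\
	({									\
		typeof(x) __val = READ_ONCE(x);					\
		smp_read_barrier_depends(); /* Enforce control dependency. */	\
		__val;								\
	})

so spinning on it runs smp_read_barrier_depends() -- a full smp_mb() on
Alpha -- once per loop iteration, rather than once after the loop exits.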

That said, it would be good if Paul (or anyone really) could explain to me
the reason for: 5af4692a75da ("smp: Make control dependencies work on
Alpha, improve documentation"). The Changelog simply states that Alpha
needs the mb, but not how/why etc.

> Although cpu_relax()
> should imply barrier(), afaik this is not documented.

I think we're relying on that in many places..
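
(IIRC on x86 cpu_relax() boils down to:

	asm volatile("rep; nop" ::: "memory");

and that "memory" clobber is what gives us the compiler barrier, even
if nothing documents it as such.)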

> In short, I got lost ;) Now I don't even understand why we do not need
> another rmb() between p->on_rq and p->on_cpu. Suppose a thread T does
> 
> 	set_current_state(...);
> 	schedule();
> 
> it can be preempted in between; after that we have "on_rq && !on_cpu".
> Then it gets CPU again and calls schedule() which clears on_rq.
> 
> What guarantees that if ttwu() sees on_rq == 0 cleared by schedule()
> then it can _not_ still see the old value of on_cpu == 0?

Right, let me go have a think about that ;-)
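
To make sure I read you right, the interleaving you worry about is
something like (a sketch, not actual code):

	T:				ttwu():

	set_current_state(...);
	<preempted>		/* on_cpu: 1 -> 0 */
	<resumes>		/* on_cpu: 0 -> 1 */
	schedule();
	  p->on_rq = 0;			sees p->on_rq == 0
					sees p->on_cpu == 0 /* stale? */

with nothing on the ttwu() side ordering the two loads against each
other.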