Date:	Thu, 12 Dec 2013 12:10:15 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...nel.org>,
	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] wait-simple: Introduce the simple waitqueue
 implementation

On Thu, 12 Dec 2013 11:51:37 -0500
Steven Rostedt <rostedt@...dmis.org> wrote:

> > 
> > Typically such a barrier comes from set_current_state(), the normal
> > pattern is something like:
> > 
> >   set_current_state(TASK_UNINTERRUPTIBLE)
> >   if (!cond)
> >   	schedule();
> >   __set_current_state(TASK_RUNNING);
> > 
> > vs
> > 
> >   cond = true;
> >   wake_up_process(&foo);
> 
> Hmm, that __set_current_state() in swait_prepare() does indeed seem
> buggy. I'm surprised that I didn't catch that, as I'm usually a
> stickler with set_current_state() (and I'm very paranoid when it comes
> to using __set_current_state()).
> 
> I'll have to dig deeper to see why I didn't change that.

OK, looking at my IRC logs from discussing this with Paul McKenney,
this was an optimization:

<rostedt> as set_current_state() may be too big of a heavy weight
<rostedt> It's usually to synchronize between wake ups and state set
<rostedt> but both the set state and the wakeup are within the same
          spin lock
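
For context, the "heavy weight" here is the full memory barrier:
set_current_state() is essentially a store to current->state followed
by smp_mb(), while __set_current_state() is just the bare store. As a
rough sketch (not the exact macros in any given tree):

	/* Bare store; no ordering against a later load of the condition. */
	#define __set_current_state(state_value)		\
		do { current->state = (state_value); } while (0)

	/* Store plus full barrier; a later "if (!cond)" load cannot be
	 * reordered before the ->state store. */
	#define set_current_state(state_value)			\
		do {						\
			current->state = (state_value);		\
			smp_mb();				\
		} while (0)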


So if we break up your code above, we have:

	raw_spin_lock_irqsave(&head->lock, flags);
	w->task = current;
	if (list_empty(&w->node)) {
		list_add(&w->node, &head->list);
		smp_mb();
	}
	__set_current_state(state);
	raw_spin_unlock_irqrestore(&head->lock, flags);

	if (!cond)
		schedule();


vs

	cond = true;

	raw_spin_lock_irqsave(&head->lock, flags);
	woken = __swait_wake_locked(head, state, num);
	raw_spin_unlock_irqrestore(&head->lock, flags);


That is, the change of state with respect to the list is synchronized
by head->lock. The only ordering we need is between adding ourselves to
the list and the later check of the condition. Once we are on the list,
we get woken up regardless.
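
Spelling that out (my reading, assuming the usual wait loop around the
snippet above), the two critical sections can only interleave in two
ways:

	waiter				waker
	------				------
	lock(head->lock)
	list_add(); set ->state
	unlock(head->lock)
					cond = true
					lock(head->lock)
					on the list -> wake it
					unlock(head->lock)

Here the wake-up sets us back to TASK_RUNNING, so even a stale read of
cond only costs one extra trip around the loop. In the other order:

	waiter				waker
	------				------
					cond = true
					lock(head->lock)
					list empty, nothing to wake
					unlock(head->lock)
	lock(head->lock)
	list_add(); set ->state
	unlock(head->lock)
	check cond -> true, no sleep

Here our acquire of head->lock comes after the waker's release, so the
cond = true store is guaranteed to be visible by the time we test it.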

But I think this is a micro-optimization, and probably still buggy: if
we are already on the list, we skip the memory barrier, and I can
imagine a race where we are woken up, set ourselves back to a sleeping
state, and then miss seeing the condition.
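
For reference, the lost wakeup that the full barrier in
set_current_state() guards against has this shape (whether head->lock
really closes the window on the already-on-the-list path is exactly my
doubt above):

	waiter				waker
	------				------
	->state = TASK_UNINTERRUPTIBLE
	  (store still sitting in the
	   store buffer)
	load cond == false
					cond = true
					wake_up_process():
					  sees ->state == TASK_RUNNING,
					  does nothing
	->state store finally
	  becomes visible
	schedule()  /* sleeps, cond already true */

Without a full barrier between the ->state store and the cond load, the
CPU is free to let the load complete first, which is what opens this
window.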

Paul,

You can remove the smp_mb() from __swait_enqueue() and instead replace
the __set_current_state() with set_current_state() in
swait_prepare_locked().
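
Concretely, the end state I have in mind would look something like this
(bodies and signatures reconstructed from the snippet above, not copied
from the actual tree):

	static inline void __swait_enqueue(struct swait_head *head,
					   struct swaiter *w)
	{
		list_add(&w->node, &head->list);
		/* no smp_mb() here anymore */
	}

	static inline void swait_prepare_locked(struct swait_head *head,
						struct swaiter *w, int state)
	{
		w->task = current;
		if (list_empty(&w->node))
			__swait_enqueue(head, w);
		/*
		 * set_current_state() instead of __set_current_state():
		 * the full barrier orders both the list_add() above and
		 * the ->state store before the caller's later load of
		 * the wait condition, whether or not we were already
		 * queued.
		 */
		set_current_state(state);
	}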

Thanks!

-- Steve
