Message-ID: <20151112150058.GA30321@redhat.com>
Date:	Thu, 12 Nov 2015 16:00:58 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Boqun Feng <boqun.feng@...il.com>
Cc:	Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org,
	linux-kernel@...r.kernel.org, paulmck@...ux.vnet.ibm.com,
	corbet@....net, mhocko@...nel.org, dhowells@...hat.com,
	torvalds@...ux-foundation.org, will.deacon@....com,
	Michael Ellerman <mpe@...erman.id.au>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Paul Mackerras <paulus@...ba.org>
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()

On 11/12, Boqun Feng wrote:
>
> On Wed, Nov 11, 2015 at 08:39:53PM +0100, Oleg Nesterov wrote:
> >
> > 	object_t *object;
> > 	spinlock_t lock;
> >
> > 	void update(void)
> > 	{
> > 		object_t *o;
> >
> > 		spin_lock(&lock);
> > 		o = READ_ONCE(object);
> > 		if (o) {
> > 			BUG_ON(o->dead);
> > 			do_something(o);
> > 		}
> > 		spin_unlock(&lock);
> > 	}
> >
> > 	void destroy(void) // can be called only once, can't race with itself
> > 	{
> > 		object_t *o;
> >
> > 		o = object;
> > 		object = NULL;
> >
> > 		/*
> > 		 * pairs with lock/ACQUIRE. The next update() must see
> > 		 * object == NULL after spin_lock();
> > 		 */
> > 		smp_mb();
> >
> > 		spin_unlock_wait(&lock);
> >
> > 		/*
> > 		 * pairs with unlock/RELEASE. The previous update() has
> > 		 * already passed BUG_ON(o->dead).
> > 		 *
> > 		 * (Yes, yes, in this particular case it is not needed,
> > 		 *  we can rely on the control dependency).
> > 		 */
> > 		smp_mb();
> >
> > 		o->dead = true;
> > 	}
> >
> > I believe the code above is correct and it needs the barriers on both sides.
> >
>
> Hmm.. probably incorrect.. because the ACQUIRE semantics of spin_lock()
> only guarantee that the memory operations following spin_lock() can't
> be reordered before the *LOAD* part of spin_lock(), not the *STORE* part,
> i.e. the case below can happen (assuming spin_lock() is implemented
> as an ll/sc loop)
>
> 	spin_lock(&lock):
> 	  r1 = *lock; // LL, r1 == 0
> 	o = READ_ONCE(object); // could be reordered here.
> 	  *lock = 1; // SC
>
> This could happen because of the ACQUIRE semantics of spin_lock(), and
> the current implementation of spin_lock() on PPC allows this to happen.
>
> (Cc PPC maintainers for their opinions on this one)

In this case the code above is obviously wrong. And I do not understand
how we can rely on spin_unlock_wait() then.
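
To restate that point outside of the kernel, here is a minimal user-space
model of such a lock (C11 atomics, made-up model_* names, illustration only,
not the real arch code). An ACQUIRE lock only has to order later accesses
against the load half of the atomic operation, so on an ll/sc machine a later
load may well be satisfied between the "LL" and the "SC":

	#include <stdatomic.h>

	typedef struct { atomic_int locked; } model_spinlock_t;

	static void model_spin_lock(model_spinlock_t *l)
	{
		int expected;

		do {
			expected = 0;	/* "LL": the lock looks free */
			/*
			 * A later load, e.g. READ_ONCE(object) in update(),
			 * may be satisfied here, before the "SC" below makes
			 * the lock acquisition visible to spin_unlock_wait().
			 */
		} while (!atomic_compare_exchange_weak_explicit(&l->locked,
				&expected, 1, memory_order_acquire,
				memory_order_relaxed));
	}

	static void model_spin_unlock(model_spinlock_t *l)
	{
		atomic_store_explicit(&l->locked, 0, memory_order_release);
	}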

And afaics do_exit() is buggy too then, see below.

> I think it's OK for it to act as an ACQUIRE (with a proper barrier) or even
> just a control dependency to pair with spin_unlock(). For example, the
> following snippet in do_exit() is OK, except the smp_mb() is redundant,
> unless I'm missing something subtle:
>
> 	/*
> 	 * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
> 	 * when the following two conditions become true.
> 	 *   - There is a race condition on mmap_sem (it is acquired by
> 	 *     exit_mm()), and
> 	 *   - An SMI occurs before setting TASK_RUNNING
> 	 *     (or the hypervisor of a virtual machine switches to another guest).
> 	 *  As a result, we may become TASK_RUNNING after becoming TASK_DEAD.
> 	 *
> 	 * To avoid it, we have to wait for the release of tsk->pi_lock, which
> 	 * is held by try_to_wake_up().
> 	 */
> 	smp_mb();
> 	raw_spin_unlock_wait(&tsk->pi_lock);

Perhaps it is me who missed something. But I don't think we can remove
this mb(). And at the same time it can't help on PPC if I understand
your explanation above correctly.

To simplify, let's ignore exit_mm/down_read/etc. The exiting task does

	current->state = TASK_UNINTERRUPTIBLE;
	// without schedule() in between
	current->state = TASK_RUNNING;

	smp_mb();
	spin_unlock_wait(pi_lock);

	current->state = TASK_DEAD;
	schedule();

and we need to ensure that if we race with try_to_wake_up(TASK_UNINTERRUPTIBLE)
it can't change TASK_DEAD back to RUNNING.

Without smp_mb() this can be reordered: spin_unlock_wait(pi_lock) can
read the old "unlocked" state of pi_lock before we set TASK_UNINTERRUPTIBLE,
so in fact we could have

	current->state = TASK_UNINTERRUPTIBLE;
	
	spin_unlock_wait(pi_lock);

	current->state = TASK_RUNNING;

	current->state = TASK_DEAD;

and this can obviously race with ttwu() which can take pi_lock and see
state == TASK_UNINTERRUPTIBLE after spin_unlock_wait().
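
Just to spell out the other side, here is a user-space model of the race
(pthreads + C11 atomics, made-up model_* names, illustration only; the
lock+unlock pair below is a stronger stand-in for spin_unlock_wait(), which
is exactly the primitive in question):

	#include <stdatomic.h>
	#include <pthread.h>

	enum { MODEL_RUNNING, MODEL_UNINTERRUPTIBLE, MODEL_DEAD };

	static _Atomic int model_state = MODEL_RUNNING;
	static pthread_mutex_t model_pi_lock = PTHREAD_MUTEX_INITIALIZER;

	/* the waker: takes pi_lock, rechecks ->state, makes the task RUNNING */
	static void model_ttwu(void)
	{
		pthread_mutex_lock(&model_pi_lock);
		if (atomic_load(&model_state) == MODEL_UNINTERRUPTIBLE)
			atomic_store(&model_state, MODEL_RUNNING);
		pthread_mutex_unlock(&model_pi_lock);
	}

	/* the exiting task: MODEL_DEAD must not be overwritten by the waker */
	static void model_do_exit(void)
	{
		atomic_store(&model_state, MODEL_UNINTERRUPTIBLE);
		atomic_store(&model_state, MODEL_RUNNING);

		atomic_thread_fence(memory_order_seq_cst);	/* the smp_mb() */

		/* stand-in for spin_unlock_wait(&pi_lock) */
		pthread_mutex_lock(&model_pi_lock);
		pthread_mutex_unlock(&model_pi_lock);

		atomic_store(&model_state, MODEL_DEAD);
	}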

And, if I understand you correctly, this smp_mb() can't help on PPC.
try_to_wake_up() can read task->state before it writes to *pi_lock.
To me this doesn't really differ from the code above,

	CPU 1 (do_exit)				CPU_2 (ttwu)

						spin_lock(pi_lock):
						  r1 = *pi_lock; // r1 == 0;
	p->state = TASK_UNINTERRUPTIBLE;
						state = p->state;
	p->state = TASK_RUNNING;
	smp_mb();
	spin_unlock_wait(pi_lock);
						*pi_lock = 1;

	p->state = TASK_DEAD;
						if (state & TASK_UNINTERRUPTIBLE) // true
							p->state = RUNNING;

No?

And smp_mb__before_spinlock() looks wrong too then.

Oleg.
