Message-ID: <54E4E479.4050003@colorfullife.com>
Date:	Wed, 18 Feb 2015 20:14:01 +0100
From:	Manfred Spraul <manfred@...orfullife.com>
To:	Oleg Nesterov <oleg@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
CC:	Peter Zijlstra <peterz@...radead.org>,
	Kirill Tkhai <ktkhai@...allels.com>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
	Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles

Hi Oleg,

On 02/18/2015 04:59 PM, Oleg Nesterov wrote:
> Let's look at sem_lock(). I never looked at this code before, I can be
> easily wrong. Manfred will correct me. But at first glance we can write
> the oversimplified pseudo-code:
>
> 	spinlock_t local, global;
>
> 	bool my_lock(bool try_local)
> 	{
> 		if (try_local) {
> 			spin_lock(&local);
> 			if (!spin_is_locked(&global))
> 				return true;
> 			spin_unlock(&local);
> 		}
>
> 		spin_lock(&global);
> 		spin_unlock_wait(&local);
> 		return false;
> 	}
>
> 	void my_unlock(bool drop_local)
> 	{
> 		if (drop_local)
> 			spin_unlock(&local);
> 		else
> 			spin_unlock(&global);
> 	}
>
> it assumes that the "local" lock is cheaper than "global", the usage is
>
> 	bool xxx = my_lock(condition);
> 	/* CRITICAL SECTION */
> 	my_unlock(xxx);
>
> Now. Unless I missed something, my_lock() does NOT need a barrier BEFORE
> spin_unlock_wait() (or spin_is_locked()). Either my_lock(true) should see
> spin_is_locked(global) == T, or my_lock(false)->spin_unlock_wait() should
> see that "local" is locked and wait.
I would agree:
There is no need for a barrier: spin_unlock_wait() is just a read, and the
barriers come from spin_lock() and spin_unlock().

The barrier exists to protect something like a "force_global" flag
(complex_count in ipc/sem.c):

> spinlock_t local, global;
> bool force_global;
> bool my_lock(bool try_local)
> {
> 	if (try_local) {
> 		spin_lock(&local);
> 		if (!spin_is_locked(&global)) {
> 			if (!force_global) {
> 				return true;
> 			}
> 		}
> 		spin_unlock(&local);
> 	}
>
> 	spin_lock(&global);
> 	spin_unlock_wait(&local);
> 	return false;
> }
>
> void my_unlock(bool drop_local)
> {
> 	if (drop_local)
> 		spin_unlock(&local);
> 	else
> 		spin_unlock(&global);
> }

force_global can only be set by the owner of &global.
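
As an illustration only (this is not code from ipc/sem.c, just more of the
same pseudocode, with a made-up name), the setter side could look roughly
like this, assuming force_global may only change while &global is held:

	void my_set_force_global(void)
	{
		spin_lock(&global);
		/* a my_lock(true) owner may still be inside its critical
		 * section; wait for it before switching modes */
		spin_unlock_wait(&local);
		force_global = true;
		spin_unlock(&global);
	}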

> Another question is do we need a barrier AFTER spin_unlock_wait(). I do not
> know what ipc/sem.c actually needs, but in general (I think) this does need
> mb(). Otherwise my_lock / my_unlock itself does not have the proper acq/rel
> semantics. For example, my_lock(false) can miss the changes which were done
> under my_lock(true).
How could that happen?
I thought that

thread A:
	protected_var = 1234;
	spin_unlock(&lock_a);

thread B:
	spin_lock(&lock_b);
	if (protected_var)
		...

is safe, i.e. that there is no need for the acquire and the release to be
done on the same lock.
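
Spelled out as a standalone userspace sketch (pthread spinlocks standing in
for the kernel locks; all names are made up for this example only), the
pattern I mean is:

/* illustration only -- not kernel code, pthread spinlocks as stand-ins */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock_a, lock_b;
static int protected_var;

static void *thread_a(void *arg)
{
	pthread_spin_lock(&lock_a);
	protected_var = 1234;
	pthread_spin_unlock(&lock_a);	/* release, but on lock_a */
	return NULL;
}

static void *thread_b(void *arg)
{
	pthread_spin_lock(&lock_b);	/* acquire, but on lock_b */
	if (protected_var)
		printf("saw protected_var = %d\n", protected_var);
	pthread_spin_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_spin_init(&lock_a, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&lock_b, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&a, NULL, thread_a, NULL);
	pthread_create(&b, NULL, thread_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}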

--
	Manfred

