Open Source and information security mailing list archives
 
Message-ID: <1401729299.2204.27.camel@dabdike.int.hansenpartnership.com>
Date:	Mon, 02 Jun 2014 10:14:59 -0700
From:	James Bottomley <James.Bottomley@...senPartnership.com>
To:	paulmck@...ux.vnet.ibm.com
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Mikulas Patocka <mpatocka@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	jejb@...isc-linux.org, deller@....de,
	John David Anglin <dave.anglin@...l.net>,
	linux-parisc@...r.kernel.org, linux-kernel@...r.kernel.org,
	chegu_vinod@...com, Waiman.Long@...com, tglx@...utronix.de,
	riel@...hat.com, akpm@...ux-foundation.org, davidlohr@...com,
	hpa@...or.com, andi@...stfloor.org, aswin@...com,
	scott.norton@...com, Jason Low <jason.low2@...com>
Subject: Re: [PATCH v2] introduce atomic_pointer to fix a race condition in
 cancelable mcs spinlocks

On Mon, 2014-06-02 at 09:43 -0700, Paul E. McKenney wrote:
> On Mon, Jun 02, 2014 at 06:25:25PM +0200, Peter Zijlstra wrote:
> > On Mon, Jun 02, 2014 at 12:00:45PM -0400, Mikulas Patocka wrote:
> > >  struct optimistic_spin_queue {
> > > -	struct optimistic_spin_queue *next, *prev;
> > > +	atomic_pointer(struct optimistic_spin_queue *) next;
> > > +	struct optimistic_spin_queue *prev;
> > >  	int locked; /* 1 if lock acquired */
> > >  };
> > >  
> > > Index: linux-3.15-rc8/include/asm-generic/atomic-long.h
> > > ===================================================================
> > > --- linux-3.15-rc8.orig/include/asm-generic/atomic-long.h	2014-06-02 17:11:17.000000000 +0200
> > > +++ linux-3.15-rc8/include/asm-generic/atomic-long.h	2014-06-02 17:11:50.000000000 +0200
> > > @@ -255,4 +255,31 @@ static inline long atomic_long_add_unles
> > >  
> > >  #endif  /*  BITS_PER_LONG == 64  */
> > >  
> > > +#define atomic_pointer(type)						\
> > > +union {									\
> > > +	atomic_long_t __a;						\
> > > +	type __t;							\
> > > +	char __check_sizeof[sizeof(type) == sizeof(long) ? 1 : -1];	\
> > > +}
> > 
> > That's still entirely disgusting, and afaict entirely redundant. You can
> > do that test in the operators below just fine.
> > 
> > > +#define ATOMIC_POINTER_INIT(i)	{ .__t = (i) }
> > > +
> > > +#define atomic_pointer_read(v)	((typeof((v)->__t))atomic_long_read(&(v)->__a))
> > > +
> > > +#define atomic_pointer_set(v, i)		({			\
> > > +	typeof((v)->__t) __i = (i);					\
> > > +	atomic_long_set(&(v)->__a, (long)(__i));			\
> > > +})
> > > +
> > > +#define atomic_pointer_xchg(v, i)		({			\
> > > +	typeof((v)->__t) __i = (i);					\
> > > +	(typeof((v)->__t))atomic_long_xchg(&(v)->__a, (long)(__i));	\
> > > +})
> > > +
> > > +#define atomic_pointer_cmpxchg(v, old, new)	({			\
> > > +	typeof((v)->__t) __old = (old);					\
> > > +	typeof((v)->__t) __new = (new);					\
> > > +	(typeof((v)->__t))atomic_long_cmpxchg(&(v)->__a, (long)(__old), (long)(__new));\
> > > +})
> > 
> > And I can't say I'm a particular fan of these ops either, as alternative
> > I'm almost inclined to just exclude parisc from using opt spinning.
> 
> That is an excellent point for this particular issue.  Do parisc systems
> really support enough CPUs to make queued spinlocks worthwhile?  If not,
> maybe we should just have parisc stick with traditional spinlocks.

Yes and no.  No for Linux, because the only high-CPU-count system is the
Superdome, which we've never managed to boot Linux on (it has some
complexities in its bus architecture) and which we're not likely to try,
because the installations tend to cost north of US$1m.  For the server
systems we do have a few high-CPU-count ones, but we lost access to them
when HP dismantled the parisc Linux lab.  Currently the standard is
about 4 CPUs.

I think just not using queued spinlocks is fine for us.  Should anyone
ever try the large CPU systems, we can revisit.

James


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
