Message-ID: <20200708150930.GA147323@debian-buster-darwi.lab.linutronix.de>
Date:   Wed, 8 Jul 2020 17:09:30 +0200
From:   "Ahmed S. Darwish" <a.darwish@...utronix.de>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        "Sebastian A. Siewior" <bigeasy@...utronix.de>,
        Steven Rostedt <rostedt@...dmis.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 06/20] seqlock: Extend seqcount API with associated
 locks

On Wed, Jul 08, 2020 at 02:29:38PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 08, 2020 at 12:33:14PM +0200, Ahmed S. Darwish wrote:
>
> > > +#define read_seqcount_begin(s)	do_read_seqcount_begin(__to_seqcount_t(s))
> > > +
> > > +static inline unsigned do_read_seqcount_begin(const seqcount_t *s)
> > > +{
> > ...
> >
> > Hmm, the __to_seqcount_t(s) force cast is not good. It will break the
> > argument type-safety of the seqcount macros that do not have either:
> >
> >     __associated_lock_is_preemptible() or
> >     __assert_associated_lock_held()
> >
> > in their path. This basically includes all the read path macros, and
> > even some others (e.g. write_seqcount_invalidate()).
> >
> > With the suggested force cast above, I can literally *pass anything* to
> > read_seqcount_begin() and friends, and the compiler won't say a thing.
> >
> > So, I'll restore __to_seqcount_t(s) to its original implementation:
>
> Right, I figured that the write side would be enough to catch glaring
> abuse. But sure.
>
> It's a bummer we didn't upgrade the minimum compiler version to 4.9,
> that would've given us _Generic(), which allows one to write this
> slightly less verbose I think.
>

Looking at 5429ef62bcf3 ("compiler/gcc: Raise minimum GCC version for
kernel builds to 4.8"), it seems that the decision to pick gcc 4.8
vs. 4.9 was kinda arbitrary.

Anyway, please continue below.

> How about something disgusting like this then?
>
...
> #define __SEQ_RT	IS_BUILTIN(CONFIG_PREEMPT_RT)
>
> SEQCOUNT_LOCKTYPE(raw_spinlock, raw_spinlock_t,	false,		lock)
> SEQCOUNT_LOCKTYPE(spinlock,	spinlock_t,		__SEQ_RT,	lock)
> SEQCOUNT_LOCKTYPE(rwlock,	rwlock_t,		__SEQ_RT,	lock)
> SEQCOUNT_LOCKTYPE(mutex,	struct mutex,		true,		lock)
> SEQCOUNT_LOCKTYPE(ww_mutex,	struct ww_mutex,	true,		lock->base)
>
> #if (defined(CONFIG_CC_IS_GCC) && CONFIG_GCC_VERSION < 40900) || defined(__CHECKER__)
>
> #define __seqprop_pick(const_expr, s, locktype, prop, otherwise)	\
> 	__builtin_choose_expr(const_expr,				\
> 			      __seqprop_##locktype##_##prop((void *)(s)), \
> 			      otherwise)
>
> extern void __seqprop_invalid(void);
>
> #define __seqprop(s, prop)								\
> 	__seqprop_pick(__same_type(*(s), seqcount_t), (s), seqcount, prop,		\
> 	  __seqprop_pick(__same_type(*(s), seqcount_raw_spinlock_t), (s), raw_spinlock, prop, \
> 	    __seqprop_pick(__same_type(*(s), seqcount_spinlock_t), (s), spinlock, prop,	\
> 	      __seqprop_pick(__same_type(*(s), seqcount_rwlock_t), (s), rwlock, prop,	\
> 	        __seqprop_pick(__same_type(*(s), seqcount_mutex_t), (s), mutex, prop,	\
> 	          __seqprop_pick(__same_type(*(s), seqcount_ww_mutex_t), (s), ww_mutex, prop, \
> 		    __seqprop_invalid()))))))
>
> #else
>
> #define __seqprop_case(s, locktype, prop) \
> 	seqcount_##locktype##_t: __seqprop_##locktype##_##prop((void *)s)
>
> #define __seqprop(s, prop)					\
> 	_Generic(*(s),						\
> 		 seqcount_t: __seqprop_seqcount_##prop((void*)s),\
> 		 __seqprop_case((s), raw_spinlock, prop),	\
> 		 __seqprop_case((s), spinlock, prop),		\
> 		 __seqprop_case((s), rwlock, prop),		\
> 		 __seqprop_case((s), mutex, prop),		\
> 		 __seqprop_case((s), ww_mutex, prop))
>
> #endif
>
> #define __to_seqcount_t(s)			__seqprop(s, ptr)
> #define __associated_lock_is_preemptible(s)	__seqprop(s, preempt)
> #define __assert_associated_lock_held(s)	__seqprop(s, assert)

Hmm, I'll prototype the whole thing (along with the PREEMPT_RT associated
lock()/unlock(), as you mentioned in the other e-mail), and come back.

Honestly, my first impression is that this is heading into too much
complexity and compaction, but let's finish the whole thing first.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH
