Date:	Wed, 12 Jun 2013 14:15:32 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Kirill Tkhai <tkhai@...dex.ru>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ingo Molnar <mingo@...hat.com>, tglx@...utronix.de
Subject: Re: [PATCH] spin_unlock*_no_resched()

On Wed, Jun 12, 2013 at 04:06:47PM +0400, Kirill Tkhai wrote:
> There are many constructions like:
> 
> 	spin_unlock_irq(lock);
> 	schedule();
> 
> On a preemptible kernel we check whether the task needs to reschedule
> at the end of spin_unlock(). So if TIF_NEED_RESCHED is set we call
> schedule() twice, which adds a little overhead. Add primitives to
> avoid these situations (see the sketch after the quoted patch).
> 
> Signed-off-by: Kirill Tkhai <tkhai@...dex.ru>
> CC: Steven Rostedt <rostedt@...dmis.org>
> CC: Ingo Molnar <mingo@...hat.com>
> CC: Peter Zijlstra <peterz@...radead.org>
> ---
>  include/linux/spinlock.h         |   27 +++++++++++++++++++++++++++
>  include/linux/spinlock_api_smp.h |   37 +++++++++++++++++++++++++++++++++++++
>  include/linux/spinlock_api_up.h  |   13 +++++++++++++
>  kernel/spinlock.c                |   20 ++++++++++++++++++++
>  4 files changed, 97 insertions(+), 0 deletions(-)
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 7d537ce..35caa32 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -221,13 +221,24 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
>  #define raw_spin_lock_irq(lock)		_raw_spin_lock_irq(lock)
>  #define raw_spin_lock_bh(lock)		_raw_spin_lock_bh(lock)
>  #define raw_spin_unlock(lock)		_raw_spin_unlock(lock)
> +#define raw_spin_unlock_no_resched(lock)	\
> +	_raw_spin_unlock_no_resched(lock)
> +
>  #define raw_spin_unlock_irq(lock)	_raw_spin_unlock_irq(lock)
> +#define raw_spin_unlock_irq_no_resched(lock)	\
> +	_raw_spin_unlock_irq_no_resched(lock)
>  
>  #define raw_spin_unlock_irqrestore(lock, flags)		\
>  	do {							\
>  		typecheck(unsigned long, flags);		\
>  		_raw_spin_unlock_irqrestore(lock, flags);	\
>  	} while (0)
> +#define raw_spin_unlock_irqrestore_no_resched(lock, flags)	\
> +	do {							\
> +		typecheck(unsigned long, flags);		\
> +		_raw_spin_unlock_irqrestore_no_resched(lock, flags);	\
> +	} while (0)
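
A minimal sketch of the unlock-then-schedule pattern the changelog above
describes; wait_for_event() and struct my_ctx are made-up names for
illustration, and spin_unlock_irq_no_resched() is assumed to be the non-raw
wrapper the patch title implies (only the raw_* variants are visible in the
quoted hunk):

	/* assumes <linux/spinlock.h> and <linux/sched.h> */
	struct my_ctx {			/* made-up example structure */
		spinlock_t	lock;
		bool		event_ready;
	};

	/*
	 * Hypothetical wait loop.  On a preemptible kernel spin_unlock_irq()
	 * ends in preempt_enable(), which already passes through the
	 * scheduler when TIF_NEED_RESCHED is set; the explicit schedule()
	 * right after it is then a second pass.
	 */
	static void wait_for_event(struct my_ctx *ctx)
	{
		spin_lock_irq(&ctx->lock);
		while (!ctx->event_ready) {
			set_current_state(TASK_INTERRUPTIBLE);

			/* Today: the unlock may reschedule, then we schedule again. */
			spin_unlock_irq(&ctx->lock);
			schedule();

			/*
			 * With the proposed primitives the unlock would skip
			 * the preemption check and leave it to schedule():
			 *
			 *	spin_unlock_irq_no_resched(&ctx->lock);
			 *	schedule();
			 */
			spin_lock_irq(&ctx->lock);
		}
		__set_current_state(TASK_RUNNING);
		spin_unlock_irq(&ctx->lock);
	}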

So I absolutely hate this API because people can (and invariably will)
abuse it, much like they did/do with preempt_enable_no_resched().

IIRC Thomas even maps preempt_enable_no_resched() to preempt_enable() in
-rt to make sure we don't miss preemption points due to stupidity.

He converted the 'few' sane sites to use schedule_preempt_disabled(). In
that vein, does it make sense to introduce schedule_spin_locked()?
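
For reference, schedule_preempt_disabled() in kernel/sched/core.c is roughly
the first function below; the schedule_spin_locked() beneath it is only a
guess by analogy at what such a helper could look like, it does not exist in
the tree, and the _no_resched unlock it uses is only what this patch proposes:

	/* kernel/sched/core.c (roughly) */
	void __sched schedule_preempt_disabled(void)
	{
		sched_preempt_enable_no_resched();
		schedule();
		preempt_disable();
	}

	/*
	 * Purely hypothetical analogue for spinlocked sections; name,
	 * signature and body are guesses, not an existing kernel API.
	 */
	static void schedule_spin_locked(spinlock_t *lock)
	{
		spin_unlock_no_resched(lock);	/* from the proposed patch */
		schedule();
		spin_lock(lock);
	}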

Also, your patch 'fails' to make use of the new API.
