Date:	Mon, 11 May 2015 17:51:25 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 4/7] preempt: Disable preemption from preempt_schedule*()
 callers

On Mon, May 11, 2015 at 05:08:21PM +0200, Frederic Weisbecker wrote:
> Let's gather the preempt operations (set PREEMPT_ACTIVE and disable
> preemption) into a single operation. This way we prepare to remove the
> preemption disabling in __schedule() in order to optimize this
> duty on the caller's side.
> 
> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Ingo Molnar <mingo@...nel.org>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
> ---
>  include/linux/preempt.h | 12 ++++++++++++
>  kernel/sched/core.c     | 20 ++++++--------------
>  2 files changed, 18 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> index 4689ef2..45da394 100644
> --- a/include/linux/preempt.h
> +++ b/include/linux/preempt.h
> @@ -137,6 +137,18 @@ extern void preempt_count_sub(int val);
>  #define preempt_count_inc() preempt_count_add(1)
>  #define preempt_count_dec() preempt_count_sub(1)
>  
> +#define preempt_active_enter() \
> +do { \
> +	preempt_count_add(PREEMPT_ACTIVE + PREEMPT_DISABLE_OFFSET); \
> +	barrier(); \
> +} while (0)
> +
> +#define preempt_active_exit() \
> +do { \
> +	barrier(); \
> +	preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_DISABLE_OFFSET); \
> +} while (0)
> +
>  #ifdef CONFIG_PREEMPT_COUNT
>  
>  #define preempt_disable() \
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 8027cfd..182127a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2859,15 +2859,14 @@ void __sched schedule_preempt_disabled(void)
>  static void __sched notrace preempt_schedule_common(void)
>  {
>  	do {
> -		__preempt_count_add(PREEMPT_ACTIVE);
> +		preempt_active_enter();
>  		__schedule();
> -		__preempt_count_sub(PREEMPT_ACTIVE);
> +		preempt_active_exit();
>  
>  		/*
>  		 * Check again in case we missed a preemption opportunity
>  		 * between schedule and now.
>  		 */
> -		barrier();
>  	} while (need_resched());
>  }

So this patch adds an extra level of preempt_disable(); I suspect the
goal is to remove the preempt_disable() inside __schedule(), but as it
stands this patch is broken, no?