Date:	Wed, 30 Sep 2015 05:32:08 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	mingo@...nel.org, linux-kernel@...r.kernel.org,
	torvalds@...ux-foundation.org, fweisbec@...il.com, oleg@...hat.com,
	umgwanakikbuti@...il.com, tglx@...utronix.de
Subject: Re: [PATCH v2 03/12] sched: Create preempt_count invariant

On Wed, 30 Sep 2015 09:10:38 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> All preempt_count() numbers below are in units of PREEMPT_DISABLE_OFFSET.
> 
> Now that TASK_DEAD no longer results in preempt_count() == 3 during
> scheduling, we will always call context_switch() with preempt_count()
> == 2.
> 
> However, we don't always end up with preempt_count() == 2 in
> finish_task_switch() because new tasks get created with
> preempt_count() == 1.
> 
> Create FORK_PREEMPT_COUNT, set it to 2, and use it in the right
> places. Note that we cannot use INIT_PREEMPT_COUNT for this, as that
> serves another purpose (boot).
> 
> After this, preempt_count() is invariant across the context switch,
> with the exception of PREEMPT_ACTIVE.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  arch/x86/include/asm/preempt.h |    2 +-
>  include/asm-generic/preempt.h  |    2 +-
>  include/linux/sched.h          |   17 ++++++++++++-----
>  kernel/sched/core.c            |   23 +++++++++++++++++++++--
>  4 files changed, 35 insertions(+), 9 deletions(-)
> 
> --- a/arch/x86/include/asm/preempt.h
> +++ b/arch/x86/include/asm/preempt.h
> @@ -31,7 +31,7 @@ static __always_inline void preempt_coun
>   * must be macros to avoid header recursion hell
>   */
>  #define init_task_preempt_count(p) do { \
> -	task_thread_info(p)->saved_preempt_count = PREEMPT_DISABLED; \
> +	task_thread_info(p)->saved_preempt_count = FORK_PREEMPT_COUNT; \
>  } while (0)
>  
>  #define init_idle_preempt_count(p, cpu) do { \
> --- a/include/asm-generic/preempt.h
> +++ b/include/asm-generic/preempt.h
> @@ -24,7 +24,7 @@ static __always_inline void preempt_coun
>   * must be macros to avoid header recursion hell
>   */
>  #define init_task_preempt_count(p) do { \
> -	task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
> +	task_thread_info(p)->preempt_count = FORK_PREEMPT_COUNT; \
>  } while (0)
>  
>  #define init_idle_preempt_count(p, cpu) do { \
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -599,11 +599,7 @@ struct task_cputime_atomic {
>  		.sum_exec_runtime = ATOMIC64_INIT(0),		\
>  	}
>  
> -#ifdef CONFIG_PREEMPT_COUNT
> -#define PREEMPT_DISABLED	(1 + PREEMPT_ENABLED)
> -#else
> -#define PREEMPT_DISABLED	PREEMPT_ENABLED
> -#endif
> +#define PREEMPT_DISABLED	(PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)

Hmm, it looks to me that you removed all users of PREEMPT_DISABLED.

Did you add another user of it?

-- Steve

>  
>  /*
>   * Disable preemption until the scheduler is running -- use an unconditional
> @@ -613,6 +609,17 @@ struct task_cputime_atomic {
>   */
>  #define INIT_PREEMPT_COUNT	PREEMPT_OFFSET
>  

