Message-ID: <20150930050421.0268a360@gandalf.local.home>
Date:	Wed, 30 Sep 2015 05:04:21 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	mingo@...nel.org, linux-kernel@...r.kernel.org,
	torvalds@...ux-foundation.org, fweisbec@...il.com, oleg@...hat.com,
	umgwanakikbuti@...il.com, tglx@...utronix.de
Subject: Re: [PATCH v2 01/12] sched: Simplify INIT_PREEMPT_COUNT

On Wed, 30 Sep 2015 09:10:36 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> As per commit d86ee4809d03 ("sched: optimize cond_resched()") we need
> PREEMPT_ACTIVE to keep cond_resched() from working before the
> scheduler is set up.
> 
> However, keeping preemption disabled should do the same thing already,
> making the PREEMPT_ACTIVE part entirely redundant.
> 
> The only complication is !PREEMPT_COUNT kernels, where
> PREEMPT_DISABLED ends up being 0. Instead we use an unconditional
> PREEMPT_OFFSET to set preempt_count() even on !PREEMPT_COUNT kernels.
> 
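For anyone following along at home, the pieces in play look roughly like
this -- a sketch along the lines of the 4.2-era include/linux/sched.h and
asm-generic/preempt.h (names and values from memory, so double-check the
tree before quoting me):

	#define PREEMPT_SHIFT	0
	#define PREEMPT_OFFSET	(1UL << PREEMPT_SHIFT)	/* == 1 */
	#define PREEMPT_ENABLED	(0)

	#ifdef CONFIG_PREEMPT_COUNT
	# define PREEMPT_DISABLED	(1 + PREEMPT_ENABLED)
	#else
	/* preempt_disable() is a no-op here, so "disabled" is just 0 */
	# define PREEMPT_DISABLED	PREEMPT_ENABLED
	#endif

	/*
	 * cond_resched() only reschedules once the count is back at zero,
	 * so presetting INIT_PREEMPT_COUNT to PREEMPT_OFFSET (nonzero on
	 * all configs) keeps it inert until the idle setup resets the count.
	 */
	static __always_inline bool should_resched(void)
	{
		return unlikely(!preempt_count() && tif_need_resched());
	}
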
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  include/linux/sched.h |   11 +++++------
>  1 file changed, 5 insertions(+), 6 deletions(-)
> 
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -606,19 +606,18 @@ struct task_cputime_atomic {
>  #endif
>  
>  /*
> - * Disable preemption until the scheduler is running.
> - * Reset by start_kernel()->sched_init()->init_idle().
> + * Disable preemption until the scheduler is running -- use an unconditional
> + * value so that it also works on !PREEMPT_COUNT kernels.
>   *
> - * We include PREEMPT_ACTIVE to avoid cond_resched() from working
> - * before the scheduler is active -- see should_resched().
> + * Reset by start_kernel()->sched_init()->init_idle().

 Reset by start_kernel()->sched_init()->init_idle()->init_idle_preempt_count().
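
i.e. the thing that finally drops the count is something like this
(quoting asm-generic/preempt.h from memory, so verify against the tree):

	#define init_idle_preempt_count(p, cpu) do { \
		task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
	} while (0)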

Other than that.

Reviewed-by: Steven Rostedt <rostedt@...dmis.org>

-- Steve

>   */
> -#define INIT_PREEMPT_COUNT	(PREEMPT_DISABLED + PREEMPT_ACTIVE)
> +#define INIT_PREEMPT_COUNT	PREEMPT_OFFSET
>  
>  /**
>   * struct thread_group_cputimer - thread group interval timer counts
>   * @cputime_atomic:	atomic thread group interval timers.
>   * @running:		non-zero when there are timers running and
> - * 			@cputime receives updates.
> + *			@cputime receives updates.
>   *
>   * This structure contains the version of task_cputime, above, that is
>   * used for thread group CPU timer calculations.
> 
