Message-ID: <20100310005401.GH5058@nowhere>
Date:	Wed, 10 Mar 2010 01:54:04 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Lai Jiangshan <laijs@...fujitsu.com>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	josh@...htriplett.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] rcu: don't ignore preempt_disable() in the idle
	loop

On Tue, Mar 09, 2010 at 07:13:27PM +0800, Lai Jiangshan wrote:
> 
> Currently, synchronize_sched() ignores preempt_disable()
> sequences in the idle loop. This makes synchronize_sched()
> less pure than it should be, and it hurts tracing.
> 
> Paul made a proposal for this before:
> http://lkml.org/lkml/2009/4/5/140
> http://lkml.org/lkml/2009/4/6/496
> But that old fix required hacking into every architecture's idle loop.
> 
> This is another try; it uses the fact that idle loops
> execute with preempt_count()=1.
> But I have not looked deeply into all the idle loops.
> 
> Signed-off-by: Lai Jiangshan <laijs@...fujitsu.com>
> ---
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index 3ec8160..0761723 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -80,6 +80,10 @@ DEFINE_PER_CPU(struct rcu_data, rcu_sched_data);
>  struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh_state);
>  DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
>  
> +#ifndef IDLE_CORE_LOOP_PREEMPT_COUNT
> +#define IDLE_CORE_LOOP_PREEMPT_COUNT (1)
> +#endif
> +
>  /*
>   * Return true if an RCU grace period is in progress.  The ACCESS_ONCE()s
>   * permit this function to be invoked without holding the root rcu_node
> @@ -1114,6 +1118,26 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
>  		raise_softirq(RCU_SOFTIRQ);
>  }
>  
> +static inline int rcu_idle_qs(int cpu)
> +{
> +	if (!idle_cpu(cpu))
> +		return 0;
> +
> +	if (!rcu_scheduler_active)
> +		return 0;
> +
> +	if (in_softirq())
> +		return 0;
> +
> +	if (hardirq_count() > (1 << HARDIRQ_SHIFT))
> +		return 0;
> +
> +	if ((preempt_count() & PREEMPT_MASK) > IDLE_CORE_LOOP_PREEMPT_COUNT)
> +		return 0;



This is neat. But I wonder about something: it means that in most of
the idle loop, we won't be able to execute the rcu callbacks.
So if I understand well, this is going to needlessly delay
rcu callbacks for no strong reason most of the time. I'm not sure
whether this will have any bad impact, but if so, maybe
we want this as a feature, and should check whether we currently have
any running users of it (I only have the function tracers in mind,
maybe I'm missing some others). Although I wonder how racy such
a check could be. At least we could make it a CONFIG option.

