Date:	Tue, 16 Sep 2008 18:52:54 +0200
From:	Manfred Spraul <manfred@...orfullife.com>
To:	paulmck@...ux.vnet.ibm.com
CC:	linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
	mingo@...e.hu, akpm@...ux-foundation.org, dipankar@...ibm.com,
	josht@...ux.vnet.ibm.com, schamp@....com, niv@...ibm.com,
	dvhltc@...ibm.com, ego@...ibm.com, laijs@...fujitsu.com,
	rostedt@...dmis.org, peterz@...radead.org, penberg@...helsinki.fi,
	andi@...stfloor.org
Subject: Re: [PATCH, RFC] v4 scalable classic RCU implementation

Hi Paul,

Paul E. McKenney wrote:
> +/*
> + * Scan the leaf rcu_node structures, processing dyntick state for any that
> + * have not yet encountered a quiescent state, using the function specified.
> + * Returns 1 if the current grace period ends while scanning (possibly
> + * because we made it end).
> + */
> +static int rcu_process_dyntick(struct rcu_state *rsp, long lastcomp,
> +			       int (*f)(struct rcu_data *))
> +{
> +	unsigned long bit;
> +	int cpu;
> +	unsigned long flags;
> +	unsigned long mask;
> +	struct rcu_node *rnp_cur = rsp->level[NUM_RCU_LVLS - 1];
> +	struct rcu_node *rnp_end = &rsp->node[NUM_RCU_NODES];
> +
> +	for (; rnp_cur < rnp_end; rnp_cur++) {
> +		mask = 0;
> +		spin_lock_irqsave(&rnp_cur->lock, flags);
> +		if (rsp->completed != lastcomp) {
> +			spin_unlock_irqrestore(&rnp_cur->lock, flags);
> +			return 1;
> +		}
> +		if (rnp_cur->qsmask == 0) {
> +			spin_unlock_irqrestore(&rnp_cur->lock, flags);
> +			continue;
> +		}
> +		cpu = rnp_cur->grplo;
> +		bit = 1;
> +		mask = 0;
> +		for (; cpu <= rnp_cur->grphi; cpu++, bit <<= 1) {
> +			if ((rnp_cur->qsmask & bit) != 0 && f(rsp->rda[cpu]))
> +				mask |= bit;
> +		}
>   
I'm still comparing my implementation with your code:
- f is called once for each cpu in the system, correct?
- if at least one cpu is in nohz mode, this loop will be needed for 
every grace period.

That means an O(NR_CPUS) loop with local interrupts disabled :-(
Is that correct?
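For reference, the structure I am asking about reduces to the following
userspace sketch. cpu_check() is a stand-in I made up for the f() callback,
and none of this is the kernel code itself, just a model of the loop shape:

```c
#include <stdio.h>

#define NCPUS 8

/* Stand-in for the per-cpu dyntick-state callback f(); purely
 * illustrative -- pretend even-numbered CPUs pass the check. */
static int cpu_check(int cpu)
{
	return cpu % 2 == 0;
}

/* Model of one leaf scan: qsmask marks CPUs that still owe a
 * quiescent state; the returned mask holds the bits that the real
 * code would clear afterwards under the node lock. */
unsigned long scan_leaf(unsigned long qsmask)
{
	unsigned long bit, mask = 0;
	int cpu;

	/* Same shape as the quoted loop: one iteration per CPU in
	 * the leaf, regardless of how many bits are actually set. */
	for (cpu = 0, bit = 1; cpu < NCPUS; cpu++, bit <<= 1)
		if ((qsmask & bit) != 0 && cpu_check(cpu))
			mask |= bit;
	return mask;
}
```

The point being: the work done per leaf is proportional to grphi - grplo,
not to the number of set bits in qsmask.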

Unfortunately, my solution is even worse:
My rcu_irq_exit() acquires a global spinlock when called on a nohz cpu.
A few cpus in cpu_idle, nohz, executing 50k network interrupts/sec would 
cacheline-thrash that spinlock.
I'm considering counting interrupts: if a nohz cpu executes more than a 
few interrupts/tick, then add a timer that checks rcu_pending().
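Roughly what I have in mind is the sketch below; every name and the
threshold value are invented for illustration, nothing here is existing
kernel API:

```c
/* Invented threshold: how many interrupts per tick a nohz cpu may
 * take before we fall back to polling rcu_pending() from a timer. */
#define IRQS_PER_TICK_THRESHOLD 4

struct nohz_cpu_state {
	unsigned int irqs_this_tick;
	int poll_timer_armed;
};

/* Called from the (hypothetical) irq-exit path on a nohz cpu.
 * Returns 1 exactly once, when the polling timer should be armed. */
int note_nohz_irq(struct nohz_cpu_state *s)
{
	s->irqs_this_tick++;
	if (!s->poll_timer_armed &&
	    s->irqs_this_tick > IRQS_PER_TICK_THRESHOLD) {
		s->poll_timer_armed = 1;	/* would arm the timer here */
		return 1;
	}
	return 0;
}
```

That keeps the common case (a handful of interrupts per tick) free of any
shared state; only a busy nohz cpu pays for the timer.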

Perhaps even that wouldn't be enough: I remember that the initial 
unhandled-irq detection code broke miserably on large SGI systems:
An atomic_inc(&global_var) in the local timer interrupt (i.e.: 
NR_CPUS*HZ calls/sec) caused such severe thrashing that the system 
wouldn't boot. IIRC that was with 512 cpus.
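The usual way out of that particular trap is per-cpu counters that are
touched only locally and summed on the (rare) read side. A userspace
sketch, with invented names, just to make the contrast with the global
atomic_inc() concrete:

```c
/* Illustrative per-cpu counter: each "cpu" increments its own slot,
 * so the hot path never writes a shared cache line. */
enum { SKETCH_NCPUS = 4 };

static long per_cpu_count[SKETCH_NCPUS];

void count_event(int cpu)
{
	per_cpu_count[cpu]++;	/* local cache line only */
}

/* The read side pays O(NCPUS), but it runs rarely, unlike the
 * NR_CPUS*HZ/sec increments that melted the 512-cpu machine. */
long read_count(void)
{
	long sum = 0;
	int i;

	for (i = 0; i < SKETCH_NCPUS; i++)
		sum += per_cpu_count[i];
	return sum;
}
```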


--
    Manfred
