Date:	Sat, 30 Sep 2006 19:22:26 +0530
From:	Dipankar Sarma <dipankar@...ibm.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Andrew Morton <akpm@...l.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Paul E. McKenney" <paulmck@...ibm.com>,
	LKML <linux-kernel@...r.kernel.org>, Jim Gettys <jg@...top.org>,
	John Stultz <johnstul@...ibm.com>,
	David Woodhouse <dwmw2@...radead.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Dave Jones <davej@...hat.com>
Subject: Re: [patch 08/23] dynticks: prepare the RCU code

On Sat, Sep 30, 2006 at 03:09:58PM +0200, Ingo Molnar wrote:
> * Dipankar Sarma <dipankar@...ibm.com> wrote:
> 
> > It is duplicating code. That can be easily fixed, but we need to 
> > figure out what we really want from RCU when we are about to switch 
> > off the ticks. It is hard if you want to finish off all the pending 
> > RCUs and go to nohz state. Can you live with backing out if there are 
> > pending RCUs ?
> 
> the thing is that when we go idle we /want/ to process whatever delayed 
> work there might be - rate limited or not. Do you agree with that 
> approach? I consider this a performance feature as well: this way we can 
> utilize otherwise lost idle time. It is not a problem that we don't 
> 'batch' this processing: we are really idle and we've got free cycles to 
> burn. We could even do an RCU processing loop that immediately breaks 
> out if need_resched() gets set [by an IRQ or by another CPU].

If you don't care about rate-limiting RCU processing (you wouldn't
in CONFIG_PREEMPT_RT), you still have to deal with the fact that
one CPU going idle doesn't guarantee that you can process all of
its pending RCU callbacks. You can process the finished ones, but
what about the ones that are still waiting for a grace period that
involves CPUs beyond the current one?
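
To make the distinction concrete: the kind of idle-time loop Ingo
describes above can only push through callbacks whose grace period has
already completed; it cannot force the grace period itself. A rough
sketch (illustrative only - rcu_pending() and need_resched() are the
existing interfaces, invoke_finished_callbacks() is a made-up helper
standing in for "run the callbacks whose grace period is over"):

	/* Run RCU work while idle; bail out as soon as there is real work. */
	static void idle_rcu_work(int cpu)
	{
		while (rcu_pending(cpu) && !need_resched()) {
			/*
			 * Only callbacks whose grace period has completed
			 * can run here; the rest still wait on other CPUs.
			 */
			if (!invoke_finished_callbacks(cpu))
				break;
		}
	}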

> 
> secondly, i think i saw functionality problems when RCU was not 
> completed before going idle - for example synchronize_rcu() on another 
> CPU would hang.

That is probably because of what I mentioned above. In the original
CONFIG_NO_IDLE_HZ, we don't go into the nohz state if there are
RCU callbacks pending on that cpu.

> 
> what approach would you suggest to achieve these goals?

There is no way to finish all the RCU callbacks on a given cpu
unless you are prepared to wait for a grace period or so.
So, you go idle, keep checking in the timer tick, and as soon
as all the callbacks are done, go to the nohz state. You can keep
processing callbacks on every idle tick, so that if you have only
finished RCU callbacks on that cpu, you can go to nohz right away.
I can add that API.
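
In idle-path terms the check would look roughly like this (sketch only;
rcu_needs_cpu() is just a name for the API I am offering to add, it
does not exist today, and stop_tick()/keep_tick() stand in for whatever
the dynticks code ends up doing; rcu_pending() and rcu_check_callbacks()
are the existing interfaces):

	/* from the timer tick while the cpu is idle */
	if (rcu_pending(cpu))
		rcu_check_callbacks(cpu, 0);	/* note quiescent state, kick callbacks */

	if (!rcu_needs_cpu(cpu))
		stop_tick();	/* nothing left on this cpu, safe to go nohz */
	else
		keep_tick();	/* keep ticking until the callbacks drain */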

Of course, you can do what cpu hotplug does - move the pending
callbacks to another CPU. But that is an expensive operation.
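
For reference, what the hotplug path does is splice the dead cpu's
callback lists onto a surviving cpu's rcu_data. Heavily simplified
(locking and the cur/done lists omitted; the real code is
rcu_offline_cpu()/rcu_move_batch() in kernel/rcupdate.c), it amounts
to:

	/* append the dead cpu's pending callbacks to the survivor's list */
	*other_rdp->nxttail = this_rdp->nxtlist;
	if (this_rdp->nxtlist)
		other_rdp->nxttail = this_rdp->nxttail;
	this_rdp->nxtlist = NULL;
	this_rdp->nxttail = &this_rdp->nxtlist;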

Thanks
Dipankar
