Date:	Tue, 1 May 2012 09:43:43 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Avi Kivity <avi@...hat.com>,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>, mingo@...e.hu,
	jeremy@...p.org, mtosatti@...hat.com, kvm@...r.kernel.org,
	x86@...nel.org, vatsa@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org, hpa@...or.com
Subject: Re: [RFC PATCH v1 3/5] KVM: Add paravirt kvm_flush_tlb_others

On Tue, May 01, 2012 at 06:16:46PM +0200, Peter Zijlstra wrote:
> On Tue, 2012-05-01 at 18:36 +0300, Avi Kivity wrote:
> 
> > > > What bounds the amount of memory waiting to be freed during an rcu grace
> > > > period?
> > >
> > > Most RCU implementations don't have limits, so that could be quite a
> > > lot. I think preemptible RCU has a batch limit at which point it tries
> > > rather hard to force a grace period, but I'm not sure if even that
> > > provides a hard limit.

All of the TREE_RCU variants will get more aggressive about forcing
grace periods if any given CPU has more than 10,000 callbacks posted.
When this happens, the call_rcu() variants will themselves try to push
the grace period along.
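
(For the curious, this is the qhimark logic in __call_rcu() in
kernel/rcutree.c; qhimark defaults to 10,000.  A simplified sketch,
with several details elided:

	if (unlikely(++rdp->qlen > rdp->qlen_last_fqs_check + qhimark)) {
		rdp->blimit = LONG_MAX;		/* Drop the per-batch callback limit. */
		force_quiescent_state(rsp, 0);	/* Push the grace period along. */
		rdp->qlen_last_fqs_check = rdp->qlen;
	}

The real code also checks whether a grace period was forced recently
before invoking force_quiescent_state() again.)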

> > > Practically though, I haven't had reports of PPC/Sparc going funny
> > > because of this.
> > 
> > It could be considered a DoS if a user is able to free page tables
> > faster than rcu is able to recycle them, possibly triggering the oom
> > killer (should that force a grace period before firing from the hip?)
> 
> One would think that would be a good thing, yes. However I cannot seem
> to find anything like that in the current OOM killer. David, Paul, I
> seem to have vague recollections of a discussion about RCU vs OOM, what
> was the resolution (if anything) and would something like the below make
> sense?
> 
> ---
>  mm/oom_kill.c |    3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 46bf2ed5..244a371 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -607,6 +607,9 @@ int try_set_zonelist_oom(struct zonelist *zonelist, gfp_t gfp_mask)
>  	struct zone *zone;
>  	int ret = 1;
>  
> +	synchronize_sched();
> +	synchronize_rcu();

This will wait for a grace period, but not for the callbacks, which are
the things that actually free the memory.  Given that, should we instead
do something like:

	rcu_barrier();

Note that rcu_barrier() and rcu_barrier_sched() are one and the same
for CONFIG_PREEMPT=n kernels, and there seem to be far more uses of
call_rcu() than of call_rcu_sched(), so I left out rcu_barrier_sched().
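
In patch form, the alternative would then look something like the
following (an untested sketch, same spot in try_set_zonelist_oom()):

--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -607,6 +607,9 @@ int try_set_zonelist_oom(struct zonelist *zonelist, gfp_t gfp_mask)
 	struct zone *zone;
 	int ret = 1;
 
+	/* Wait for pending RCU callbacks to be invoked, not just for a grace period. */
+	rcu_barrier();
+
 	spin_lock(&zone_scan_lock);
 	for_each_zone_zonelist(zone, z, zonelist, gfp_zone(gfp_mask)) {
 		if (zone_is_oom_locked(zone)) {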

That said, this does have the effect of delaying the startup of the OOM
killer, and it does nothing to tell RCU that accelerating grace periods
would be a good thing.  If the DoS attack is a theoretical possibility
rather than a real bug, is a pure wait on RCU the right approach?

Alternative approaches include:

1.	OOM killer calls into RCU, which arranges to become more
	aggressive about forcing grace periods.  (For example, RCU
	could set a flag that caused it to act as if there were
	lots of callbacks posted; a rough sketch follows this list.)

2.	RCU provides an API that forces grace periods, perhaps
	invoked from a separate kthread so that the OOM killer can
	proceed in parallel with RCU's grace-period forcing.

3.	Like #2, but invoke it a bit earlier than the OOM killer
	would normally start running.
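
A rough sketch of #1 (all names here are invented for illustration;
no such API exists today):

	/*
	 * Hypothetical RCU-side flag, set by the OOM killer and cleared
	 * by RCU once callback counts drop back to something reasonable.
	 */
	static atomic_t rcu_oom_mode = ATOMIC_INIT(0);

	void rcu_oom_hint(void)		/* Called from the OOM killer. */
	{
		atomic_set(&rcu_oom_mode, 1);
	}

	/* ...and in __call_rcu(), treat every CPU as overloaded: */
	rdp->qlen++;
	if (atomic_read(&rcu_oom_mode) ||
	    rdp->qlen > rdp->qlen_last_fqs_check + qhimark)
		force_quiescent_state(rsp, 0);	/* Accelerate the grace period. */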

						Thanx, Paul

>  	spin_lock(&zone_scan_lock);
>  	for_each_zone_zonelist(zone, z, zonelist, gfp_zone(gfp_mask)) {
>  		if (zone_is_oom_locked(zone)) {
> 

