Message-ID: <20200926143702.GI29330@paulmck-ThinkPad-P72>
Date: Sat, 26 Sep 2020 07:37:02 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Uladzislau Rezki <urezki@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...e.com>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
"Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC-PATCH 2/4] mm: Add __rcu_alloc_page_lockless() func.
On Fri, Sep 25, 2020 at 10:26:18AM +0200, Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 08:38:34AM -0700, Paul E. McKenney wrote:
> > On Thu, Sep 24, 2020 at 01:19:07PM +0200, Peter Zijlstra wrote:
> > > On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > > > The key point is "enough". We need pages to a) make fast progress and
> > > > b) support the single-argument kvfree_rcu(one_arg), not the other way
> > > > around. That "enough" depends on scheduler latency and on some vaguely
> > > > sized pool of pre-allocated pages: it might fall short, forcing refill
> > > > after refill, or we might overshoot and pay the memory overhead. So we
> > > > have timing issues here and not an accurate model. IMHO.
> > >
> > > I'm firmly opposed to the single-argument kvfree_rcu() idea; it
> > > requires memory to free memory.
> >
> > Not quite.
> >
> > First, there is a fallback when memory allocation fails. Second,
> > in heavy-use situations, there is only one allocation per about
> > 500 kvfree_rcu() calls on 64-bit systems. Third, there are other
> > long-standing situations that require allocating memory in order to
> > free memory.
>
> Some of which are quite broken. And yes, I'm aware of all that, I'm the
> one that started swap-over-NFS, which requires network traffic to free
> memory, which is one insane step further.
I could easily imagine that experience might have left some scars.
> But the way to make that 'work' is to carefully account for and
> pre-allocate (or size the reserve for) the memory required to make
> progress, and to strictly limit concurrency to ensure you stay within
> your bounds.
But your situation is different. When swapping over NFS, if you
cannot allocate the memory to do the I/O, you cannot free the memory
you are attempting to swap out, at least not unless you can kill the
corresponding process. So if you don't want to kill processes, as you
say, worst case is what matters.
The kvfree_rcu() situation is rather different. In all cases, there
is a fallback, namely using the existing rcu_head for double-argument
kvfree_rcu() or falling back to synchronize_rcu() for single-argument
kvfree_rcu(). As long as these fallbacks are sufficiently rare, the
system will probably survive.
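
To make the two cases concrete, here is a minimal caller-side sketch;
the struct and its fields are invented for illustration, and the
single-argument form is the one being discussed in this thread:

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical object; only the two-argument form needs the rcu_head. */
struct foo {
	struct rcu_head rh;
	int data;
};

static void free_foo_examples(struct foo *a, struct foo *b)
{
	/*
	 * Two-argument form: if no page of pointers can be allocated,
	 * the embedded rcu_head is used via call_rcu(), so there is
	 * always a no-allocation fallback.
	 */
	kvfree_rcu(a, rh);

	/*
	 * Single-argument form: there is no rcu_head to fall back on,
	 * so the fallback is synchronize_rcu() followed by an immediate
	 * kvfree(), which is why this form may only be used from
	 * sleepable context.
	 */
	kvfree_rcu(b);
}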
> > So I agree that it is a good general rule of thumb to avoid allocating
> > on free paths, but there are exceptions. This is one of them.
>
> The very first thing you need to do is prove that your memory usage is
> bounded, and then calculate your bound.
Again, you are confusing your old swap-over-NFS scars with the current
situation. They really are not the same.
> The problem is that with RCU you can't limit concurrency. call_rcu()
> can't block, and you can't wait for a grace period to end when you've
> run out of your reserve.
>
> That is, you don't have a bound, so no reserve what so ever is going to
> help.
Almost. A dedicated reserve large enough to keep use of the fallback
paths sufficiently rare would itself be too large to be practical.
Again, we can tolerate a small fraction of requests taking the fallback,
with emphasis on "small".
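
The "one allocation per about 500 kvfree_rcu() calls" figure above falls
out of simple arithmetic (a rough sketch assuming 4K pages and 64-bit
pointers; the exact per-page header size varies):

	4096 bytes per page / 8 bytes per pointer   = 512 pointer slots
	512 slots - a few slots of per-page header ~= 500 deferred kvfree()s
	                                              per page allocated

So as long as that single page allocation usually succeeds, the fallback
paths really are exercised for only a small fraction of requests.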
> You must have that callback_head fallback.
And we do have that callback_head fallback. And in the case of
single-argument kvfree_rcu(), there is also the synchronize_rcu()
fallback. As long as we can avoid using those fallbacks almost all of
the time, things will be OK. But we do need to be able to allocate
memory in the common case when there is memory to be had.
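
To sketch how those pieces fit together, here is a heavily simplified,
single-threaded illustration; the names (bulk_page, sketch_queue_kvfree)
are invented, and this is not the actual kvfree_rcu() implementation:

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical page of pointers awaiting a grace period. */
struct bulk_page {
	unsigned int nr;
	void *ptrs[500];		/* roughly one 4K page on 64-bit */
};

static struct bulk_page *cur_page;	/* per-CPU and locked in real life */

static void sketch_queue_kvfree(void *ptr, struct rcu_head *rhp,
				rcu_callback_t cb)
{
	/*
	 * Common case: stash the pointer in a page of pointers, allocating
	 * with GFP_NOWAIT | __GFP_NOWARN so this path never sleeps and
	 * does not dig into the atomic reserves.  (Handing a full page off
	 * to RCU is omitted from this sketch.)
	 */
	if (!cur_page || cur_page->nr >= ARRAY_SIZE(cur_page->ptrs)) {
		cur_page = (struct bulk_page *)
			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (cur_page)
			cur_page->nr = 0;
	}
	if (cur_page) {
		cur_page->ptrs[cur_page->nr++] = ptr;
		return;
	}

	if (rhp) {
		/* Double-argument fallback: the object already embeds an
		 * rcu_head, so no allocation is needed. */
		call_rcu(rhp, cb);
	} else {
		/* Single-argument fallback: wait for a grace period
		 * directly, then free; hence the sleepable-context rule. */
		synchronize_rcu();
		kvfree(ptr);
	}
}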
Thanx, Paul