Message-ID: <20200818171330.GH27891@paulmck-ThinkPad-P72>
Date: Tue, 18 Aug 2020 10:13:30 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Michal Hocko <mhocko@...e.com>,
Uladzislau Rezki <urezki@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
"Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag

On Tue, Aug 18, 2020 at 06:55:11PM +0200, Thomas Gleixner wrote:
> On Tue, Aug 18 2020 at 09:13, Paul E. McKenney wrote:
> > On Tue, Aug 18, 2020 at 04:43:14PM +0200, Thomas Gleixner wrote:
> >> On Tue, Aug 18 2020 at 06:53, Paul E. McKenney wrote:
> >> > On Tue, Aug 18, 2020 at 09:43:44AM +0200, Michal Hocko wrote:
> >> >> Thomas had a good point that it doesn't really make much sense to
> >> >> optimize for flooders because that just makes them more effective.
> >> >
> >> > The point is not to make the flooders go faster, but rather for the
> >> > system to be robust in the face of flooders. Robust as in harder for
> >> > a flooder to OOM the system.
> >> >
> >> > And reducing the number of post-grace-period cache misses makes it
> >> > easier for the callback-invocation-time memory freeing to keep up with
> >> > the flooder, thus avoiding (or at least delaying) the OOM.
> >>
> >> Throttling the flooder is increasing robustness far more than reducing
> >> cache misses.
> >
> > True, but it takes time to identify a flooding event that needs to be
> > throttled (as in milliseconds). This time cannot be made up.
>
> Not really. A flooding event will deplete your preallocated pages very
> fast, so you have to go into the allocator and get new ones which
> naturally throttles the offender.

Should it turn out that we can in fact go into the allocator, completely
agreed.

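The throttling mechanism described above can be sketched as a small userspace model (this is illustrative only, not kernel code; the names `reserve`, `get_page`, and the counter are made up for the sketch): a preallocated reserve serves the fast path, and once a flooder drains it, every further request falls through to the slow allocator path, which in the kernel may sleep and thereby paces the offender.

```c
#include <assert.h>

/* Illustrative userspace model, not kernel code. A small preallocated
 * reserve backs the fast path; once a flooder drains it, each further
 * request must enter the (slow, possibly sleeping) allocator, which is
 * what naturally throttles the flooder. */

enum alloc_path { FROM_RESERVE, FROM_ALLOCATOR };

struct reserve {
	int pages;	/* preallocated pages remaining */
};

static enum alloc_path get_page(struct reserve *r, int *throttled)
{
	if (r->pages > 0) {
		r->pages--;
		return FROM_RESERVE;	/* fast, no allocator involvement */
	}
	(*throttled)++;			/* in the kernel, this is where the
					 * caller enters the page allocator
					 * and may sleep */
	return FROM_ALLOCATOR;
}
```

With a reserve of two pages, the first two requests are free of allocator involvement and every subsequent one pays the throttling cost.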
> So if your open/close thing uses the new single argument free which has
> to be called from sleepable context then the allocation either gives you
> a page or that thing has to wait. No fancy extras.

In the single-argument kvfree_rcu() case, completely agreed.

> You still can have a page reserved for your other regular things and
> once that is gone, you have to fall back to the linked list for
> those. But when that happens the extra cache misses are not your main
> problem anymore.

The extra cache misses are a problem in that case because they throttle
the reclamation, which anti-throttles the producer, especially in the
case where callback invocation is offloaded.
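The page-array versus linked-list distinction being argued here can be sketched as a userspace model (the names `krc_cache`, `queue_for_free`, and the tiny page size are invented for illustration; this is not the kernel's actual kfree_rcu machinery): while a page is available, pointers are packed into a dense array that reclaim can walk cheaply; once it fills or is absent, objects are chained through themselves, so reclaim must later dereference every object, which is the cache-miss path under discussion.

```c
#include <assert.h>
#include <stddef.h>

#define PTRS_PER_PAGE 4	/* tiny for the demo; a real page holds ~500 pointers */

struct obj {
	struct obj *next;	/* used only on the linked-list fallback path */
	int payload;
};

struct krc_cache {
	void *page[PTRS_PER_PAGE];	/* bulk page: dense array of pointers */
	int nr_in_page;
	int have_page;			/* is a preallocated page available? */
	struct obj *head;		/* fallback: chain objects through themselves */
};

/* Returns 1 if queued on the bulk page (cache-friendly at reclaim time),
 * 0 if chained via the object itself (reclaim touches every object). */
static int queue_for_free(struct krc_cache *c, struct obj *o)
{
	if (c->have_page && c->nr_in_page < PTRS_PER_PAGE) {
		c->page[c->nr_in_page++] = o;
		return 1;
	}
	/* No page space left: link the object into a list. Walking this
	 * list later incurs one potential cache miss per object. */
	o->next = c->head;
	c->head = o;
	return 0;
}
```

Queuing six objects against a single four-slot page puts the first four on the dense array and pushes the last two onto the per-object list, illustrating the fallback the thread is debating.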

							Thanx, Paul