Message-ID: <20181031092942.GJ744@hirez.programming.kicks-ass.net>
Date: Wed, 31 Oct 2018 10:29:42 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Andy Lutomirski <luto@...capital.net>, nadav.amit@...il.com,
Kees Cook <keescook@...omium.org>,
Igor Stoppa <igor.stoppa@...il.com>,
Mimi Zohar <zohar@...ux.vnet.ibm.com>,
Dave Chinner <david@...morbit.com>,
James Morris <jmorris@...ei.org>,
Michal Hocko <mhocko@...nel.org>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
linux-integrity <linux-integrity@...r.kernel.org>,
linux-security-module <linux-security-module@...r.kernel.org>,
Igor Stoppa <igor.stoppa@...wei.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Jonathan Corbet <corbet@....net>,
Laura Abbott <labbott@...hat.com>,
Randy Dunlap <rdunlap@...radead.org>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 10/17] prmem: documentation
On Tue, Oct 30, 2018 at 02:25:51PM -0700, Matthew Wilcox wrote:
> On Tue, Oct 30, 2018 at 11:51:17AM -0700, Andy Lutomirski wrote:
> > Finally, one issue: rare_alloc() is going to utterly suck
> > performance-wise due to the global IPI when the region gets zapped out
> > of the direct map or otherwise made RO. This is the same issue that
> > makes all existing XPO efforts so painful. We need to either optimize
> > the crap out of it somehow or we need to make sure it’s not called
> > except during rare events like device enumeration.
>
> Batching operations is kind of the whole point of the VM ;-) Either
> this rare memory gets used a lot, in which case we'll want to create slab
> caches for it, make it an MM zone and the whole nine yards, or it's not
> used very much in which case it doesn't matter that performance sucks.
Yes, for the dynamic case something along those lines would be needed.
If we have a single rare zone, we could even have __GFP_RARE, or whatever
we end up calling it, to manage this.
The page allocator would have to grow a rare memblock type, and every
rare alloc would allocate from a rare memblock; when none is available,
creating a new rare block would set up the mappings etc.
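Something like the sketch below, purely to illustrate the shape; none of
__GFP_RARE, rare_alloc(), struct rare_block or the helpers exist, and the
write path is left out (writes would presumably go through a temporary
writable alias as in the prmem series):

/*
 * Illustrative only: carve rare allocations out of pre-protected
 * blocks, so the expensive mapping change (and the global TLB flush
 * behind it) is paid once per block instead of once per allocation.
 */
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/set_memory.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define RARE_BLOCK_ORDER	9	/* 2MB on x86-64 with 4K pages */
#define RARE_BLOCK_SIZE		(PAGE_SIZE << RARE_BLOCK_ORDER)

struct rare_block {
	struct list_head	list;
	void			*base;
	size_t			used;	/* naive bump allocator */
};

static LIST_HEAD(rare_blocks);
static DEFINE_SPINLOCK(rare_lock);

static struct rare_block *rare_block_create(void)
{
	struct rare_block *blk = kzalloc(sizeof(*blk), GFP_KERNEL);
	struct page *pages;

	if (!blk)
		return NULL;

	pages = alloc_pages(GFP_KERNEL, RARE_BLOCK_ORDER);
	if (!pages) {
		kfree(blk);
		return NULL;
	}
	blk->base = page_address(pages);
	blk->used = 0;

	/*
	 * The expensive part happens here, once per 2MB block:
	 * changing the protection of the linear mapping implies the
	 * global TLB shootdown Andy is worried about.
	 */
	set_memory_ro((unsigned long)blk->base, 1 << RARE_BLOCK_ORDER);

	return blk;
}

void *rare_alloc(size_t size)
{
	struct rare_block *blk;
	void *p;

	if (!size || size > RARE_BLOCK_SIZE)
		return NULL;

	spin_lock(&rare_lock);
	list_for_each_entry(blk, &rare_blocks, list) {
		if (blk->used + size <= RARE_BLOCK_SIZE)
			goto found;
	}
	spin_unlock(&rare_lock);

	/* No block with room; pay the mapping setup once for a new one. */
	blk = rare_block_create();
	if (!blk)
		return NULL;

	spin_lock(&rare_lock);
	list_add(&blk->list, &rare_blocks);
found:
	p = blk->base + blk->used;
	blk->used += size;
	spin_unlock(&rare_lock);

	return p;
}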
> For now, I'd suggest allocating 2MB chunks as needed, and having a
> shrinker to hand back any unused pieces.
Something like the percpu allocator?
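To make the shrinker half of that concrete, a rough sketch; it reuses the
made-up rare_block bits from above, and only the struct shrinker
callbacks, register_shrinker() and SHRINK_STOP are existing interface:

/*
 * Hand fully unused 2MB chunks back under memory pressure.
 * rare_block, rare_blocks, rare_lock and RARE_BLOCK_ORDER are the
 * invented names from the previous sketch, not real kernel API.
 */
#include <linux/shrinker.h>

static unsigned long rare_count_objects(struct shrinker *s,
					struct shrink_control *sc)
{
	struct rare_block *blk;
	unsigned long nr = 0;

	spin_lock(&rare_lock);
	list_for_each_entry(blk, &rare_blocks, list)
		if (!blk->used)		/* only completely unused chunks */
			nr++;
	spin_unlock(&rare_lock);

	return nr;
}

static unsigned long rare_scan_objects(struct shrinker *s,
				       struct shrink_control *sc)
{
	struct rare_block *blk, *tmp;
	unsigned long freed = 0;
	LIST_HEAD(reclaim);

	spin_lock(&rare_lock);
	list_for_each_entry_safe(blk, tmp, &rare_blocks, list) {
		if (blk->used)
			continue;
		list_move(&blk->list, &reclaim);
		if (++freed >= sc->nr_to_scan)
			break;
	}
	spin_unlock(&rare_lock);

	/*
	 * Undoing the protection is another mapping change plus TLB
	 * flush, so this should only run under actual memory pressure.
	 */
	list_for_each_entry_safe(blk, tmp, &reclaim, list) {
		set_memory_rw((unsigned long)blk->base, 1 << RARE_BLOCK_ORDER);
		free_pages((unsigned long)blk->base, RARE_BLOCK_ORDER);
		kfree(blk);
	}

	return freed ? freed : SHRINK_STOP;
}

static struct shrinker rare_shrinker = {
	.count_objects	= rare_count_objects,
	.scan_objects	= rare_scan_objects,
	.seeks		= DEFAULT_SEEKS,
};

/* registered once at init: register_shrinker(&rare_shrinker); */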