Message-ID: <20200930102745.GC3226834@linux.ibm.com>
Date: Wed, 30 Sep 2020 13:27:45 +0300
From: Mike Rapoport <rppt@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mike Rapoport <rppt@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andy Lutomirski <luto@...nel.org>,
Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Christopher Lameter <cl@...ux.com>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
Elena Reshetova <elena.reshetova@...el.com>,
"H. Peter Anvin" <hpa@...or.com>, Idan Yaniv <idan.yaniv@....com>,
Ingo Molnar <mingo@...hat.com>,
James Bottomley <jejb@...ux.ibm.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Matthew Wilcox <willy@...radead.org>,
Mark Rutland <mark.rutland@....com>,
Michael Kerrisk <mtk.manpages@...il.com>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Thomas Gleixner <tglx@...utronix.de>,
Shuah Khan <shuah@...nel.org>, Tycho Andersen <tycho@...ho.ws>,
Will Deacon <will@...nel.org>, linux-api@...r.kernel.org,
linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
linux-nvdimm@...ts.01.org, linux-riscv@...ts.infradead.org,
x86@...nel.org
Subject: Re: [PATCH v6 5/6] mm: secretmem: use PMD-size pages to amortize
direct map fragmentation
On Tue, Sep 29, 2020 at 05:15:52PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 29, 2020 at 05:58:13PM +0300, Mike Rapoport wrote:
> > On Tue, Sep 29, 2020 at 04:12:16PM +0200, Peter Zijlstra wrote:
>
> > > It will drop them down to 4k pages. Given enough inodes, and allocating
> > > only a single sekrit page per pmd, we'll shatter the directmap into 4k.
> >
> > Why? Secretmem allocates a PMD-size page per inode and uses it as a pool
> > of 4K pages for that inode. This way it ensures that
> > __kernel_map_pages() is always called on PMD boundaries.
>
> Oh, you unmap the 2m page upfront? I read it like you did the unmap at
> the sekrit page alloc, not the pool alloc side of things.
>
> Then yes, but then you're wasting gobs of memory. Basically you can pin
> 2M per inode while only accounting a single page.
Right, quite like THP :)
I considered using a global pool of 2M pages for secretmem and handing
out 4K pages to each inode from that global pool, but I decided to waste
memory in favor of simplicity.
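
To make the per-inode pool idea concrete, here is a rough userspace
sketch (not the actual secretmem code; the structure and helper names
are made up for illustration). Each inode owns one PMD-size (2M) chunk
and hands out 4K pages from it, so the direct map only ever gets
split or restored at 2M granularity:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE	4096UL
#define PMD_SIZE	(2UL * 1024 * 1024)
#define PAGES_PER_PMD	(PMD_SIZE / PAGE_SIZE)	/* 512 */

/* Per-inode pool: one PMD-size chunk carved into 4K pages. */
struct secretmem_pool {
	void *chunk;			/* PMD-aligned 2M area */
	bool  used[PAGES_PER_PMD];	/* which 4K pages are handed out */
};

static int pool_init(struct secretmem_pool *pool)
{
	/*
	 * In the kernel this would be a 2M allocation whose direct-map
	 * mapping is dropped once, for the whole PMD, at pool setup.
	 */
	if (posix_memalign(&pool->chunk, PMD_SIZE, PMD_SIZE))
		return -1;
	memset(pool->used, 0, sizeof(pool->used));
	return 0;
}

/* Hand out a single 4K page from the inode's 2M pool. */
static void *pool_alloc_page(struct secretmem_pool *pool)
{
	for (size_t i = 0; i < PAGES_PER_PMD; i++) {
		if (!pool->used[i]) {
			pool->used[i] = true;
			return (char *)pool->chunk + i * PAGE_SIZE;
		}
	}
	/*
	 * Pool exhausted: a real implementation would grab another
	 * PMD-size chunk here rather than fail.
	 */
	return NULL;
}

The cost is exactly what you describe: the whole 2M stays pinned even
if the inode only ever uses one 4K page from it.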
The previous version of this set included an additional patch that
allowed reserving a chunk of physical memory for a global secretmem pool
at boot time. David H. and I didn't reach an agreement about whether this
pool should be allocated directly from memblock or from CMA, so I've
dropped the boot-time reservation patch; it can always be added on top
later.
--
Sincerely yours,
Mike.