Message-ID: <20200713153234.GC707159@kernel.org>
Date: Mon, 13 Jul 2020 18:32:34 +0300
From: Mike Rapoport <rppt@...nel.org>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: linux-kernel@...r.kernel.org, Alan Cox <alan@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Christopher Lameter <cl@...ux.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Idan Yaniv <idan.yaniv@....com>,
James Bottomley <jejb@...ux.ibm.com>,
Matthew Wilcox <willy@...radead.org>,
Peter Zijlstra <peterz@...radead.org>,
"Reshetova, Elena" <elena.reshetova@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Tycho Andersen <tycho@...ho.ws>, linux-api@...r.kernel.org,
linux-mm@...ck.org, Mike Rapoport <rppt@...ux.ibm.com>
Subject: Re: [RFC PATCH v2 4/5] mm: secretmem: use PMD-size pages to amortize
direct map fragmentation
On Mon, Jul 13, 2020 at 02:05:05PM +0300, Kirill A. Shutemov wrote:
> On Mon, Jul 06, 2020 at 08:20:50PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@...ux.ibm.com>
> >
> > Removing a PAGE_SIZE page from the direct map every time such page is
> > allocated for a secret memory mapping will cause severe fragmentation of
> > the direct map. This fragmentation can be reduced by using PMD-size pages
> > as a pool for small pages for secret memory mappings.
> >
> > Add a gen_pool per secretmem inode and lazily populate this pool with
> > PMD-size pages.
> >
> > Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
> > ---
> > mm/secretmem.c | 107 ++++++++++++++++++++++++++++++++++++++++---------
> > 1 file changed, 88 insertions(+), 19 deletions(-)
> >
> > diff --git a/mm/secretmem.c b/mm/secretmem.c
> > index df8f8c958cc2..c6fcf6d76951 100644
> > --- a/mm/secretmem.c
> > +++ b/mm/secretmem.c
> > @@ -5,6 +5,7 @@
> > #include <linux/memfd.h>
> > #include <linux/printk.h>
> > #include <linux/pagemap.h>
> > +#include <linux/genalloc.h>
> > #include <linux/pseudo_fs.h>
> > #include <linux/set_memory.h>
> > #include <linux/sched/signal.h>
> > @@ -23,24 +24,66 @@
> > #define SECRETMEM_UNCACHED 0x2
> >
> > struct secretmem_ctx {
> > + struct gen_pool *pool;
> > unsigned int mode;
> > };
> >
> > -static struct page *secretmem_alloc_page(gfp_t gfp)
> > +static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> > {
> > - /*
> > - * FIXME: use a cache of large pages to reduce the direct map
> > - * fragmentation
> > - */
> > - return alloc_page(gfp);
> > + unsigned long nr_pages = (1 << HPAGE_PMD_ORDER);
> > + struct gen_pool *pool = ctx->pool;
> > + unsigned long addr;
> > + struct page *page;
> > + int err;
> > +
> > + page = alloc_pages(gfp, HPAGE_PMD_ORDER);
> > + if (!page)
> > + return -ENOMEM;
> > +
> > + addr = (unsigned long)page_address(page);
> > + split_page(page, HPAGE_PMD_ORDER);
> > +
> > + err = gen_pool_add(pool, addr, HPAGE_PMD_SIZE, NUMA_NO_NODE);
> > + if (err) {
> > + __free_pages(page, HPAGE_PMD_ORDER);
> > + return err;
> > + }
> > +
> > + __kernel_map_pages(page, nr_pages, 0);
>
> It's worth noting that, unlike flush_tlb_kernel_range(),
> __kernel_map_pages() only flushes the local TLB, so other CPUs may still
> have access to the page. It shouldn't be a blocker, but it deserves a
> comment.
Sure.
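
Something along these lines, perhaps (just a sketch; I'm not sure yet
whether an explicit flush_tlb_kernel_range() belongs here or whether
the comment alone is enough):

	/*
	 * Remove the chunk from the direct map. Note that
	 * __kernel_map_pages() only flushes the local TLB, so other
	 * CPUs may still have stale, accessible mappings of these
	 * pages until their TLBs are flushed.
	 */
	__kernel_map_pages(page, nr_pages, 0);
	flush_tlb_kernel_range(addr, addr + HPAGE_PMD_SIZE);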
> > +
> > + return 0;
> > +}
> > +
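
For context, the allocation side (not quoted above) hands out
PAGE_SIZE pieces of these already-unmapped chunks via gen_pool_alloc().
Roughly, and simplified compared to the actual patch:

	/* simplified sketch, refill-on-demand and error paths omitted */
	addr = gen_pool_alloc(ctx->pool, PAGE_SIZE);
	if (!addr)
		return NULL;	/* pool is empty, caller refills it */

	return virt_to_page(addr);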
--
Sincerely yours,
Mike.