Message-ID: <20171016144410.z3vcx3mddb53l3sq@node.shutemov.name>
Date: Mon, 16 Oct 2017 17:44:10 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Nitin Gupta <ngupta@...are.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...capital.net>,
Cyrill Gorcunov <gorcunov@...nvz.org>,
Borislav Petkov <bp@...e.de>, linux-mm <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Minchan Kim <minchan@...nel.org>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [PATCH 2/6] mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS
On Fri, Oct 13, 2017 at 05:00:12PM -0700, Nitin Gupta wrote:
> On Fri, Sep 29, 2017 at 7:08 AM, Kirill A. Shutemov
> <kirill.shutemov@...ux.intel.com> wrote:
> > With boot-time switching between paging mode we will have variable
> > MAX_PHYSMEM_BITS.
> >
> > Let's use the maximum variable possible for CONFIG_X86_5LEVEL=y
> > configuration to define zsmalloc data structures.
> >
> > The patch introduces MAX_POSSIBLE_PHYSMEM_BITS to cover such case.
> > It also suits well to handle PAE special case.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > Cc: Minchan Kim <minchan@...nel.org>
> > Cc: Nitin Gupta <ngupta@...are.org>
> > Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
> > ---
> > arch/x86/include/asm/pgtable-3level_types.h | 1 +
> > arch/x86/include/asm/pgtable_64_types.h | 2 ++
> > mm/zsmalloc.c | 13 +++++++------
> > 3 files changed, 10 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/pgtable-3level_types.h b/arch/x86/include/asm/pgtable-3level_types.h
> > index b8a4341faafa..3fe1d107a875 100644
> > --- a/arch/x86/include/asm/pgtable-3level_types.h
> > +++ b/arch/x86/include/asm/pgtable-3level_types.h
> > @@ -43,5 +43,6 @@ typedef union {
> > */
> > #define PTRS_PER_PTE 512
> >
> > +#define MAX_POSSIBLE_PHYSMEM_BITS 36
> >
> > #endif /* _ASM_X86_PGTABLE_3LEVEL_DEFS_H */
> > diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> > index 06470da156ba..39075df30b8a 100644
> > --- a/arch/x86/include/asm/pgtable_64_types.h
> > +++ b/arch/x86/include/asm/pgtable_64_types.h
> > @@ -39,6 +39,8 @@ typedef struct { pteval_t pte; } pte_t;
> > #define P4D_SIZE (_AC(1, UL) << P4D_SHIFT)
> > #define P4D_MASK (~(P4D_SIZE - 1))
> >
> > +#define MAX_POSSIBLE_PHYSMEM_BITS 52
> > +
> > #else /* CONFIG_X86_5LEVEL */
> >
> > /*
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 7c38e850a8fc..7bde01c55c90 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -82,18 +82,19 @@
> > * This is made more complicated by various memory models and PAE.
> > */
> >
> > -#ifndef MAX_PHYSMEM_BITS
> > -#ifdef CONFIG_HIGHMEM64G
> > -#define MAX_PHYSMEM_BITS 36
> > -#else /* !CONFIG_HIGHMEM64G */
> > +#ifndef MAX_POSSIBLE_PHYSMEM_BITS
> > +#ifdef MAX_PHYSMEM_BITS
> > +#define MAX_POSSIBLE_PHYSMEM_BITS MAX_PHYSMEM_BITS
> > +#else
> > /*
> > * If this definition of MAX_PHYSMEM_BITS is used, OBJ_INDEX_BITS will just
> > * be PAGE_SHIFT
> > */
> > -#define MAX_PHYSMEM_BITS BITS_PER_LONG
> > +#define MAX_POSSIBLE_PHYSMEM_BITS BITS_PER_LONG
> > #endif
> > #endif
> > -#define _PFN_BITS (MAX_PHYSMEM_BITS - PAGE_SHIFT)
> > +
> > +#define _PFN_BITS (MAX_POSSIBLE_PHYSMEM_BITS - PAGE_SHIFT)
> >
>
>
> I think we can avoid using this new constant in zsmalloc.
>
> The reason for trying to save on MAX_PHYSMEM_BITS is just to gain more
> bits for OBJ_INDEX_BITS, which would reduce ZS_MIN_ALLOC_SIZE. However,
> for all practical values of ZS_MAX_PAGES_PER_ZSPAGE, this minimum size
> remains 32 bytes.
>
> So, we can unconditionally use MAX_PHYSMEM_BITS = BITS_PER_LONG and
> thus OBJ_INDEX_BITS = PAGE_SHIFT.
As you understand the topic better than I do, could you prepare the patch?
--
Kirill A. Shutemov