Message-ID: <20200207113417.GG14914@hirez.programming.kicks-ass.net>
Date: Fri, 7 Feb 2020 12:34:17 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Geert Uytterhoeven <geert@...ux-m68k.org>
Cc: linux-m68k <linux-m68k@...ts.linux-m68k.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Will Deacon <will@...nel.org>,
Michael Schmitz <schmitzmic@...il.com>,
Greg Ungerer <gerg@...ux-m68k.org>
Subject: Re: [PATCH -v2 08/10] m68k,mm: Extend table allocator for multiple
sizes
On Fri, Feb 07, 2020 at 11:56:40AM +0100, Geert Uytterhoeven wrote:
> Hoi Peter,
>
> On Fri, Jan 31, 2020 at 1:56 PM Peter Zijlstra <peterz@...radead.org> wrote:
> > In addition to the PGD/PMD table size (128*4) add a PTE table size
> > (64*4) to the table allocator. This completely removes the pte-table
> > overhead compared to the old code, even for dense tables.
>
> Thanks for your patch!
>
> > Notes:
> >
> > - the allocator gained a list_empty() check to deal with there not
> > being any pages at all.
> >
> > - the free mask is extended to cover more than the 8 bits required
> > for the (512 byte) PGD/PMD tables.
>
> Being an mm-illiterate, I don't understand the relation between the number
> of bits and the size (see below).
If the table translates 7 bits of the address, it will have 1<<7 entries.
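E.g., with sizeof(unsigned long) == 4 on m68k:

	1 << 7 = 128 entries, 128 * 4 = 512 bytes	(the 128*4 PGD/PMD tables)
	1 << 6 =  64 entries,  64 * 4 = 256 bytes	(the  64*4 PTE tables)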
> > - NR_PAGETABLE accounting is restored.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
>
> WARNING: Missing Signed-off-by: line by nominal patch author 'Peter
> Zijlstra <peterz@...radead.org>'
> (in all patches)
>
> I can fix that (the From?) up while applying.
I'm not sure where that warning comes from, but if you feel it needs
fixing, sure. I normally only add the (Intel) thing to the SoB. I've so
far never had complaints about that.
> > --- a/arch/m68k/mm/motorola.c
> > +++ b/arch/m68k/mm/motorola.c
> > @@ -72,24 +72,35 @@ void mmu_page_dtor(void *page)
> > arch/sparc/mm/srmmu.c ... */
> >
> > typedef struct list_head ptable_desc;
> > -static LIST_HEAD(ptable_list);
> > +
> > +static struct list_head ptable_list[2] = {
> > + LIST_HEAD_INIT(ptable_list[0]),
> > + LIST_HEAD_INIT(ptable_list[1]),
> > +};
> >
> > #define PD_PTABLE(page) ((ptable_desc *)&(virt_to_page(page)->lru))
> > #define PD_PAGE(ptable) (list_entry(ptable, struct page, lru))
> > -#define PD_MARKBITS(dp) (*(unsigned char *)&PD_PAGE(dp)->index)
> > +#define PD_MARKBITS(dp) (*(unsigned int *)&PD_PAGE(dp)->index)
> > +
> > +static const int ptable_shift[2] = {
> > + 7+2, /* PGD, PMD */
> > + 6+2, /* PTE */
> > +};
> >
> > -#define PTABLE_SIZE (PTRS_PER_PMD * sizeof(pmd_t))
> > +#define ptable_size(type) (1U << ptable_shift[type])
> > +#define ptable_mask(type) ((1U << (PAGE_SIZE / ptable_size(type))) - 1)
>
> So this is 0xff for PGD and PMD, like before, and 0xffff for PTE.
> Why the latter value?
The PGD/PMD tables translate 7 bits and are thus sizeof(unsigned long)
<< 7, or 512 bytes, big. Eight such tables fit in one 4k page; 0xFF is
8 bits set, one for each of the 8 512-byte fragments.
For the PTE tables, which translate 6 bits and are sizeof(unsigned
long) << 6, or 256 bytes, we can fit 16 in one 4k page, resulting in
0xFFFF.
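Purely as illustration, a userspace sketch of that arithmetic (the
shifts and macros are lifted from the patch; PAGE_SIZE, the printf
glue and main() are mine):

	#include <stdio.h>

	#define PAGE_SIZE 4096U		/* m68k 4k pages */

	static const int ptable_shift[2] = {
		7+2,	/* PGD, PMD: 128 entries * 4 bytes = 512 */
		6+2,	/* PTE:       64 entries * 4 bytes = 256 */
	};

	#define ptable_size(type)	(1U << ptable_shift[type])
	#define ptable_mask(type)	((1U << (PAGE_SIZE / ptable_size(type))) - 1)

	int main(void)
	{
		printf("pgd/pmd: size %u, %u per page, mask %#x\n",
		       ptable_size(0), PAGE_SIZE / ptable_size(0),
		       ptable_mask(0));
		printf("pte:     size %u, %u per page, mask %#x\n",
		       ptable_size(1), PAGE_SIZE / ptable_size(1),
		       ptable_mask(1));
		return 0;
	}

which prints:

	pgd/pmd: size 512, 8 per page, mask 0xff
	pte:     size 256, 16 per page, mask 0xffff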