Message-Id: <1593735251.svr5r5cxle.astroid@bobo.none>
Date: Fri, 03 Jul 2020 10:15:34 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: linux-mm@...ck.org, Zefan Li <lizefan@...wei.com>
Cc: Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
"H. Peter Anvin" <hpa@...or.com>, linux-arch@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, x86@...nel.org
Subject: Re: [PATCH v2 4/4] mm/vmalloc: Hugepage vmalloc mappings
Excerpts from Zefan Li's message of July 1, 2020 5:10 pm:
>> static void *__vmalloc_node(unsigned long size, unsigned long align,
>> - gfp_t gfp_mask, pgprot_t prot,
>> - int node, const void *caller);
>> + gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags,
>> + int node, const void *caller);
>> static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>> - pgprot_t prot, int node)
>> + pgprot_t prot, unsigned int page_shift,
>> + int node)
>> {
>> struct page **pages;
>> + unsigned long addr = (unsigned long)area->addr;
>> + unsigned long size = get_vm_area_size(area);
>> + unsigned int page_order = page_shift - PAGE_SHIFT;
>> unsigned int nr_pages, array_size, i;
>> const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
>> const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;
>> const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ?
>> - 0 :
>> - __GFP_HIGHMEM;
>> + 0 : __GFP_HIGHMEM;
>>
>> - nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
>> + nr_pages = size >> page_shift;
>
> While trying out this patchset, we encountered a BUG_ON in account_kernel_stack()
> in kernel/fork.c.
>
> BUG_ON(vm->nr_pages != THREAD_SIZE / PAGE_SIZE);
>
> which obviously should be updated accordingly.
Thanks for finding that. We may have to change this around a bit so that
nr_pages still appears to be in PAGE_SIZE units to anybody looking at it.
Thanks,
Nick
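
Below is a minimal, illustrative sketch of the approach Nick describes, assuming
the names from the quoted __vmalloc_area_node() hunk; it is not the actual
follow-up patch. The idea: area->nr_pages and the pages[] array stay in
PAGE_SIZE units, while the physical allocations are made at page_order
granularity, so checks like the account_kernel_stack() BUG_ON above continue
to hold.

/*
 * Illustrative sketch only (not the actual patch): count area->nr_pages
 * in PAGE_SIZE units even when the backing memory is allocated in
 * higher-order chunks, so callers that assume base pages keep working.
 */
static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
				 pgprot_t prot, unsigned int page_shift,
				 int node)
{
	unsigned long size = get_vm_area_size(area);
	unsigned int page_order = page_shift - PAGE_SHIFT;
	unsigned int nr_pages, i;
	struct page **pages;

	/* Base-page count, regardless of the allocation granularity. */
	nr_pages = size >> PAGE_SHIFT;

	pages = kvmalloc_node(nr_pages * sizeof(struct page *),
			      gfp_mask | __GFP_ZERO, node);
	if (!pages)
		return NULL;

	/* One higher-order allocation fills (1 << page_order) array slots. */
	for (i = 0; i < nr_pages; i += 1U << page_order) {
		struct page *page = alloc_pages_node(node, gfp_mask,
						     page_order);
		unsigned int j;

		if (!page)
			goto fail;
		for (j = 0; j < (1U << page_order); j++)
			pages[i + j] = page + j;
	}

	area->pages = pages;
	area->nr_pages = nr_pages;	/* still PAGE_SIZE units */

	/* Mapping the pages at page_shift granularity is omitted here. */
	return area->addr;

fail:
	while (i) {
		i -= 1U << page_order;
		__free_pages(pages[i], page_order);
	}
	kvfree(pages);
	return NULL;
}

Splitting each higher-order allocation into its constituent struct pages keeps
existing users of area->pages and area->nr_pages working unchanged; only the
allocation and mapping steps need to know about page_shift.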