Message-ID: <ddd59fdc-3d8f-4015-e851-e7f099193a1b@c-s.fr>
Date: Sat, 12 Jan 2019 14:49:29 +0100
From: Christophe Leroy <christophe.leroy@....fr>
To: Matthew Wilcox <willy@...radead.org>,
Anshuman Khandual <anshuman.khandual@....com>
Cc: mark.rutland@....com, mhocko@...e.com, linux-sh@...r.kernel.org,
peterz@...radead.org, catalin.marinas@....com,
dave.hansen@...ux.intel.com, will.deacon@....com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kvmarm@...ts.cs.columbia.edu, linux@...linux.org.uk,
mingo@...hat.com, vbabka@...e.cz, rientjes@...gle.com,
marc.zyngier@....com, rppt@...ux.vnet.ibm.com, shakeelb@...gle.com,
kirill@...temov.name, tglx@...utronix.de,
linux-arm-kernel@...ts.infradead.org, ard.biesheuvel@...aro.org,
robin.murphy@....com, steve.capper@....com,
christoffer.dall@....com, james.morse@....com,
aneesh.kumar@...ux.ibm.com, akpm@...ux-foundation.org,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH] mm: Introduce GFP_PGTABLE

On 12/01/2019 at 13:12, Matthew Wilcox wrote:
> On Sat, Jan 12, 2019 at 03:56:38PM +0530, Anshuman Khandual wrote:
>> All architectures have been defining their own PGALLOC_GFP as (GFP_KERNEL |
>> __GFP_ZERO) and using it for allocating page table pages.
>
> Except that's not true.
>
>> +++ b/arch/x86/mm/pgtable.c
>> @@ -13,19 +13,17 @@ phys_addr_t physical_mask __ro_after_init = (1ULL << __PHYSICAL_MASK_SHIFT) - 1;
>> EXPORT_SYMBOL(physical_mask);
>> #endif
>>
>> -#define PGALLOC_GFP (GFP_KERNEL_ACCOUNT | __GFP_ZERO)
>> -
>> #ifdef CONFIG_HIGHPTE
>
> ...
>
>> pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
>> {
>> - return (pte_t *)__get_free_page(PGALLOC_GFP & ~__GFP_ACCOUNT);
>> + return (pte_t *)__get_free_page(GFP_PGTABLE & ~__GFP_ACCOUNT);
>> }
As far as I can see,

#define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)

So what's the difference between

	(GFP_KERNEL_ACCOUNT | __GFP_ZERO) & ~__GFP_ACCOUNT

and

	(GFP_KERNEL | __GFP_ZERO) & ~__GFP_ACCOUNT

? Once __GFP_ACCOUNT is masked out, both expressions reduce to
GFP_KERNEL | __GFP_ZERO, so they look identical to me.
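
A quick userspace sketch, using made-up bit values standing in for the
real GFP flags from include/linux/gfp.h (the identity only relies on
__GFP_ACCOUNT being a bit that is not part of GFP_KERNEL, which holds
in the kernel proper), demonstrates the equivalence:

#include <assert.h>
#include <stdio.h>

/* Stand-in bit values for illustration only; the kernel's actual
 * GFP flag values differ. All that matters here is that
 * __GFP_ACCOUNT is a distinct bit outside GFP_KERNEL. */
#define __GFP_ZERO		0x01u
#define __GFP_ACCOUNT		0x02u
#define GFP_KERNEL		0x04u
#define GFP_KERNEL_ACCOUNT	(GFP_KERNEL | __GFP_ACCOUNT)

int main(void)
{
	unsigned int a = (GFP_KERNEL_ACCOUNT | __GFP_ZERO) & ~__GFP_ACCOUNT;
	unsigned int b = (GFP_KERNEL | __GFP_ZERO) & ~__GFP_ACCOUNT;

	/* Masking out __GFP_ACCOUNT removes the only bit that
	 * distinguishes the two expressions, so they are equal. */
	assert(a == b);
	printf("both reduce to %#x\n", a);
	return 0;
}

Compiled and run, the assert passes and both sides print the same mask.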
Christophe
>
> I think x86 was the only odd one out here, but you'll need to try again ...
>