Message-ID: <d9aff21a-7f37-470d-b798-abd1e354f2da@huawei.com>
Date: Tue, 24 Sep 2024 20:54:40 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Dev Jain <dev.jain@....com>, <akpm@...ux-foundation.org>,
<david@...hat.com>, <willy@...radead.org>, <kirill.shutemov@...ux.intel.com>
CC: <ryan.roberts@....com>, <anshuman.khandual@....com>,
<catalin.marinas@....com>, <cl@...two.org>, <vbabka@...e.cz>,
<mhocko@...e.com>, <apopple@...dia.com>, <dave.hansen@...ux.intel.com>,
<will@...nel.org>, <baohua@...nel.org>, <jack@...e.cz>,
<mark.rutland@....com>, <hughd@...gle.com>, <aneesh.kumar@...nel.org>,
<yang@...amperecomputing.com>, <peterx@...hat.com>, <ioworker0@...il.com>,
<jglisse@...gle.com>, <ziy@...dia.com>,
<linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
Subject: Re: [PATCH v5 1/2] mm: Abstract THP allocation
On 2024/9/24 20:17, Dev Jain wrote:
>
> On 9/24/24 16:50, Kefeng Wang wrote:
>>
>>
>> On 2024/9/24 18:16, Dev Jain wrote:
>>> In preparation for the second patch, abstract away the THP allocation
>>> logic present in the create_huge_pmd() path, which corresponds to the
>>> faulting case when no page is present.
>>>
>>> There should be no functional change as a result of applying this patch,
>>> except that, as David notes at [1], a PMD-aligned address should
>>> be passed to update_mmu_cache_pmd().
>>>
>>> [1]: https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@...hat.com/
>>>
>>> Acked-by: David Hildenbrand <david@...hat.com>
>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>> ---
>>> mm/huge_memory.c | 98 ++++++++++++++++++++++++++++--------------------
>>> 1 file changed, 57 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 4e34b7f89daf..bdbf67c18f6c 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -1148,47 +1148,81 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
>>> }
>>> EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>>> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>>> - struct page *page, gfp_t gfp)
>>> +static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
>>> + unsigned long addr)
>>> {
>>> - struct vm_area_struct *vma = vmf->vma;
>>> - struct folio *folio = page_folio(page);
>>> - pgtable_t pgtable;
>>> - unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>> - vm_fault_t ret = 0;
>>> + unsigned long haddr = addr & HPAGE_PMD_MASK;
>>> + gfp_t gfp = vma_thp_gfp_mask(vma);
>>> + const int order = HPAGE_PMD_ORDER;
>>> + struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
>>
>> There is a warning without NUMA,
>>
>> ../mm/huge_memory.c: In function ‘vma_alloc_anon_folio_pmd’:
>> ../mm/huge_memory.c:1154:16: warning: unused variable ‘haddr’ [-Wunused-variable]
>> 1154 | unsigned long haddr = addr & HPAGE_PMD_MASK;
>> | ^~~~~
>>
>
> But why is this happening?
If CONFIG_NUMA is not set, vma_alloc_folio(...) expands to
folio_alloc_noprof(gfp, order), so it won't use haddr.
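
To illustrate (a rough sketch, not the exact include/linux/gfp.h
definitions): without CONFIG_NUMA the address argument is discarded at
the macro level, so a local computed only to feed it ends up unused.
One possible way to avoid the warning would be to drop the haddr local
and do the alignment at the call site:

#ifndef CONFIG_NUMA
/* sketch of the !NUMA stub: the addr and hugepage arguments are ignored */
#define vma_alloc_folio(gfp, order, vma, addr, hugepage) \
	folio_alloc_noprof(gfp, order)
#endif

	/* in vma_alloc_anon_folio_pmd(), instead of keeping haddr around: */
	struct folio *folio = vma_alloc_folio(gfp, order, vma,
					      addr & HPAGE_PMD_MASK, true);

Folding the mask into the call keeps the helper warning-free in both
configurations, without resorting to annotations such as __maybe_unused.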