Message-ID: <ef5683ac-807a-f187-1cb0-1a5566174d85@huawei.com>
Date: Fri, 15 Oct 2021 10:20:48 +0800
From: Chen Wandun <chenwandun@...wei.com>
To: Uladzislau Rezki <urezki@...il.com>
CC: <akpm@...ux-foundation.org>, <npiggin@...il.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<edumazet@...gle.com>, <wangkefeng.wang@...wei.com>,
<guohanjun@...wei.com>
Subject: Re: [PATCH] mm/vmalloc: fix numa spreading for large hash tables
On 2021/10/14 18:01, Uladzislau Rezki wrote:
> On Tue, Sep 28, 2021 at 08:10:40PM +0800, Chen Wandun wrote:
>> Eric Dumazet reported strange NUMA spreading info in [1], and found that
>> commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings") introduced
>> the issue [2].
>>
>> Digging into the difference before and after that commit, the page
>> allocation call chains differ:
>>
>> before:
>>   alloc_large_system_hash
>>     __vmalloc
>>       __vmalloc_node(..., NUMA_NO_NODE, ...)
>>         __vmalloc_node_range
>>           __vmalloc_area_node
>>             alloc_page              /* NUMA_NO_NODE, so the alloc_page() branch is taken */
>>               alloc_pages_current
>>                 alloc_page_interleave  /* confirmed by printing the policy mode */
>>
>> after:
>>   alloc_large_system_hash
>>     __vmalloc
>>       __vmalloc_node(..., NUMA_NO_NODE, ...)
>>         __vmalloc_node_range
>>           __vmalloc_area_node
>>             alloc_pages_node        /* nid chosen by numa_mem_id() */
>>               __alloc_pages_node(nid, ....)
>>
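>> alloc_pages_node() resolves NUMA_NO_NODE to the local node before any
>> mempolicy can be consulted (from include/linux/gfp.h):
>>
>> static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
>> 						unsigned int order)
>> {
>> 	if (nid == NUMA_NO_NODE)
>> 		nid = numa_mem_id();
>>
>> 	return __alloc_pages_node(nid, gfp_mask, order);
>> }
>>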
>> So after commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings"),
>> memory is allocated on the current node instead of being interleaved
>> across nodes (see the sketch of the lost interleave behaviour below).
>>
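>> For reference, MPOL_INTERLEAVE hands out successive allocations
>> round-robin across the allowed nodes. A simplified sketch of the node
>> choice (hypothetical helper name; the real logic is interleave_nodes()
>> in mm/mempolicy.c):
>>
>> /* simplified sketch of interleave_nodes(): round-robin node choice */
>> static unsigned int interleave_next(struct mempolicy *pol)
>> {
>> 	unsigned int next = next_node_in(current->il_prev, pol->nodes);
>>
>> 	if (next < MAX_NUMNODES)
>> 		current->il_prev = next;
>> 	return next;
>> }
>>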
>> [1]
>> https://lore.kernel.org/linux-mm/CANn89iL6AAyWhfxdHO+jaT075iOa3XcYn9k6JJc7JR2XYn6k_Q@mail.gmail.com/
>>
>> [2]
>> https://lore.kernel.org/linux-mm/CANn89iLofTR=AK-QOZY87RdUZENCZUT4O6a0hvhu3_EwRMerOg@mail.gmail.com/
>>
>> Fixes: 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")
>> Reported-by: Eric Dumazet <edumazet@...gle.com>
>> Signed-off-by: Chen Wandun <chenwandun@...wei.com>
>> ---
>> mm/vmalloc.c | 33 ++++++++++++++++++++++++++-------
>> 1 file changed, 26 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index f884706c5280..48e717626e94 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -2823,6 +2823,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>>  		unsigned int order, unsigned int nr_pages, struct page **pages)
>>  {
>>  	unsigned int nr_allocated = 0;
>> +	struct page *page;
>> +	int i;
>>  
>>  	/*
>>  	 * For order-0 pages we make use of bulk allocator, if
>> @@ -2833,6 +2835,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>>  	if (!order) {
>>  		while (nr_allocated < nr_pages) {
>>  			unsigned int nr, nr_pages_request;
>> +			page = NULL;
>>  
>>  			/*
>>  			 * A maximum allowed request is hard-coded and is 100
>> @@ -2842,9 +2845,23 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>>  			 */
>>  			nr_pages_request = min(100U, nr_pages - nr_allocated);
>>  
>> -			nr = alloc_pages_bulk_array_node(gfp, nid,
>> -				nr_pages_request, pages + nr_allocated);
>> -
>> +			if (nid == NUMA_NO_NODE) {
>>
> <snip>
> void *vmalloc(unsigned long size)
> {
> 	return __vmalloc_node(size, 1, GFP_KERNEL, NUMA_NO_NODE,
> 			__builtin_return_address(0));
> }
> EXPORT_SYMBOL(vmalloc);
> <snip>
>
> vmalloc() uses NUMA_NO_NODE, so with this patch all vmalloc() calls fall
> back to the single-page allocator on both NUMA and non-NUMA systems. Is
> it intentional to bypass the optimized bulk allocator for non-NUMA
> systems?
I have sent a patch that should address this:

[PATCH] mm/vmalloc: introduce alloc_pages_bulk_array_mempolicy to
accelerate memory allocation
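
The idea there, roughly (a simplified sketch, not the final patch):

	/*
	 * In vm_area_alloc_pages(): keep bulk allocation for NUMA_NO_NODE,
	 * but let the task mempolicy (e.g. interleave) pick the nodes.
	 */
	if (nid == NUMA_NO_NODE)
		nr = alloc_pages_bulk_array_mempolicy(gfp,
				nr_pages_request, pages + nr_allocated);
	else
		nr = alloc_pages_bulk_array_node(gfp, nid,
				nr_pages_request, pages + nr_allocated);

That way the NUMA_NO_NODE case keeps the bulk allocator while still
honouring the mempolicy.
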
Thanks,
Wandun
>
> Thanks!
>
> --
> Vlad Rezki
>