Message-ID: <450360af-9222-4251-8529-f44c4b8b498a@redhat.com>
Date: Wed, 31 Jul 2024 11:17:42 +0200
From: David Hildenbrand <david@...hat.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, akpm@...ux-foundation.org,
hughd@...gle.com
Cc: willy@...radead.org, 21cnbao@...il.com, ryan.roberts@....com,
ziy@...dia.com, gshan@...hat.com, ioworker0@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than
MAX_PAGECACHE_ORDER for shmem
On 31.07.24 07:46, Baolin Wang wrote:
> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
> if needed"), ARM64 supports 512MB PMD-sized THP when the base page size is
> 64KB, which exceeds the maximum supported page cache order,
> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter the allowable huge orders.
>
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/shmem.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
> unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
> unsigned long vm_flags = vma->vm_flags;
> /*
> - * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> + * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
> * are enabled for this vma.
> */
> - unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> + unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
> loff_t i_size;
> int order;
>
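Just to spell out why this masks out the PMD order on 64K arm64, here is
a quick userspace sketch (my own toy macros and numbers, not the kernel
headers; XA_CHUNK_SHIFT and PMD_ORDER are assumed values for a 64K base
page configuration):

#include <stdio.h>

#define BIT(n)			(1UL << (n))
#define XA_CHUNK_SHIFT		6	/* typical xarray chunk shift */
#define MAX_XAS_ORDER		(XA_CHUNK_SHIFT * 2 - 1)	/* 11 */
#define MAX_PAGECACHE_ORDER	MAX_XAS_ORDER	/* min() with PMD_ORDER (13) is still 11 */
#define PMD_ORDER		13	/* 512MB PMD / 64KB base pages */

/* mirrors THP_ORDERS_ALL_FILE_DEFAULT: orders 1..MAX_PAGECACHE_ORDER */
#define FILE_DEFAULT_ORDERS	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

int main(void)
{
	unsigned long old_orders = BIT(PMD_ORDER + 1) - 1;

	/* old mask wrongly offers the 512MB (order-13) size: prints 1 */
	printf("old mask has PMD order: %d\n",
	       !!(old_orders & BIT(PMD_ORDER)));
	/* new mask is capped at MAX_PAGECACHE_ORDER: prints 0 */
	printf("new mask has PMD order: %d\n",
	       !!(FILE_DEFAULT_ORDERS & BIT(PMD_ORDER)));
	return 0;
}

So the PMD-sized order simply gets filtered out here, same as the page
cache path handled by d659b715e94ac referenced above.
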
Acked-by: David Hildenbrand <david@...hat.com>
--
Cheers,
David / dhildenb