Message-ID: <87769ae8-b6c6-4454-925d-1864364af9c8@huawei.com>
Date: Wed, 31 Jul 2024 17:59:23 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, Barry Song
<21cnbao@...il.com>
CC: <akpm@...ux-foundation.org>, <hughd@...gle.com>, <willy@...radead.org>,
<david@...hat.com>, <ryan.roberts@....com>, <ziy@...dia.com>,
<gshan@...hat.com>, <ioworker0@...il.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than
MAX_PAGECACHE_ORDER for shmem
On 2024/7/31 16:56, Baolin Wang wrote:
>
>
> On 2024/7/31 14:18, Barry Song wrote:
>> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
>> <baolin.wang@...ux.alibaba.com> wrote:
>>>
>>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size
>>> page cache if needed"), ARM64 can support 512MB PMD-sized THP when
>>> the base page size is 64KB, which is larger than the maximum
>>> supported page cache size MAX_PAGECACHE_ORDER. This is not expected.
>>> To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for shmem to
>>> filter allowable huge orders.
>>>
>>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>
>> Reviewed-by: Barry Song <baohua@...nel.org>
>
> Thanks for reviewing.
>
>>
>>> ---
>>> mm/shmem.c | 4 ++--
>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index 2faa9daaf54b..a4332a97558c 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>>  	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>>  	unsigned long vm_flags = vma->vm_flags;
>>>  	/*
>>> -	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>>> +	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>>>  	 * are enabled for this vma.
>> Nit:
>> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
>> I feel we don't need this comment?
>
> Sure.
>
> Andrew, please help to squash the following changes into this patch.
> Thanks.
Maybe drop the local "unsigned long orders" too? Something like:
diff --git a/mm/shmem.c b/mm/shmem.c
index 6af95f595d6f..8485eb6f2ec4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma ? vma->vm_flags : 0;
-	/*
-	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
-	 * are enabled for this vma.
-	 */
-	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
 	bool global_huge;
 	loff_t i_size;
 	int order;
@@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	if (global_huge)
 		mask |= READ_ONCE(huge_shmem_orders_inherit);
-	return orders & mask;
+	return THP_ORDERS_ALL_FILE_DEFAULT & mask;
 }
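
[For reference: at the time of this thread, include/linux/huge_mm.h (from
commit d659b715e94ac) defines the mask as every large order the page cache
can hold, with order 0 excluded:

	#define THP_ORDERS_ALL_FILE_DEFAULT	\
		((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

so "THP_ORDERS_ALL_FILE_DEFAULT & mask" both caps the result at
MAX_PAGECACHE_ORDER and keeps the non-large order 0 out of the bitmap.]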
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6e9836b1bd1d..432faec21547 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1629,10 +1629,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>  	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>  	unsigned long vm_flags = vma->vm_flags;
> -	/*
> -	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
> -	 * are enabled for this vma.
> -	 */
>  	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>  	loff_t i_size;
>  	int order;
>
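
To make the arithmetic behind the fix concrete, here is a minimal userspace
sketch of why the old mask overflowed the page cache limit on ARM64 with
64KB base pages. The constants are hard-coded assumptions mirroring that
configuration (a PMD maps 512MB; the XArray split limit caps
MAX_PAGECACHE_ORDER), not values pulled from kernel headers:

	#include <stdio.h>

	#define PAGE_SHIFT		16	/* 64KB base page */
	#define PMD_SHIFT		29	/* one PMD maps 2^29 = 512MB */
	#define PMD_ORDER		(PMD_SHIFT - PAGE_SHIFT)	/* 13 */
	#define XA_CHUNK_SHIFT		6	/* 64-slot XArray nodes */
	#define MAX_PAGECACHE_ORDER	(XA_CHUNK_SHIFT * 2 - 1)	/* 11 */

	int main(void)
	{
		/* pre-fix mask: all orders 0..PMD_ORDER */
		unsigned long old_orders = (1UL << (PMD_ORDER + 1)) - 1;
		/* orders the page cache can hold: 0..MAX_PAGECACHE_ORDER */
		unsigned long pagecache = (1UL << (MAX_PAGECACHE_ORDER + 1)) - 1;

		/* prints 0x3000: orders 12 and 13 (256MB/512MB) leak through */
		printf("unsupported orders: %#lx\n", old_orders & ~pagecache);
		return 0;
	}

With 4KB base pages PMD_ORDER is only 9, below the page cache cap, which is
why the overflow only shows up on configurations like arm64 with 64KB pages.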