Message-ID: <74609b22-d030-47d2-b4e5-5f9e80ca06e6@linux.alibaba.com>
Date: Fri, 20 Dec 2024 09:26:33 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
hughd@...gle.com
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: shmem: fix incorrect index alignment for
within_size policy
On 2024/12/19 23:35, David Hildenbrand wrote:
> On 19.12.24 08:30, Baolin Wang wrote:
>> With the shmem per-size within_size policy enabled, passing the raw
>> 'order' value as the round_up() alignment makes the index alignment,
>> and therefore the i_size check, incorrect, allowing inappropriately
>> large orders to be returned.
>>
>> Fix this by using '1 << order' as the round_up() alignment. Additionally,
>> add an 'aligned_index' variable so that the later checks on 'index'
>> itself are unaffected.
>>
>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>> ---
>> Hi Andrew,
>>
>> These two bugfix patches are based on the mm-hotfixes-unstable branch,
>> and this patch has a slight conflict with my previous patch set:
>> "Support large folios for tmpfs". However, I think the conflicts are
>> easy to resolve. If you need me to rebase and resend the
>> "Support large folios for tmpfs" patch set, please let me know.
>> Sorry for the troubles :)
>> ---
>> mm/shmem.c | 5 +++--
>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index f6fb053ac50d..dec659e84562 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1689,6 +1689,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>  	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>>  	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>  	unsigned long vm_flags = vma ? vma->vm_flags : 0;
>> +	pgoff_t aligned_index;
>>  	bool global_huge;
>>  	loff_t i_size;
>>  	int order;
>> @@ -1723,9 +1724,9 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>  	/* Allow mTHP that will be fully within i_size. */
>>  	order = highest_order(within_size_orders);
>>  	while (within_size_orders) {
>> -		index = round_up(index + 1, order);
>> +		aligned_index = round_up(index + 1, 1 << order);
>>  		i_size = round_up(i_size_read(inode), PAGE_SIZE);
>> -		if (i_size >> PAGE_SHIFT >= index) {
>> +		if (i_size >> PAGE_SHIFT >= aligned_index) {
>>  			mask |= within_size_orders;
>>  			break;
>>  		}
>
>
> Yes, that matches the logic in shmem_huge_global_enabled().
>
> Acked-by: David Hildenbrand <david@...hat.com>
>
>
> Was wondering if one can factor that out into a helper where one could
> pass an optional write_end ...
Yes, I'll add it to my TODO list. Thanks for reviewing.
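For reference, such a helper might look something like the sketch below (the
name and exact signature are illustrative only, following the logic in
shmem_huge_global_enabled()):

/*
 * Hypothetical helper: return true if a folio of the given order at
 * @index would sit fully within i_size, optionally extended by an
 * in-progress write ending at @write_end (0 if none).
 */
static bool shmem_order_within_size(struct inode *inode, pgoff_t index,
				    int order, loff_t write_end)
{
	pgoff_t aligned_index = round_up(index + 1, 1 << order);
	loff_t i_size = max(write_end, i_size_read(inode));

	i_size = round_up(i_size, PAGE_SIZE);
	return i_size >> PAGE_SHIFT >= aligned_index;
}

The within_size loop above would then reduce to calling this helper for each
candidate order, and shmem_huge_global_enabled() could pass its write_end
through the same path.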