Message-ID: <6d67cbbf-a609-491e-a0c3-a3d5fd782966@linux.alibaba.com>
Date: Wed, 22 Oct 2025 15:47:27 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Kairui Song <ryncsn@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
 Hugh Dickins <hughd@...gle.com>, Dev Jain <dev.jain@....com>,
 David Hildenbrand <david@...hat.com>, Barry Song <baohua@...nel.org>,
 Liam Howlett <liam.howlett@...cle.com>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 Mariano Pache <npache@...hat.com>, Matthew Wilcox <willy@...radead.org>,
 Ryan Roberts <ryan.roberts@....com>, Zi Yan <ziy@...dia.com>,
 linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] mm/shmem: fix THP allocation size check and fallback



On 2025/10/22 13:48, Kairui Song wrote:
> On Wed, Oct 22, 2025 at 9:25 AM Baolin Wang
> <baolin.wang@...ux.alibaba.com> wrote:
>>
>>
>>
>> On 2025/10/22 03:04, Kairui Song wrote:
>>> From: Kairui Song <kasong@...cent.com>
>>>
>>> There are some problems with the code implementation of THP fallback.
>>> suitable_orders could be zero, and calling highest_order on a zero value
>>> returns an overflowed size. And the order check loop updates the index
>>> value on every iteration, which may leave the index aligned to a larger
>>> value even as the loop shrinks the order.
>>
>> No, this is not true. Although ‘suitable_orders’ might be 0, it will not
>> enter the ‘while (suitable_orders)’ loop, and ‘order’ will not be used
>> (this is also how the highest_order() function is used in other places).
> 
> Maybe I shouldn't have mixed the trivial issue with the real issue
> here, sorry, my bad, I was in a hurry :P.
> I mean if suitable_orders is zero we should just skip calling
> highest_order, since that returns a negative value. It's not causing a
> real issue though, just a redundant call.

I think the compiler can optimize this(?).
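
A minimal userspace sketch of the point being made here (not kernel
code; it assumes the kernel's fls_long(orders) - 1 definition of
highest_order(), and the bitmap value is made up). It shows why the -1
result is dead when suitable_orders is 0:

#include <stdio.h>

static int highest_order(unsigned long orders)
{
	/* fls_long(0) is 0 in the kernel, so highest_order(0) yields -1 */
	if (!orders)
		return -1;
	return (int)(sizeof(long) * 8 - 1) - __builtin_clzl(orders);
}

int main(void)
{
	unsigned long suitable_orders = 0;	/* made-up: no suitable orders */
	int order = highest_order(suitable_orders);	/* -1, but never used... */

	while (suitable_orders) {	/* ...because this body never runs */
		unsigned long pages = 1UL << order;

		printf("would try order %d (%lu pages)\n", order, pages);
		suitable_orders &= ~(1UL << order);
		order = highest_order(suitable_orders);
	}
	printf("highest_order(0) = %d, harmless here\n", order);
	return 0;
}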

>>> And it forgot to try order
>>> 0 after the final loop.
>>
>> This is also not true. We will fall back to an order-0 allocation in
>> shmem_get_folio_gfp() if the large-order allocation fails.
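
A self-contained toy model of that control flow, with invented names
(mock_alloc() and the order bitmap are illustrative, not the kernel's):
the large-order loop corresponds to shmem_alloc_and_add_folio(), and
the final order-0 retry to the caller-level fallback described above:

#include <stdio.h>
#include <stdbool.h>

/* pretend allocator: fails for any order > 0, forcing the fallback */
static bool mock_alloc(int order)
{
	return order == 0;
}

int main(void)
{
	unsigned long suitable_orders = (1UL << 4) | (1UL << 2);	/* orders 4, 2 */

	/* large-order attempts, as inside shmem_alloc_and_add_folio() */
	while (suitable_orders) {
		int order = (int)(sizeof(long) * 8 - 1) -
			    __builtin_clzl(suitable_orders);

		if (mock_alloc(order)) {
			printf("allocated order %d\n", order);
			return 0;
		}
		suitable_orders &= ~(1UL << order);
	}

	/* the caller-level fallback: retry with order 0 */
	if (mock_alloc(0))
		printf("fell back to order 0\n");
	return 0;
}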
> 
> I thought after the fix, we could simplify the code, and maybe reduce
> the calls to shmem_alloc_and_add_folio to only one so it will be
> inlined by the compiler.
> 
> On second thought, some more changes are needed to respect the
> huge_gfp. Maybe I should send a series that splits the hotfix from the
> cleanups.

Yes, I've considered simplifying this, but it would require more
changes, making it less readable than it is now. Of course, if you have
a better approach, feel free to try cleaning it up in another thread.

> But here, the index being modified during the loop does need a fix, I
> think, so for the fix part we just need:
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 29e1eb690125..e89ae4dd6859 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1895,10 +1895,11 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  		order = highest_order(suitable_orders);
>  		while (suitable_orders) {
>  			pages = 1UL << order;
> -			index = round_down(index, pages);
> -			folio = shmem_alloc_folio(gfp, order, info, index);
> -			if (folio)
> +			folio = shmem_alloc_folio(gfp, order, info, round_down(index, pages));
> +			if (folio) {
> +				index = round_down(index, pages);
>  				goto allocated;
> +			}
> 
>  			if (pages == HPAGE_PMD_NR)
>  				count_vm_event(THP_FILE_FALLBACK);

Good catch. I fixed a similar issue in commit 4cbf320b1500 ("mm:
shmem: fix incorrect aligned index when checking conflicts"), but I
missed this part. Please resend the bugfix patch with a corrected
commit message, and drop the cleanup changes.
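
To make the mis-alignment concrete, a self-contained sketch with
made-up values (a fault at index 1023, a failed order-9 attempt
followed by an order-2 attempt); round_down() here matches the kernel
macro's power-of-two behavior:

#include <stdio.h>

/* power-of-two round_down, as in the kernel macro of the same name */
#define round_down(x, y)	((x) & ~((unsigned long)(y) - 1))

int main(void)
{
	const unsigned long fault_index = 1023;	/* made-up faulting index */
	unsigned long index = fault_index;

	/* Buggy flow: the failed order-9 attempt clobbers index... */
	index = round_down(index, 1UL << 9);	/* 1023 -> 512 */
	/* ...so the order-2 attempt aligns the already-rounded value */
	index = round_down(index, 1UL << 2);	/* stays 512 */
	printf("buggy: order-2 folio at %lu..%lu, fault at %lu not covered\n",
	       index, index + 3, fault_index);

	/* Fixed flow: each attempt rounds down from the original index */
	index = round_down(fault_index, 1UL << 2);	/* 1023 -> 1020 */
	printf("fixed: order-2 folio at %lu..%lu, covers fault at %lu\n",
	       index, index + 3, fault_index);
	return 0;
}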
