Message-ID: <1d39b66e-4009-4143-a8fa-5d876bc1f7e7@linux.dev>
Date: Fri, 27 Jun 2025 15:15:41 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, david@...hat.com,
baolin.wang@...ux.alibaba.com, chrisl@...nel.org, kasong@...cent.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-riscv@...ts.infradead.org,
lorenzo.stoakes@...cle.com, ryan.roberts@....com, v-songbaohua@...o.com,
x86@...nel.org, huang.ying.caritas@...il.com, zhengtangquan@...o.com,
riel@...riel.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
harry.yoo@...cle.com, mingzhe.yang@...com, stable@...r.kernel.org,
Lance Yang <ioworker0@...il.com>
Subject: Re: [PATCH v2 1/1] mm/rmap: fix potential out-of-bounds page table
access during batched unmap
On 2025/6/27 14:55, Barry Song wrote:
> On Fri, Jun 27, 2025 at 6:52 PM Barry Song <21cnbao@...il.com> wrote:
>>
>> On Fri, Jun 27, 2025 at 6:23 PM Lance Yang <ioworker0@...il.com> wrote:
>>>
>>> From: Lance Yang <lance.yang@...ux.dev>
>>>
>>> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
>>> can read past the end of a PTE table if a large folio is mapped starting at
>>> the last entry of that table. It would be quite rare in practice, as
>>> MADV_FREE typically splits the large folio ;)
>>>
>>> So let's fix the potential out-of-bounds read by refactoring the logic into
>>> a new helper, folio_unmap_pte_batch().
>>>
>>> The new helper now correctly calculates the safe number of pages to scan by
>>> limiting the operation to the boundaries of the current VMA and the PTE
>>> table.
>>>
>>> In addition, the "all-or-nothing" batching restriction is removed to
>>> support partial batches. The reference counting is also cleaned up to use
>>> folio_put_refs().
>>>
>>> [1] https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
>>>
>>
>> What about ?
>>
>> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
>> may read past the end of a PTE table when a large folio spans across two PMDs,
>> particularly after being remapped with mremap(). This patch fixes the
>> potential out-of-bounds access by capping the batch at vm_end and the PMD
>> boundary.
>>
>> It also refactors the logic into a new helper, folio_unmap_pte_batch(),
>> which supports batching between 1 and folio_nr_pages. This improves code
>> clarity. Note that such cases are rare in practice, as MADV_FREE typically
>> splits large folios.
>
> Sorry, I meant that MADV_FREE typically splits large folios if the specified
> range doesn't cover the entire folio.
Hmm... I got it wrong as well :( It's the partial coverage that triggers
the split.

How about this revised version:
As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
may read past the end of a PTE table when a large folio spans across two
PMDs, particularly after being remapped with mremap(). This patch fixes
the potential out-of-bounds access by capping the batch at vm_end and the
PMD boundary.
It also refactors the logic into a new helper, folio_unmap_pte_batch(),
which supports batching between 1 and folio_nr_pages. This improves code
clarity. Note that such boundary-straddling cases are rare in practice, as
MADV_FREE will typically split a large folio if the advice range does not
cover the entire folio.