Message-Id: <20250627130945.dd074c7ea076359ac754a029@linux-foundation.org>
Date: Fri, 27 Jun 2025 13:09:45 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Lance Yang <ioworker0@...il.com>
Cc: david@...hat.com, 21cnbao@...il.com, baolin.wang@...ux.alibaba.com,
chrisl@...nel.org, kasong@...cent.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-riscv@...ts.infradead.org,
lorenzo.stoakes@...cle.com, ryan.roberts@....com, v-songbaohua@...o.com,
x86@...nel.org, huang.ying.caritas@...il.com, zhengtangquan@...o.com,
riel@...riel.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
harry.yoo@...cle.com, mingzhe.yang@...com, stable@...r.kernel.org,
Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>
Subject: Re: [PATCH v2 1/1] mm/rmap: fix potential out-of-bounds page table
access during batched unmap

On Fri, 27 Jun 2025 14:23:19 +0800 Lance Yang <ioworker0@...il.com> wrote:
> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
> can read past the end of a PTE table if a large folio is mapped starting at
> the last entry of that table. It would be quite rare in practice, as
> MADV_FREE typically splits the large folio ;)
>
> So let's fix the potential out-of-bounds read by refactoring the logic into
> a new helper, folio_unmap_pte_batch().
>
> The new helper now correctly calculates the safe number of pages to scan by
> limiting the operation to the boundaries of the current VMA and the PTE
> table.
>
> In addition, the "all-or-nothing" batching restriction is removed to
> support partial batches. The reference counting is also cleaned up to use
> folio_put_refs().
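
For reference, here is a minimal sketch of the boundary clamp the new
helper is described as performing. This is not the patch itself: the
locals addr, vma and pvmw are assumed to come from the existing
page_vma_mapped_walk context in try_to_unmap_one(), and only the usual
kernel helpers pmd_addr_end(), min_t() and folio_nr_pages() are used.

	/*
	 * Clamp the batch so it never walks past the end of the VMA or
	 * past the PTE table that pvmw->pte points into; pmd_addr_end()
	 * returns whichever comes first, the next PMD boundary after
	 * addr or vma->vm_end.
	 */
	unsigned long addr = pvmw->address;
	unsigned long end_addr = pmd_addr_end(addr, vma->vm_end);
	unsigned int max_nr = (end_addr - addr) >> PAGE_SHIFT;

	/* Never scan more entries than the clamp allows. */
	unsigned int nr = min_t(unsigned int, max_nr, folio_nr_pages(folio));

In the actual helper, a bound like max_nr would then cap the existing
folio_pte_batch() walk, which is what prevents the read past the end of
the PTE table.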

I'll queue this for testing while the updated changelog is being prepared.