Message-ID: <90471b2f-826e-4275-a9a3-ec673c3e6af8@redhat.com>
Date: Tue, 27 Feb 2024 08:30:38 +0100
From: David Hildenbrand <david@...hat.com>
To: Lance Yang <ioworker0@...il.com>, akpm@...ux-foundation.org
Cc: ryan.roberts@....com, 21cnbao@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] mm/memory: Fix boundary check for next PFN in
folio_pte_batch()
On 27.02.24 08:04, Lance Yang wrote:
> Previously, in folio_pte_batch(), only the upper boundary of the
> folio was checked using '>=' for comparison. This led to
> incorrect behavior when the next PFN fell below the lower boundary
> of the folio, especially in corner cases where the next PFN might
> fall into a different folio.
Which commit does this fix?
The introducing commit (f8d937761d65c87e9987b88ea7beb7bddc333a0e) is
already in mm-stable, so we would need a Fixes: tag. Unless Ryan's
changes introduced a problem.
BUT
I don't see what is broken. :)
Can you please give an example/reproducer?
We know that the first PTE maps the folio. By incrementing the PFN using
pte_next_pfn/pte_advance_pfn, we cannot suddenly get a lower PFN.
So how would pte_advance_pfn(folio_start_pfn + X) suddenly give us a PFN
lower than folio_start_pfn?
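
For reference, the generic fallback looks roughly like this (quoting
from memory, so take the exact definition with a grain of salt): the
PFN bits are only ever incremented, never decremented.

static inline pte_t pte_advance_pfn(pte_t pte, unsigned long nr)
{
	/* Adding to the PFN bits can only move the PFN forward. */
	return __pte(pte_val(pte) + (nr << PFN_PTE_SHIFT));
}
#define pte_next_pfn(pte)	pte_advance_pfn(pte, 1)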
Note that we are not really concerned about any kind of
pte_advance_pfn() overflow that could generate PFN=0. I convinced
myself that that is something we don't have to worry about.
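
To spell that out: within the loop, expected_pte never advances more
than max_nr PFNs past the first PTE, and the callers bound max_nr by
the current page table. So, roughly:

	/*
	 * expected_pfn <= pte_pfn(first_pte) + max_nr, and
	 * max_nr <= PTRS_PER_PTE (we never cross a page table).
	 * Wrapping around to PFN == 0 would require a folio mapped
	 * within PTRS_PER_PTE pages of the very top of the PFN space,
	 * which no real configuration has.
	 */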
[I also thought about getting rid of the pte_pfn(pte) >= folio_end_pfn
check and instead limiting end_ptep. But that requires more work before
the loop and feels more like a micro-optimization.]
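
Completely untested, but limiting end_ptep would look something like:

	max_nr = min_t(unsigned long, max_nr,
		       folio_end_pfn - pte_pfn(pte));
	const pte_t *end_ptep = start_ptep + max_nr;

after which the pte_pfn(pte) >= folio_end_pfn check in the loop could
go away.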
>
> Signed-off-by: Lance Yang <ioworker0@...il.com>
> ---
> mm/memory.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 642b4f2be523..e5291d1e8c37 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -986,12 +986,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> bool *any_writable)
> {
> - unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> + unsigned long folio_start_pfn, folio_end_pfn;
> const pte_t *end_ptep = start_ptep + max_nr;
> pte_t expected_pte, *ptep;
> bool writable;
> int nr;
>
> + folio_start_pfn = folio_pfn(folio);
> + folio_end_pfn = folio_start_pfn + folio_nr_pages(folio);
> +
> if (any_writable)
> *any_writable = false;
>
> @@ -1015,7 +1018,7 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> * corner cases the next PFN might fall into a different
> * folio.
> */
> - if (pte_pfn(pte) >= folio_end_pfn)
> + if (pte_pfn(pte) >= folio_end_pfn || pte_pfn(pte) < folio_start_pfn)
> break;
>
> if (any_writable)
--
Cheers,
David / dhildenb