Message-ID: <CAEXW_YTC0zJwxrweOLxm-k6gL+AcxZfopHPrJgGOihNrOKFJ3g@mail.gmail.com>
Date: Tue, 8 Oct 2024 21:46:59 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: Jann Horn <jannh@...gle.com>
Cc: akpm@...ux-foundation.org, david@...hat.com, linux-mm@...ck.org, 
	willy@...radead.org, hughd@...gle.com, lorenzo.stoakes@...cle.com, 
	linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] mm/mremap: Fix move_normal_pmd/retract_page_tables race

On Tue, Oct 8, 2024 at 8:50 PM Jann Horn <jannh@...gle.com> wrote:
>
> On Wed, Oct 9, 2024 at 1:58 AM Joel Fernandes <joel@...lfernandes.org> wrote:
> > On Mon, Oct 7, 2024 at 5:42 PM Jann Horn <jannh@...gle.com> wrote:
> > Not to overthink it, but do you have any insight into why copy_vma()
> > only requires the rmap lock under this condition?
> >
> > *need_rmap_locks = (new_vma->vm_pgoff <= vma->vm_pgoff);
> >
> > Could a collapse still occur when need_rmap_locks is false,
> > potentially triggering the bug you described? My assumption is no, but
> > I wanted to double-check.
>
> Ah, that code is a bit confusing. There are actually two circumstances
> under which we take rmap locks, and that condition only captures (part
> of) the first one:
>
> 1. when we might move PTEs against rmap traversal order (we need the
> lock so that concurrent rmap traversal can't miss the PTEs).

Ah ok, I see this mentioned in move_ptes(). Thanks for clarifying.
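For anyone else reading along, the rough shape of that pattern in
mm/mremap.c is below. This is paraphrased from memory, so treat the
exact signature and body as approximate; take_rmap_locks() /
drop_rmap_locks() are the helpers that grab the i_mmap_rwsem and the
anon_vma lock:

/*
 * Approximate shape of move_ptes() in mm/mremap.c (paraphrased, not a
 * verbatim excerpt): the rmap locks are taken only when the move could
 * run against rmap traversal order.
 */
static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
		unsigned long old_addr, unsigned long old_end,
		struct vm_area_struct *new_vma, pmd_t *new_pmd,
		unsigned long new_addr, bool need_rmap_locks)
{
	if (need_rmap_locks)
		take_rmap_locks(vma);	/* i_mmap_rwsem + anon_vma lock */

	/* ... move each PTE in [old_addr, old_end) over to new_addr ... */

	if (need_rmap_locks)
		drop_rmap_locks(vma);
	return 0;
}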

> 2. when we move page tables (otherwise concurrent rmap traversal could
> race with page table changes)

Ah ok, and these are the 4 call sites you mention below. Makes sense.

> If you look at the four callsites of move_pgt_entry(), you can see
> that its parameter "need_rmap_locks" sometimes comes from the caller's
> "need_rmap_locks" variable (in the HPAGE_PUD and HPAGE_PMD cases), but
> other times it is just hardcoded to true (in the NORMAL_PUD and
> NORMAL_PMD cases).
> So move_normal_pmd() always holds rmap locks.
> (This code would probably be a bit clearer if we moved the rmap
> locking into the helpers move_{normal,huge}_{pmd,pud} and got rid of
> the helper move_pgt_entry()...)

Thanks for clarifying, this makes a lot of sense now. So the
optimization is that when we move PTEs forward, we don't need the rmap
lock, because rmap traversal order guarantees the traversal still sees
them. And as you mentioned, that optimization never applies to
move_normal_pmd() anyway :).
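
To make sure I have the ordering argument straight, here is a toy
userspace model I sketched (entirely my own illustration, not kernel
code, and scan_with_move() is a made-up name): the "scanner" walks
slots in increasing index order, like rmap walking VMAs in pgoff
order, while a "mover" relocates an entry without locking. A forward
move is observed at least once, in the old slot or the new one; a
backward move can be missed entirely, which is exactly why copy_vma()
demands the rmap lock when new_vma->vm_pgoff <= vma->vm_pgoff:

#include <stdio.h>

#define NSLOTS 8

/* Scan slots in increasing order; at step move_at, relocate the entry
 * from src to dst (modeling an unlocked concurrent mremap). Returns
 * how many times the scanner observed the entry. */
static int scan_with_move(int src, int dst, int move_at)
{
	int slots[NSLOTS] = {0};
	int seen = 0;

	slots[src] = 1;
	for (int i = 0; i < NSLOTS; i++) {
		if (i == move_at) {
			slots[dst] = slots[src];
			slots[src] = 0;
		}
		seen += slots[i];
	}
	return seen;
}

int main(void)
{
	/* Forward move (src=2 -> dst=5): whether the move lands before
	 * or after the scanner passes slot 2, the entry is seen at
	 * least once (twice in the "late" case, i.e. the traversal may
	 * see both the old and new location, but never neither). */
	printf("forward, move early: seen %d\n", scan_with_move(2, 5, 1));
	printf("forward, move late:  seen %d\n", scan_with_move(2, 5, 4));

	/* Backward move (src=5 -> dst=2): the scanner has already
	 * passed slot 2 when the entry arrives there, and slot 5 is
	 * empty by the time the scanner reaches it, so the entry is
	 * missed entirely -- prints 0. */
	printf("backward, move mid:  seen %d\n", scan_with_move(5, 2, 4));
	return 0;
}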

> (Also, note that when undoing the PTE moves with the second
> move_page_tables() call, the "need_rmap_locks" parameter to
> move_page_tables() is hardcoded to true.)

Sure.

> > The patch looks good to me overall. I was also curious if
> > move_normal_pud() would require a similar change, though I’m inclined
> > to think that path doesn't lead to a bug.
>
> Yeah, there is no path that would remove PUD entries pointing to page
> tables through the rmap, that's a special PMD entry thing. (Well, at
> least not in non-hugetlb code, I haven't looked at hugetlb in a long
> time - but hugetlb has an entirely separate codepath for moving page
> tables, move_hugetlb_page_tables().)

Makes sense. TIL ;-)

thanks!

 - Joel
