Message-ID: <CAG48ez3OXbiruoATeSp-PZ9ZdDcFuwJ4+XCS6EgY_jrtcqqGcw@mail.gmail.com>
Date: Thu, 27 Jul 2023 18:10:12 +0200
From: Jann Horn <jannh@...gle.com>
To: Alan Stern <stern@...land.harvard.edu>
Cc: Will Deacon <will@...nel.org>, paulmck@...nel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrea Parri <parri.andrea@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Akira Yokosawa <akiyks@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
Joel Fernandes <joel@...lfernandes.org>
Subject: Re: [PATCH 0/2] fix vma->anon_vma check for per-VMA locking; fix
anon_vma memory ordering
On Thu, Jul 27, 2023 at 5:44 PM Alan Stern <stern@...land.harvard.edu> wrote:
> On Thu, Jul 27, 2023 at 03:57:47PM +0100, Will Deacon wrote:
> > On Thu, Jul 27, 2023 at 04:39:34PM +0200, Jann Horn wrote:
>
> > > Assume that we are holding some kind of lock that ensures that the
> > > only possible concurrent update to "vma->anon_vma" is that it changes
> > > from a NULL pointer to a non-NULL pointer (using smp_store_release()).
> > >
> > >
> > > if (READ_ONCE(vma->anon_vma) != NULL) {
> > >   // we now know that vma->anon_vma cannot change anymore
> > >
> > >   // access the same memory location again with a plain load
> > >   struct anon_vma *a = vma->anon_vma;
> > >
> > >   // this needs to be address-dependency-ordered against one of
> > >   // the loads from vma->anon_vma
> > >   struct anon_vma *root = a->root;
> > > }
>
> This reads a little oddly, perhaps because it's a fragment from a larger
> piece of code.
Yes, exactly. The READ_ONCE() would be in anon_vma_prepare(), which is
a helper used to ensure that a VMA is associated with an anon_vma, and
then the vma->anon_vma is used further down inside the fault handling
path. Something like:
do_cow_fault
  anon_vma_prepare
    READ_ONCE(vma->anon_vma)
    barrier()
  finish_fault
    do_set_pte
      page_add_new_anon_rmap
        folio_add_new_anon_rmap
          __page_set_anon_rmap
            [reads vma->anon_vma]
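
To spell out the pairing this relies on, here is a writer-side sketch
(only a sketch, not the actual mm code: the helper name publish_anon_vma()
is a simplified stand-in, and the locking, allocation and error handling
that __anon_vma_prepare() really does are elided). The point is that the
pointer only ever goes from NULL to non-NULL, and that the store is a
release, so the anon_vma's fields (e.g. ->root) are initialized before
the pointer can be observed:

    /*
     * Writer-side sketch, using the kernel's smp_store_release();
     * simplified stand-in for what __anon_vma_prepare() would do
     * under the appropriate locks.
     */
    static void publish_anon_vma(struct vm_area_struct *vma,
                                 struct anon_vma *anon_vma)
    {
            /* anon_vma->root etc. must be fully set up before this point */

            /*
             * Pairs with the READ_ONCE()/smp_load_acquire() check on the
             * fault path: once a reader sees a non-NULL pointer, it also
             * sees the initialized fields behind it.
             */
            smp_store_release(&vma->anon_vma, anon_vma);
    }
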
Anyway, I guess I'll follow what Paul and Matthew said and go with
smp_load_acquire().
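
For reference, a rough sketch of what the acquire-based check could look
like (not necessarily the exact patch; __anon_vma_prepare() is the existing
slow path that allocates and publishes the anon_vma):

    static inline int anon_vma_prepare(struct vm_area_struct *vma)
    {
            /*
             * The acquire load pairs with the smp_store_release() on the
             * writer side and orders all later reads of vma->anon_vma
             * (including the plain load in __page_set_anon_rmap()) after
             * this check.
             */
            if (likely(smp_load_acquire(&vma->anon_vma)))
                    return 0;

            return __anon_vma_prepare(vma);
    }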