Date:	Thu, 16 Oct 2008 05:06:14 +1100
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Matt Mackall <mpm@...enic.com>, Hugh Dickins <hugh@...itas.com>
Cc:	torvalds@...ux-foundation.org,
	Pekka Enberg <penberg@...helsinki.fi>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [rfc] SLOB memory ordering issue

On Thursday 16 October 2008 04:10, Nick Piggin wrote:
> On Thursday 16 October 2008 03:54, Matt Mackall wrote:
> > On Thu, 2008-10-16 at 03:34 +1100, Nick Piggin wrote:
> > > I think I see a possible memory ordering problem with SLOB:
> > > In slab caches with constructors, the constructor is run
> > > before returning the object to the caller, with no memory barrier
> > > afterwards.
> > >
> > > Now there is nothing that indicates the _exact_ behaviour
> > > required here. Is it at all reasonable to expect ->ctor() to
> > > be visible to all CPUs and not just the allocating CPU?
> >
> > Do you have a failure scenario in mind?
> >
> > First, it's a categorical mistake for another CPU to be looking at the
> > contents of an object unless it knows that it's in an allocated state.
> >
> > For another CPU to receive that knowledge (by reading a causally-valid
> > pointer to it in memory), a memory barrier has to occur, no?
>
> No.
>
> For a (slightly bad) example: some architectures have a ctor for their
> page table page slabs. Not a bad thing to do.
>
> Now they allocate these guys, take a lock, then insert them into the
> page tables. The lock is only an acquire barrier, so the stores issued
> before it (the ctor's included) can leak past it.
>
> The read side is all lockless and in some cases even done by hardware.
>
> Now in _practice_, this is not a problem, because some architectures
> don't have ctors there, and I spotted the issue and put proper barriers
> in. But it was a known fact that ctors were always used, and if I had
> assumed ctors act as barriers and so didn't need the wmb, there would
> have been a bug.
>
> Especially the fact that a lock doesn't order the stores makes me
> worried that a lockless read-side algorithm might have a bug here.
> Fortunately, most of them use RCU and probably use rcu_assign_pointer
> even if they do have ctors. But I can't be sure...
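
To make that hazard concrete, here is a rough sketch. Every name in it
(struct obj, obj_ctor, my_cachep, install_lock, shared, publish) is
invented for illustration; this is not code from any real architecture:

#include <linux/slab.h>
#include <linux/spinlock.h>

struct obj {
        spinlock_t lock;                /* initialised by the ctor */
        int data;
};

static void obj_ctor(void *p)
{
        struct obj *o = p;

        spin_lock_init(&o->lock);
        o->data = 0;
}

static struct kmem_cache *my_cachep;    /* created with obj_ctor */
static DEFINE_SPINLOCK(install_lock);
static struct obj *shared;              /* read locklessly elsewhere */

static void publish(void)
{
        struct obj *o = kmem_cache_alloc(my_cachep, GFP_KERNEL);

        if (!o)
                return;
        spin_lock(&install_lock);
        /*
         * spin_lock() is only an acquire barrier: nothing orders the
         * ctor's stores against the publishing store below, so a
         * lockless reader that sees 'shared' non-NULL can still find
         * o->lock looking uninitialised.  An smp_wmb() here is what
         * closes that window.
         */
        shared = o;
        spin_unlock(&install_lock);
}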

OK, now I have something that'll blow your fuckin mind.

anon_vma_cachep.

P0
do_anonymous_page()
 anon_vma_prepare()
  ctor(anon_vma)
  [sets vma->anon_vma = anon_vma]

P1
do_anonymous_page()
 anon_vma_prepare()
  [sees P0 already allocated it]
 lru_cache_add(page)
  [flushes page to lru]
 page_add_anon_rmap (increments mapcount)
  page_set_anon_rmap
   [sets page->anon_vma = anon_vma]

P2
find page from lru
page_referenced()
 page_referenced_anon()
  page_lock_anon_vma()
   [loads anon_vma from page->anon_vma]
   spin_lock(&anon_vma->lock)


Who was it that said memory ordering was self-evident?

For everyone else:

P1 sees P0's store to vma->anon_vma, then P2 sees P1's store
to page->anon_vma (among others), but P2 does not see P0's ctor
store to initialise anon_vma->lock.
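
If the ctor's initialisation is to be relied on here, the pairing would
have to look something like this. Sketch only: I'm writing page->anon_vma
for the field that is really encoded in page->mapping, as in the trace
above. And note that even with these barriers, the P0 to P1 to P2 chain
still relies on stores being seen transitively, which is the very thing
in doubt:

/* P0, in anon_vma_prepare() after the allocation; the ctor already
 * ran spin_lock_init(&anon_vma->lock) inside the allocator */
anon_vma = kmem_cache_alloc(anon_vma_cachep, GFP_KERNEL);
smp_wmb();                      /* order the ctor's stores ... */
vma->anon_vma = anon_vma;       /* ... before the publishing store */

/* P2, in page_lock_anon_vma() */
anon_vma = page->anon_vma;
smp_read_barrier_depends();     /* pairs through the data dependency;
                                   a no-op everywhere but Alpha */
spin_lock(&anon_vma->lock);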

And there seems to be another bug there too, though it's a plain control
race rather than strictly[*] a data race: P0 is also executing a
list_add_tail of the vma onto anon_vma->head at some point here. So even
assuming we're running on a machine with transitive store ordering, where
the above race can't hit, P2 can subsequently run a list_for_each_entry
over anon_vma->head while P0 is still in the process of modifying it.
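
Spelling that one out (from memory of the code, so check the tree): the
two paths serialise on different locks, so the list walk can observe the
add in flight:

/* P0: anon_vma_prepare(), newly-allocated case.  The list_add_tail
 * runs under mm->page_table_lock, not under anon_vma->lock. */
spin_lock(&mm->page_table_lock);
vma->anon_vma = anon_vma;
list_add_tail(&vma->anon_vma_node, &anon_vma->head);
spin_unlock(&mm->page_table_lock);

/* P2: page_referenced_anon() walks the same list holding only
 * anon_vma->lock, so nothing stops it running concurrently with
 * the list_add_tail above. */
spin_lock(&anon_vma->lock);
list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
        /* ... can observe the list mid-update ... */
}
spin_unlock(&anon_vma->lock);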

Am I the one who's bamboozled, or can anyone confirm?
