Date:	Tue, 17 Jun 2014 07:41:51 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Christoph Lameter <cl@...ux-foundation.org>,
	David Howells <dhowells@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] percpu: add data dependency barrier in percpu
 accessors and operations

On Thu, Jun 12, 2014 at 11:52:27AM -0400, Tejun Heo wrote:
> Hello, Paul.
> 
> On Thu, Jun 12, 2014 at 08:34:26AM -0700, Paul E. McKenney wrote:
> > > It can be argued that smp_read_barrier_depends() is the responsibility
> > > of the caller; however, discerning local and remote accesses gets very
> > > confusing with percpu areas (initialized remotely but local to this
> > > cpu, and vice versa), and the data dependency barrier is free on all
> > > archs except alpha, so I think it's better to include it as part of
> > > the percpu accessors and operations.
> > 
> > OK, I will bite...  Did you find a bug of this form?  (I do see a
> > couple of possible bugs on a quick look, so maybe...)
> 
> I don't have an actual bug report or repro case, which isn't too
> surprising given that this can't happen on x86, but I know of places
> where a percpu pointer is used opportunistically (if allocated), and
> those would be affected by this.

Fair enough.

> > I would argue that the code above should instead say something like:
> > 
> > 	smp_store_release(p, alloc_percpu());
> 
> Well, we can always say that it's the caller's responsibility to put
> in enough barriers, but that becomes weird for percpu areas because the
> ownership of the allocated areas isn't really clear.  Whether each
> allocated cpu-specific area's queued writes belong to the allocating
> cpu or to the cpu that the area is associated with is an implementation
> detail which may theoretically change, and as such I think it'd be best
> for the problem to be dealt with in the percpu allocator and accessors
> proper.  I think this is way too subtle to ask the users to handle
> correctly.

But in the case where the allocation is initially zeroed and then a few
fields are initialized to non-zero values, a memory barrier in the
allocator is insufficient.  I believe it will be easier for people to
always use smp_store_release() or the like than to follow a rule where
it is unneeded when the initialization is strictly zero but needed when
additional initialization is required.

> > I was worried about use of per_cpu() by the reading CPU, but of course
> > in that case, things are initialized at compiler time.
> 
> Not really.  The template is initialized on kernel load.  The actual
> percpu areas have to be allocated dynamically; however, this happens
> while the machine is still running UP, so this should be okay.

Good point.  How about per-CPU variables that are introduced by
loadable modules?  (I would guess that there are plenty of memory
barriers in the load process, given that text and data also need
to be visible to other CPUs.)

> > > I wonder whether we also need a smp_wmb() for other __GFP_ZERO
> > > allocations.  There isn't one and requiring the users to perform
> > > smp_wmb() to ensure the propagation of zeroing seems too subtle.
> > 
> > I would say "no" even if we do decide that alloc_percpu() needs
> > an smp_wmb().  The reason is that you really do have to store the
> > pointer at some point, and you should use smp_store_release() for
> > this task, at least if you are storing to something accessible to
> > other CPUs.
> 
> For normal allocations, I don't have a strong opinion.  I'd prefer to
> think of memory coming out of the allocator to have memory ordering
> already taken care of but it is a lot less murky than with percpu
> areas.

Again, it won't help for the allocator to strongly order the
initialization to zero if there are additional initializations of some
fields to non-zero values.  And again, it should be a lot easier to
require the smp_store_release() or whatever uniformly than only in cases
where additional initialization occurred.

							Thanx, Paul

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
