Date:	Mon, 21 Jan 2013 15:24:40 +0000
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Matt Sealey <matt@...esi-usa.com>,
	Linux ARM Kernel ML <linux-arm-kernel@...ts.infradead.org>,
	devel@...verdev.osuosl.org, LKML <linux-kernel@...r.kernel.org>,
	Minchan Kim <minchan@...nel.org>,
	Nitin Gupta <ngupta@...are.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>
Subject: Re: Compilation problem with drivers/staging/zsmalloc when !SMP on
	ARM

On Fri, Jan 18, 2013 at 11:37:25PM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 18, 2013 at 01:45:27PM -0800, Greg Kroah-Hartman wrote:
> > On Fri, Jan 18, 2013 at 09:08:59PM +0000, Russell King - ARM Linux wrote:
> > > On Fri, Jan 18, 2013 at 02:24:15PM -0600, Matt Sealey wrote:
> > > > Hello all,
> > > > 
> > > > I wonder if anyone can shed some light on this linking problem I have
> > > > right now. If I configure my kernel without SMP support (it is a very
> > > > lean config for i.MX51 with device tree support only) I hit this error
> > > > on linking:
> > > 
> > > Yes, I looked at this, and I've decided that I will _not_ fix this export,
> > > neither will I accept a patch to add an export.
> > > 
> > > As far as I can see, this code is buggy in a SMP environment.  There's
> > > apparently no guarantee that:
> > > 
> > > 1. the mapping will be created on a particular CPU.
> > > 2. the mapping will then be used only on this specific CPU.
> > > 3. no guarantee that another CPU won't speculatively prefetch from this
> > >    region.
> 
> I thought the code had per_cpu for it - so that you wouldn't do that unless
> you really went out of your way to do it.

Actually, yes, you're right - that negates point (4) and possibly (2),
but (3) is still a concern.  (3) shouldn't be that much of an issue
_provided_ that the virtual addresses aren't explicitly made use of by
other CPUs.  Is that guaranteed by the zsmalloc code?  (IOW, does it
own the virtual region it places these mappings in?)

What is the performance difference between having and not having this
optimization?  Can you provide some measurements please?

Lastly, as you hold per_cpu stuff across this, that means preemption
is disabled - and any kind of scheduling is also a bug.  Is there
any reason the kmap stuff can't be used?  Has this been tried?  How
does it compare numerically with the existing solutions?
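[A minimal sketch, not from the thread, of the kmap_atomic() pattern being suggested here as an alternative to zsmalloc's per-CPU vm_map_ram() area. kmap_atomic() disables preemption and returns a short-lived, CPU-local mapping, which sidesteps the cross-CPU visibility concerns above; the helper name and its parameters are hypothetical.]

	#include <linux/highmem.h>
	#include <linux/string.h>

	/* Hypothetical helper: copy 'len' bytes out of an object that
	 * lives at offset 'off' within 'page'. */
	static void copy_object_out(struct page *page, unsigned int off,
				    void *dst, size_t len)
	{
		/* Per-CPU atomic mapping; preemption is disabled until
		 * the matching kunmap_atomic(). No other CPU ever sees
		 * this virtual address. */
		void *src = kmap_atomic(page);

		memcpy(dst, src + off, len);

		/* Must be unmapped on the same CPU, before sleeping. */
		kunmap_atomic(src);
	}

[Whether this performs comparably to the per-CPU vm area for objects that span page boundaries is exactly the measurement being asked for.]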
