Date:	Sat, 28 Feb 2009 09:29:22 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Salman Qazi <sqazi@...gle.com>, davem@...emloft.net,
	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, Andi Kleen <andi@...stfloor.org>
Subject: Re: [patch] x86, mm: pass in 'total' to __copy_from_user_*nocache()


* Nick Piggin <nickpiggin@...oo.com.au> wrote:

> On Thursday 26 February 2009 03:04:22 Linus Torvalds wrote:
> > On Wed, 25 Feb 2009, Ingo Molnar wrote:
> > > The main artifact would be the unaligned edges around a bigger
> > > write. In particular the tail portion of a big write will be
> > > cached.
> >
> > .. but I don't really agree that this is a problem.
> >
> > Sure, it's "wrong", but does it actually matter? No. Is it worth adding
> > complexity to existing interfaces for? I think not.
> >
> > In general, I think that software should not mess with nontemporal stores.
> > The thing is, software almost never knows enough about the CPU cache to
> > make an intelligent choice.
> >
> > So I didn't want to apply the nocache patches in the first place, but the
> > performance numbers were pretty clear. I'll take "real numbers" over my
> > personal dislikes any day. But now we have real numbers going the other
> > way for small writes, and a patch to fix that.
> >
> > But we have no amount of real numbers for the edge cases, and I don't
> > think they matter. In fact, I don't think they _can_ matter, because it is
> > inevitably always going to be an issue of "which CPU and which memory
> > subsystem".
> >
> > In other words, there is no "right" answer. There is no "perfect". But
> > there is "we can fix the real numbers".
> 
> Well... these are "real" benchmark numbers, where the benchmark
> apparently performs an access pattern that should favour
> nontemporal stores (the numbers only measure the phase where
> write(2) is being done).
> 
> 
> > At the same time, we also do know:
> >  - caches work
> >  - CPU designers will continue to worry about the normal (cached) case,
> >    and will do reasonable things with cache replacement.
> >  - ergo: we should always consider the cached case to be the _normal_ mode,
> >    and it's the nontemporal loads/stores that need to explain themselves.
> >
> > So I do think we should just apply the simple patch. Not make a big deal
> > out of it. We have numbers. We use cached memory copies for everything
> > else. It's always "safe".
> >
> > And we pretty much know that the only time we will ever really care about
> > the nontemporal case is with big writes - where the "edge effects"
> > essentially become total noise.
> 
> I guess so. I wouldn't mind just doing cached stores all the 
> time for the reasons you say.
> 
> But whatever. If it ever becomes *really* important, I guess 
> we can flag this kind of behaviour from userspace.

Important question: is there a standing NAK for the 'total' 
parameter addition patch i did? You requested it and Linus didn't 
like it ... and i've measured it: it adds just a single 
instruction to the whole kernel, so it did not seem too bad 
to me.

It might be wrong in principle though, so i will revert it if 
needed, before it spreads into too many topics.

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
