Date:	Thu, 7 Apr 2011 20:15:23 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Andi Kleen <andi@...stfloor.org>, Andy Lutomirski <luto@....edu>,
	x86@...nel.org, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: [RFT/PATCH v2 2/6] x86-64: Optimize vread_tsc's barriers

>   Instruction scheduling isn't some kind of theoretical game. It's a
> very practical issue, and CPU schedulers are constrained to do a good
> job quickly and _effectively_. In other words, instructions don't just
> schedule randomly. In the presence of the barrier, is there any reason
> to believe that the rdtsc would really schedule oddly? There is never
> any reason to _delay_ an rdtsc (it can take no cache misses or wait on
> any other resources), so when it is not able to move up, where would
> it move?

CPUs are complex beasts, and I'm sure there are scheduling
constraints that neither you nor I have ever heard of :-)

There are always odd corner cases: e.g., a correctable error
somewhere internally may add a stall on one unit but not on
others, which can delay an arbitrary uop.

Also, there can be reordering against the reads of xtime and friends.
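
(For reference, the pattern we're arguing about is roughly the
following -- a simplified sketch only; the real kernel code goes
through rdtsc_barrier()/vget_cycles() and picks lfence vs. mfence
with alternatives:)

#include <stdint.h>

static inline uint64_t rdtsc_fenced(void)
{
	uint32_t lo, hi;

	/*
	 * Fences on both sides so rdtsc can neither be hoisted above
	 * earlier loads (e.g. of the vsyscall/xtime data) nor sunk
	 * below later ones.
	 */
	asm volatile("lfence\n\t"
		     "rdtsc\n\t"
		     "lfence"
		     : "=a" (lo), "=d" (hi)
		     :
		     : "memory");
	return ((uint64_t)hi << 32) | lo;
}

The question is whether the second fence is really needed, or whether
something cheaper can stand in for it.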

>  - the reason "back-to-back" (with the extreme example being in a
> tight loop) matters is that if something isn't in a tight loop, any
> jitter we see in the time counting wouldn't be visible anyway. One
> random timestamp is meaningless on its own. It's only when you have
> multiple ones that you can compare them. No?

There's also the case of multiple CPUs logging to a shared buffer.

I thought Vojtech's original test case was something like that in fact.
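
(Something along these lines, I mean -- a made-up minimal check, not
Vojtech's actual program: each thread publishes the largest TSC value
it has seen, and a fresh read that comes back smaller than a value
already published by another CPU means the unfenced rdtsc was
effectively reordered, or the TSCs are out of sync.)

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t last_seen;

static inline uint64_t rdtsc_plain(void)
{
	uint32_t lo, hi;

	/* deliberately unfenced */
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

static void *warp_check(void *arg)
{
	(void)arg;
	for (long i = 0; i < 10000000; i++) {
		uint64_t prev = atomic_load(&last_seen);
		uint64_t now  = rdtsc_plain();

		if (now < prev)
			printf("time warp: %llu < %llu\n",
			       (unsigned long long)now,
			       (unsigned long long)prev);

		/* publish the newest value we have seen */
		while (prev < now &&
		       !atomic_compare_exchange_weak(&last_seen, &prev, now))
			;
	}
	return NULL;
}

int main(void)
{
	pthread_t t[2];

	for (int i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, warp_check, NULL);
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	return 0;
}

Pin the two threads to different CPUs and the comparison is exactly the
cross-CPU case, not a back-to-back read on one core.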

> So _before_ we try some really clever data dependency trick with new
> inline asm and magic "double shifts to create a zero" things, I really
> would suggest just trying to remove the second lfence entirely and see
> how that works. Maybe it doesn't work, but ...

I would prefer to be safe rather than sorry. Also, there are still other
things to optimize anyway (I suggested a few in my earlier mail) which
are 100% safe, unlike this. Maybe those would be enough to offset
the cost of the "paranoid lfence".
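
For the archives, the dependency trick in question looks roughly like
this (paraphrasing the patch from memory, so treat the details as
approximate; gtod->cycle_last stands in for the vsyscall copy of
clock->cycle_last):

#include <stdint.h>

static inline uint64_t rdtsc_with_dep(uint64_t *dep_zero)
{
	uint64_t ret, zero;

	/*
	 * rdtsc writes edx:eax.  The first shl/or builds the 64-bit
	 * count in rax; the second shl turns rdx into a zero that the
	 * CPU only knows is zero once rdtsc has executed.
	 */
	asm volatile("lfence\n\t"
		     "rdtsc\n\t"
		     "shl $0x20, %%rdx\n\t"
		     "or  %%rdx, %%rax\n\t"
		     "shl $0x20, %%rdx"
		     : "=a" (ret), "=d" (zero)
		     :
		     : "cc");

	*dep_zero = zero;	/* always 0, but data-dependent on rdtsc */
	return ret;
}

/*
 * Usage sketch: folding the zero into the address of the cycle_last
 * load makes that load data-dependent on the TSC read, so it cannot
 * be issued early -- which is all the second lfence was there for.
 *
 *	tsc  = rdtsc_with_dep(&zero);
 *	last = *(uint64_t *)((char *)&gtod->cycle_last + zero);
 */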


-Andi

-- 
ak@...ux.intel.com -- Speaking for myself only.