Message-ID: <20080819112013.GI9807@one.firstfloor.org>
Date:	Tue, 19 Aug 2008 13:20:13 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Andi Kleen <andi@...stfloor.org>, Ingo Molnar <mingo@...e.hu>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
	Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [PATCH 0 of 9] x86/smp function calls: convert x86 tlb flushes to use function calls [POST 2]

> 
> > AFAIK mmap flushing hasn't changed much in 2.6 and it tends
> > to batch well anyways in this case (unlike vmscan swapping). I would be
> > careful to really optimize the real culprits which are likely elsewhere.
> 
> It wasn't actually the TLB flushing side of it that was causing
> the slowdown IIRC. It's just all over the map.

Those are the nastiest slowdowns to track down.

> Notifier hooks; accounting statistics; 4lpt; cond_resched and
> low latency code causing functions to spill more to stack; cache
> misses from data structures increasing or becoming unaligned...

Hmm, on a benchmark here a simple anonymous mmap+munmap takes ~3800 cycles.
Was it ever really that much faster?

BTW, even a simple open+close is about twice as slow as that.
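
For reference, here is a minimal sketch (mine, not from this thread) of the
kind of microbenchmark being discussed: it times an anonymous mmap+munmap
pair in CPU cycles via rdtsc. x86-only, assumes a constant-rate TSC, and
the iteration count and mapping size are arbitrary.

/* Rough microbenchmark sketch: cycles per anonymous mmap+munmap pair.
 * x86-only (rdtsc); assumes a constant-rate TSC. */
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>

static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        enum { ITERS = 100000, LEN = 4096 };
        uint64_t start, end;
        int i;

        start = rdtsc();
        for (i = 0; i < ITERS; i++) {
                void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                        return 1;
                munmap(p, LEN);
        }
        end = rdtsc();

        printf("mmap+munmap: ~%llu cycles/iter\n",
               (unsigned long long)((end - start) / ITERS));
        return 0;
}
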
> 
> Basically just lots of little straws that added up to kill the
> camel. I didn't even get to the bottom of the whole thing. But
> my point is that even 1% here and there eventually adds up to a
> big headache for someone. 

There is a classic email floating around about how such 1% regressions
eventually killed IRIX. I need to dig that out.

> inevitable to slowdown, but in all other cases we should always
> be aiming to make the kernel faster rather than slower.

It's hard to catch such regressions later. I wonder if we really need
some kind of mini-benchmark collection that is run regularly, checks
the latency of such micro operations, and points out regressions when
they happen.
AFAIK the OpenSolaris people have something like that.
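
As a rough illustration (my sketch, not an existing tool), such a check
could be as simple as timing each micro operation and comparing against a
stored baseline, complaining about anything noticeably slower. The baseline
file name and the 1% threshold here are made up for the example.

/* Sketch of a regression check: time one micro operation, compare to a
 * stored per-machine baseline, flag if it regressed by more than ~1%. */
#include <stdio.h>
#include <time.h>
#include <sys/mman.h>

/* One micro operation to track; a real collection would have many. */
static void op_mmap_munmap(void)
{
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p != MAP_FAILED)
                munmap(p, 4096);
}

static double time_ns_per_op(void (*op)(void), int iters)
{
        struct timespec t0, t1;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < iters; i++)
                op();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / iters;
}

int main(void)
{
        const char *baseline_file = "mmap.baseline"; /* hypothetical name */
        double now = time_ns_per_op(op_mmap_munmap, 100000);
        double base;
        FILE *f = fopen(baseline_file, "r");

        if (f && fscanf(f, "%lf", &base) == 1) {
                fclose(f);
                if (now > base * 1.01)  /* flag >1% regressions */
                        printf("REGRESSION: %.1f ns/op vs baseline %.1f\n",
                               now, base);
                else
                        printf("ok: %.1f ns/op (baseline %.1f)\n", now, base);
        } else {
                /* First run on this machine: record the baseline. */
                f = fopen(baseline_file, "w");
                if (f) {
                        fprintf(f, "%f\n", now);
                        fclose(f);
                }
                printf("recorded baseline: %.1f ns/op\n", now);
        }
        return 0;
}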

-Andi
