Message-Id: <1159489343.6241.18.camel@localhost.localdomain>
Date:	Fri, 29 Sep 2006 10:22:22 +1000
From:	Rusty Russell <rusty@...tcorp.com.au>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Pavel Machek <pavel@....cz>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...l.org>
Subject: Re: [PATCH 0/6] Per-processor private data areas for i386

On Wed, 2006-09-27 at 13:28 -0700, Jeremy Fitzhardinge wrote:
> Pavel Machek wrote:
> > So we have 4% slowdown...
> >   
> 
> Yes, that would be the worst-case slowdown in the hot-cache case.  
> Rearranging the layout of the GDT would remove any theoretical 
> cold-cache slowdown (I haven't measured if there's any impact in practice).
>
> Rusty has also done more comprehensive benchmarks with his variant of 
> this patch series, and found no statistically interesting performance 
> difference.  Which is pretty much what I would expect, since it doesn't 
> increase cache-misses at all.

OK, here are my null-syscall results.  This is on an Intel(R) Pentium(R)
4 CPU 3.00GHz (stepping 9), single processor (SMP kernel).

I did three sets of tests: before, with saving/restoring %gs, and with
using %gs for per-cpu vars, current and smp_processor_id().
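
(For anyone who hasn't looked at the patches: the closest userspace
analogy to the third configuration is __thread TLS, which gcc also
implements as a %gs-relative access on i386.  The snippet below is just
that analogy for illustration -- it is not the kernel code, and the
variable name is made up.)

/* Userspace analogy only: on i386, gcc/glibc put thread-local storage
 * behind %gs, so the access to "counter" below compiles to a single
 * %gs-relative mov -- the same shape of access the patches use for
 * per-cpu variables, current and smp_processor_id(), instead of
 * indexing an array by CPU number. */
#include <stdio.h>

static __thread int counter;	/* %gs-relative load/store on i386 */

int main(void)
{
	counter++;
	printf("counter = %d\n", counter);
	return 0;
}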

Before:
swarm5.0:Simple syscall: 0.3734 microseconds
swarm5.1:Simple syscall: 0.3734 microseconds
swarm5.2:Simple syscall: 0.3734 microseconds
swarm5.3:Simple syscall: 0.3734 microseconds

With saving/restoring %gs (the per-cpu numbers were the same):
swarm5.4:Simple syscall: 0.3801 microseconds
swarm5.5:Simple syscall: 0.3801 microseconds
swarm5.6:Simple syscall: 0.3804 microseconds
swarm5.7:Simple syscall: 0.3801 microseconds

That's a 6.5ns cost for saving and restoring %gs, and the other lmbench
syscall benchmarks showed similar differences where the noise didn't
overwhelm them.
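
(If anyone wants to eyeball the same number without lmbench, something
like the loop below gets into the right ballpark.  It's only a rough
sketch, not the lmbench harness; it assumes getppid() is a good
stand-in for a do-nothing syscall, which is what lat_syscall's "null"
case uses as far as I remember.)

/* Rough reproduction of the "Simple syscall" number above: time a large
 * batch of getppid() calls and report the average per-call cost in
 * microseconds.  getppid() is used rather than getpid() because libc
 * may cache the latter. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
	const long iters = 10000000;
	struct timeval start, end;
	double usec;
	long i;

	gettimeofday(&start, NULL);
	for (i = 0; i < iters; i++)
		getppid();
	gettimeofday(&end, NULL);

	usec = (end.tv_sec - start.tv_sec) * 1e6
		+ (end.tv_usec - start.tv_usec);
	printf("Simple syscall: %.4f microseconds\n", usec / iters);
	return 0;
}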

On kernbench, the differences were in the noise.

Strangely, I see a ~4% slowdown on fork+exec when using %gs for per-cpu
vars, which I am now investigating (71.0831 usec before, 71.1725 usec
with saving, 73.7458 usec with per-cpu!).

Cheers,
Rusty.
-- 
Help! Save Australia from the worst of the DMCA: http://linux.org.au/law

