Date:	Thu, 27 Mar 2008 12:08:34 +1100
From:	Paul Mackerras <paulus@...ba.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Andi Kleen <andi@...stfloor.org>,
	David Miller <davem@...emloft.net>, clameter@....com,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	linux-ia64@...r.kernel.org
Subject: Re: larger default page sizes...

Linus Torvalds writes:

> On Wed, 26 Mar 2008, Paul Mackerras wrote:
> > 
> > So the improvement in the user time is almost all due to the reduced
> > TLB misses (as one would expect).  For the system time, using 64k
> > pages in the VM reduces it by about 21%, and using 64k hardware pages
> > reduces it by another 30%.  So the reduction in kernel overhead is
> > significant but not as large as the impact of reducing TLB misses.
> 
> I realize that getting the POWER people to accept that they have been 
> total morons when it comes to VM for the last three decades is hard, but 
> somebody in the POWER hardware design camp should (a) be told and (b) be 
> really ashamed of themselves.
> 
> Is this a POWER6 or what? Because 21% overhead from TLB handling on 
> something like gcc shows that some piece of hardware is absolute crap. 

You have misunderstood the 21% number.  That number has *nothing* to
do with hardware TLB miss handling, and everything to do with how long
the generic Linux virtual memory code spends doing its thing (page
faults, setting up and tearing down Linux page tables, etc.).  It
doesn't even have anything to do with the hash table (hardware page
table), because both cases are using 4k hardware pages.  Thus in both
cases the TLB misses and hash-table misses would have been the same.

The *only* difference between the cases is the page size that the
generic Linux virtual memory code is using.  With the 64k page size
our architecture-independent kernel code runs 21% faster.

Thus the 21% is not about the TLB or any hardware thing at all, it's
about the larger per-byte overhead of our kernel code when using the
smaller page size.
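
To put a number on that per-byte overhead, here is a quick back-of-the-envelope sketch (the 16 MiB region size is a hypothetical example, not from the measurements above): mapping the same region with 64k VM pages takes 1/16th as many page faults and page-table operations as with 4k pages, to first order.

```python
# Rough illustration: generic-VM work (faults, PTE setup/teardown) scales
# with the number of pages covering a region, so a 16x larger page size
# means roughly 1/16th the operations. Region size is a made-up example.
region = 16 * 1024 * 1024  # 16 MiB mapping

for page_size in (4 * 1024, 64 * 1024):
    ops = region // page_size  # one fault / PTE op per page, to first order
    print(f"{page_size // 1024:2d}k pages: {ops} page-table operations")
```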

The thing you were ranting about -- hardware TLB handling overhead --
comes in at 5%, comparing 4k hardware pages to 64k hardware pages (444
seconds vs. 420 seconds user time for the kernel compile).  And yes,
it's a POWER6.
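
As a sanity check on the arithmetic (the 444 s and 420 s figures are the ones quoted above; the percentage is derived from them):

```python
# User time for the kernel compile, as reported above.
user_time_4k = 444.0   # seconds, 4k hardware pages
user_time_64k = 420.0  # seconds, 64k hardware pages

saving = (user_time_4k - user_time_64k) / user_time_4k * 100
print(f"hardware-TLB saving: {saving:.1f}%")  # ~5.4%, matching the ~5% quoted
```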

Paul.