Date:	Tue, 15 Sep 2015 15:24:06 +0200
From:	Arnd Bergmann <arnd@...db.de>
To:	linux-arm-kernel@...ts.infradead.org
Cc:	David Vrabel <david.vrabel@...rix.com>, wei.liu2@...rix.com,
	ian.campbell@...rix.com, stefano.stabellini@...citrix.com,
	linux-kernel@...r.kernel.org,
	Julien Grall <julien.grall@...rix.com>,
	xen-devel@...ts.xenproject.org, boris.ostrovsky@...cle.com,
	Roger Pau Monné <roger.pau@...rix.com>
Subject: Re: [Xen-devel] [PATCH v4 00/20] xen/arm64: Add support for 64KB page in Linux

On Tuesday 15 September 2015 14:14:09 David Vrabel wrote:
> On 14/09/15 12:32, Arnd Bergmann wrote:
> > On Monday 14 September 2015 13:04:59 Roger Pau Monné wrote:
> >>> TBH, I'm expecting a small impact on performance. It would be hard
> >>> to get exactly the same performance as today if we keep the helpers
> >>> so that the backend does not have to deal with the splitting and
> >>> page granularity itself.
> >>>
> >>> Although, if the performance impact is not acceptable, it may be
> >>> possible to optimize gnttab_foreach_grant_in_range by moving the
> >>> function inline. The current way of doing the loop is the fastest
> >>> I've found (I wrote a small program to test different approaches),
> >>> and we will need it when different sizes are supported.
> >>
> >> I don't expect the performance to drop massively with these patches
> >> applied, but it would be good to at least have an idea of the impact.
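
Just to make it concrete what the helpers are hiding: the splitting is
essentially a walk over the buffer in 4KB (Xen page) steps, calling a
per-chunk callback. A rough standalone sketch of that shape (the names,
the callback signature and the frame arithmetic below are invented for
illustration and don't claim to match the real
gnttab_foreach_grant_in_range in the series):

/* Illustrative only: the rough shape of a "for each 4KB grant-sized
 * chunk in a range" loop, not the real gnttab_foreach_grant_in_range. */
#include <stdio.h>

#define XEN_PAGE_SHIFT  12
#define XEN_PAGE_SIZE   (1UL << XEN_PAGE_SHIFT) /* Xen grants are 4KB */

/* hypothetical per-chunk callback */
typedef void (*chunk_fn_t)(unsigned long xen_frame_off, unsigned int offset,
                           unsigned int len, void *data);

/* walk [offset, offset + len) in steps that never cross a 4KB boundary */
static void foreach_grant_in_range(unsigned int offset, unsigned int len,
                                   chunk_fn_t fn, void *data)
{
    while (len) {
        unsigned int chunk = XEN_PAGE_SIZE - (offset & (XEN_PAGE_SIZE - 1));

        if (chunk > len)
            chunk = len;
        fn(offset >> XEN_PAGE_SHIFT, offset, chunk, data);
        offset += chunk;
        len -= chunk;
    }
}

static void print_chunk(unsigned long xen_frame_off, unsigned int offset,
                        unsigned int len, void *data)
{
    printf("4KB frame %lu: offset %u, len %u\n", xen_frame_off, offset, len);
}

int main(void)
{
    /* e.g. a request covering part of a single 64KB Linux page */
    foreach_grant_in_range(3000, 10000, print_chunk, NULL);
    return 0;
}
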
> > 
> > Note that using 64kb pages tends to destroy performance in Linux in
> > any case, as the memory consumption of most workloads explodes. In a
> > virtualized environment you already tend to be memory constrained, so
> > any measurement should take that into account and put the extra
> > overhead into perspective against the massive overhead of running
> > 64kb pages when RAM is tight.
> 
> If this is the case, why are some distros using 64 KiB pages then?

I believe that IBM started this on PowerPC64 when they found that it
performs better in microbenchmarks and certain database workloads, as
long as you have enough memory and do not work with small files or
many processes.
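
To put rough numbers on the small-files point (a toy calculation, the
file count and average size below are invented, not measured): every
cached file occupies a whole number of base pages, so the rounding loss
grows with the page size.

/* Toy page-cache arithmetic: rounding loss for small files at 4KB vs
 * 64KB base pages. The workload numbers are invented for illustration. */
#include <stdio.h>

int main(void)
{
    const unsigned long nfiles = 100000;      /* e.g. a source tree */
    const unsigned long avg_file = 10 * 1024; /* ~10KB average file */
    const unsigned long page_sizes[] = { 4096, 65536 };

    for (int i = 0; i < 2; i++) {
        unsigned long ps = page_sizes[i];
        /* each file is rounded up to whole base pages in the page cache */
        unsigned long pages = (avg_file + ps - 1) / ps;
        unsigned long total = nfiles * pages * ps;

        printf("%5lukB pages: ~%lu MB of page cache\n",
               ps / 1024, total >> 20);
    }
    return 0;
}

With these invented numbers the 64kb case needs roughly five times as
much page cache for the same files, which is the kind of blow-up I mean.
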

I also believe that the AIX memory management code is designed differently
enough that it suffers less from large pages (or suffers more from
small pages), so the hardware designers pushed Linux to do the same as AIX.

Why you'd do it on ARM64 I have no idea at all, other than to copy what
IBM did on PowerPC. If your TLB is too small, hardware designers may
of course try to push the OS towards using pages as large as possible so
that the low-level benchmarks suck less, but in general you'd be better
off with transparent hugepages and 4kb or 8kb pages for real workloads.
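
To illustrate the transparent hugepage route (a minimal userspace
sketch, assuming CONFIG_TRANSPARENT_HUGEPAGE with THP set to 'always'
or 'madvise'): an application keeps the 4kb base page size and still
gets large TLB entries for the mappings that actually matter.

/* Sketch: keep 4KB base pages, but ask for transparent hugepages on a
 * large anonymous mapping via madvise(MADV_HUGEPAGE). */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64UL << 20; /* 64MB region */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Hint that this range should be backed by huge pages (2MB with a
     * 4KB base page on x86-64 and arm64). */
    if (madvise(p, len, MADV_HUGEPAGE))
        perror("madvise(MADV_HUGEPAGE)");

    munmap(p, len);
    return 0;
}

The hint is per-mapping, so the workloads that benefit from big pages
get them, while everything else only pays 4kb of rounding per object.
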

	Arnd
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
