Message-ID: <20200715155913.plxsu7h55xif2jic@ca-dmjordan1.us.oracle.com>
Date: Wed, 15 Jul 2020 11:59:13 -0400
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: David Hildenbrand <david@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...nel.org>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Sistare <steven.sistare@...cle.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] x86/mm: use max memory block size on bare metal
On Tue, Jul 14, 2020 at 04:54:50PM -0400, Daniel Jordan wrote:
> Some of our servers spend significant time at kernel boot initializing
> memory block sysfs directories and then creating symlinks between them
> and the corresponding nodes. The slowness happens because the machines
> get stuck with the smallest supported memory block size on x86 (128M),
> which results in 16,288 directories to cover the 2T of installed RAM.
> The search for each memory block is noticeable even with
> commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
> xarray to accelerate lookup").
>
> Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
> on the end of boot memory") chooses the block size based on alignment
> with memory end. That addresses hotplug failures in qemu guests, but
> for bare metal systems whose memory end isn't aligned to even the
> smallest size, it leaves them at 128M.
>
> Make kernels that aren't running on a hypervisor use the largest
> supported size (2G) to minimize overhead on big machines. Kernel boot
> goes 7% faster on the aforementioned servers, shaving off half a second.
>
> Signed-off-by: Daniel Jordan <daniel.m.jordan@...cle.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Andy Lutomirski <luto@...nel.org>
> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: David Hildenbrand <david@...hat.com>
Darn. David, I forgot to add your ack from v2. My assumption is that it still
stands after the minor change in this version, but please do correct me if I'm
wrong.
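For readers following along, the alignment-based selection that the quoted changelog describes can be sketched outside the kernel. This is a minimal Python illustration, not the actual kernel code: the function name and the MiB-based interface are hypothetical, and only the behavior (try sizes from 2G down to 128M, keep the first that divides the end of boot memory, and with this patch use the maximum directly on bare metal) follows the description above.

```python
# Hedged sketch of the x86 memory block size selection described in the
# changelog above. Sizes are in MiB; pick_block_size() is an illustrative
# helper, not a kernel function.

MIN_BLOCK_MIB = 128     # smallest supported x86 memory block size
MAX_BLOCK_MIB = 2048    # largest supported size (2G)

def pick_block_size(boot_mem_end_mib, bare_metal=False):
    """Return a memory block size in MiB.

    Mirrors commit 078eb6aa50dc's approach: walk down from the largest
    size and keep the first one that evenly divides the end of boot
    memory. With the patch under discussion, bare metal skips the
    search and simply uses the maximum.
    """
    if bare_metal:
        return MAX_BLOCK_MIB
    size = MAX_BLOCK_MIB
    while size > MIN_BLOCK_MIB and boot_mem_end_mib % size != 0:
        size //= 2
    return size

# A 2T machine whose memory end isn't aligned even to the smallest
# block size gets stuck at the 128M minimum, hence the many sysfs
# directories; the patch picks 2G on bare metal regardless.
unaligned_end = 2 * 1024 * 1024 + 64   # hypothetical unaligned end (MiB)
print(pick_block_size(unaligned_end))                    # 128
print(pick_block_size(unaligned_end, bare_metal=True))   # 2048
```

With a 2G block size, the same 2T of RAM needs 1,024 memory block directories instead of roughly sixteen thousand, which is where the boot-time saving in the changelog comes from.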