Message-ID: <ab4e0907-522d-7834-03f3-014e3ed904c5@redhat.com>
Date: Wed, 10 Jun 2020 09:20:52 +0200
From: David Hildenbrand <david@...hat.com>
To: Daniel Jordan <daniel.m.jordan@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...nel.org>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Sistare <steven.sistare@...cle.com>
Subject: Re: [PATCH v2] x86/mm: use max memory block size on bare metal
On 10.06.20 00:54, Daniel Jordan wrote:
> Some of our servers spend significant time at kernel boot initializing
> memory block sysfs directories and then creating symlinks between them
> and the corresponding nodes. The slowness happens because the machines
> get stuck with the smallest supported memory block size on x86 (128M),
> which results in 16,288 directories to cover the 2T of installed RAM.
> The search for each memory block is noticeable even with
> commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
> xarray to accelerate lookup").
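
For context, the gist of that commit, sketched roughly (illustrative,
not the literal upstream code): memory blocks get indexed by block id
in an xarray, so each lookup is a direct load instead of a scan over
every registered sysfs device:

static DEFINE_XARRAY(memory_blocks);

static struct memory_block *find_memory_block_by_id(unsigned long block_id)
{
        return xa_load(&memory_blocks, block_id);
}

static int cache_memory_block(struct memory_block *mem, unsigned long block_id)
{
        /* xa_err() turns a failed store into an errno, 0 on success. */
        return xa_err(xa_store(&memory_blocks, block_id, mem, GFP_KERNEL));
}
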
>
> Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
> on the end of boot memory") chooses the block size based on alignment
> with memory end. That addresses hotplug failures in qemu guests, but
> bare metal systems whose memory end isn't aligned to even the smallest
> size are left at 128M.
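
To make the alignment heuristic concrete, here is a toy userspace
rendition of the selection loop from that commit (the hunk below keeps
it as a fallback); the end address is made up:

#include <stdio.h>

#define MIN_MEMORY_BLOCK_SIZE   (128UL << 20)   /* 128M */
#define MAX_BLOCK_SIZE          (2UL << 30)     /* 2G */
#define IS_ALIGNED(x, a)        (((x) & ((a) - 1)) == 0)

int main(void)
{
        /* Hypothetical bare-metal memory end, aligned only to 2M. */
        unsigned long boot_mem_end = 0x103fe00000UL;
        unsigned long bz;

        /* Largest power-of-2 block size <= 2G that divides the end. */
        for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1)
                if (IS_ALIGNED(boot_mem_end, bz))
                        break;

        printf("block size: %luM\n", bz >> 20);  /* prints 128 */
        return 0;
}
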
>
> Make kernels that aren't running on a hypervisor use the largest
> supported size (2G) to minimize overhead on big machines. Kernel boot
> goes 7% faster on the aforementioned servers, shaving off half a second.
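
FWIW, the hypervisor check used in the hunk below boils down to
comparing against the hypervisor type detected at boot; from memory,
the helper in asm/hypervisor.h is roughly:

static inline bool hypervisor_is_type(enum x86_hypervisor_type type)
{
        return x86_hyper_type == type;
}

so X86_HYPER_NATIVE means no hypervisor was detected.
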
>
> Signed-off-by: Daniel Jordan <daniel.m.jordan@...cle.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Andy Lutomirski <luto@...nel.org>
> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Michal Hocko <mhocko@...nel.org>
> Cc: Pavel Tatashin <pasha.tatashin@...een.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Steven Sistare <steven.sistare@...cle.com>
> Cc: linux-mm@...ck.org
> Cc: linux-kernel@...r.kernel.org
> ---
>
> Applies to 5.7 and today's mainline
>
> arch/x86/mm/init_64.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 8b5f73f5e207c..906fbdb060748 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -55,6 +55,7 @@
> #include <asm/uv/uv.h>
> #include <asm/setup.h>
> #include <asm/ftrace.h>
> +#include <asm/hypervisor.h>
>
> #include "mm_internal.h"
>
> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
> goto done;
> }
>
> + /*
> + * Use max block size to minimize overhead on bare metal, where
> + * alignment for memory hotplug isn't a concern.
> + */
> + if (hypervisor_is_type(X86_HYPER_NATIVE)) {
> + bz = MAX_BLOCK_SIZE;
> + goto done;
> + }
I'd assume that BIOSes on physical machines with >= 64GB will not align
bigger (>= 2GB) DIMMs to anything < 2GB.
Acked-by: David Hildenbrand <david@...hat.com>
> +
> /* Find the largest allowed block size that aligns to memory end */
> for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1) {
> if (IS_ALIGNED(boot_mem_end, bz))
>
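Back-of-the-envelope for the win here: with 2G blocks, 2 TiB of memory
needs only 1,024 memory block directories instead of the ~16,000 quoted
above (the exact count depends on the machine's memory layout).
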
--
Thanks,
David / dhildenb