Message-ID: <0311c2e5-aa27-f59e-cd00-0c51332b73fd@redhat.com>
Date: Wed, 10 Jun 2020 09:30:00 +0200
From: David Hildenbrand <david@...hat.com>
To: Daniel Jordan <daniel.m.jordan@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...nel.org>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Sistare <steven.sistare@...cle.com>
Subject: Re: [PATCH v2] x86/mm: use max memory block size on bare metal
On 10.06.20 09:20, David Hildenbrand wrote:
> On 10.06.20 00:54, Daniel Jordan wrote:
>> Some of our servers spend significant time at kernel boot initializing
>> memory block sysfs directories and then creating symlinks between them
>> and the corresponding nodes. The slowness happens because the machines
>> get stuck with the smallest supported memory block size on x86 (128M),
>> which results in 16,288 directories to cover the 2T of installed RAM.
>> The search for each memory block is noticeable even with
>> commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
>> xarray to accelerate lookup").
>>
>> Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
>> on the end of boot memory") chooses the block size based on alignment
>> with memory end. That addresses hotplug failures in qemu guests, but
>> for bare metal systems whose memory end isn't aligned to even the
>> smallest size, it leaves them at 128M.
>>
>> Make kernels that aren't running on a hypervisor use the largest
>> supported size (2G) to minimize overhead on big machines. Kernel boot
>> goes 7% faster on the aforementioned servers, shaving off half a second.
>>
>> Signed-off-by: Daniel Jordan <daniel.m.jordan@...cle.com>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: Andy Lutomirski <luto@...nel.org>
>> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
>> Cc: David Hildenbrand <david@...hat.com>
>> Cc: Michal Hocko <mhocko@...nel.org>
>> Cc: Pavel Tatashin <pasha.tatashin@...een.com>
>> Cc: Peter Zijlstra <peterz@...radead.org>
>> Cc: Steven Sistare <steven.sistare@...cle.com>
>> Cc: linux-mm@...ck.org
>> Cc: linux-kernel@...r.kernel.org
>> ---
>>
>> Applies to 5.7 and today's mainline
>>
>> arch/x86/mm/init_64.c | 10 ++++++++++
>> 1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>> index 8b5f73f5e207c..906fbdb060748 100644
>> --- a/arch/x86/mm/init_64.c
>> +++ b/arch/x86/mm/init_64.c
>> @@ -55,6 +55,7 @@
>> #include <asm/uv/uv.h>
>> #include <asm/setup.h>
>> #include <asm/ftrace.h>
>> +#include <asm/hypervisor.h>
>>
>> #include "mm_internal.h"
>>
>> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
>> goto done;
>> }
>>
>> + /*
>> + * Use max block size to minimize overhead on bare metal, where
>> + * alignment for memory hotplug isn't a concern.
>> + */
>> + if (hypervisor_is_type(X86_HYPER_NATIVE)) {
>> + bz = MAX_BLOCK_SIZE;
>> + goto done;
>> + }
>
> I'd assume that BIOSes on physical machines with >= 64GB will not
> align bigger (>= 2GB) DIMMs to something < 2GB.
>
> Acked-by: David Hildenbrand <david@...hat.com>
FWIW, setup_arch() does the init_hypervisor_platform() call. I assume
that should be early enough.
We should really look into factoring out memory_block_size_bytes() into
common code, turning it into a simple global variable read. Then, we
should provide an interface to configure the memory block size during
boot from arch code (set_memory_block_size()).
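Roughly something like this (untested userspace sketch of the idea, not
actual kernel code; the setter name and locking-free single-write
assumption are illustrative):

```c
/*
 * Sketch of the proposed common-code interface: the memory block size
 * becomes a single global, written once from arch code during early
 * boot, so memory_block_size_bytes() turns into a plain variable read
 * instead of a per-arch callback. Names are illustrative.
 */
static unsigned long memory_block_size;

/* Arch code would call this before any memory block devices exist. */
void set_memory_block_size(unsigned long size)
{
	memory_block_size = size;
}

/* Would replace the per-arch memory_block_size_bytes() definitions. */
unsigned long memory_block_size_bytes(void)
{
	return memory_block_size;
}
```

x86 could then keep its probing logic but just feed the result into the
setter, and architectures without special requirements would simply not
call it and get the default.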
--
Thanks,
David / dhildenb