Message-ID: <5827baaf-0eb5-bcea-5d98-727485683512@redhat.com>
Date: Thu, 4 Jun 2020 09:22:03 +0200
From: David Hildenbrand <david@...hat.com>
To: Daniel Jordan <daniel.m.jordan@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...nel.org>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Sistare <steven.sistare@...cle.com>
Subject: Re: [PATCH] x86/mm: use max memory block size with unaligned memory end
On 04.06.20 05:54, Daniel Jordan wrote:
> Some of our servers spend 14 out of the 21 seconds of kernel boot
> initializing memory block sysfs directories and then creating symlinks
> between them and the corresponding nodes. The slowness happens because
> the machines get stuck with the smallest supported memory block size on
> x86 (128M), which results in 16,288 directories to cover the 2T of
> installed RAM, and each of these paths does a linear search of the
> memory blocks for every block id, with atomic ops at each step.
With commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
xarray to accelerate lookup"), merged by Linus today (strange, I thought
this had long been upstream), all linear searches should be gone, so at
least the performance observation in this patch no longer applies.

The memmap init should now consume most of the boot time.
--
Thanks,
David / dhildenb