Message-ID: <f8a1fc19-91bb-7f85-301f-6a68ea22b594@intel.com>
Date: Tue, 9 Mar 2021 10:50:51 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 4/4] x86/vmemmap: Optimize for consecutive sections in
partial populated PMDs
On 3/9/21 9:41 AM, Oscar Salvador wrote:
> We can optimize the case where we are adding consecutive sections, since no
> memset(PAGE_UNUSED) is needed then.
> For that case, let us keep track of where the unused range of the previous
> memory range begins, so we can compare it with the start of the range to be
> added.
> If they are equal, we know the sections are being added consecutively.
>
> For that purpose, let us introduce 'unused_pmd_start', which always holds
> the beginning of the unused memory range.
>
> In the case a section does not contiguously follow the previous one, we
> know we need to memset [unused_pmd_start, PMD_BOUNDARY) with PAGE_UNUSED.
>
> This patch is based on a similar patch by David Hildenbrand:
>
> https://lore.kernel.org/linux-mm/20200722094558.9828-10-david@redhat.com/
>
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
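
For reference, the tracking described in the changelog boils down to roughly
the following standalone sketch (this is not the actual arch/x86/mm/init_64.c
code; the helper names use_new_sub_pmd()/mark_unused() and the constants here
are only illustrative):

/*
 * Standalone sketch of the tracking described above: 'unused_pmd_start'
 * remembers where the unused tail of the last partially populated PMD
 * begins.  When the next range starts exactly there, the sections are
 * consecutive and no memset(PAGE_UNUSED) is needed; otherwise the stale
 * tail up to the PMD boundary is marked PAGE_UNUSED first.
 */
#include <stdio.h>

#define PAGE_UNUSED	0xFD			/* marker byte for unused vmemmap space */
#define PMD_SIZE	(2UL << 20)		/* 2 MiB covered by one PMD on x86-64 */
#define PMD_ALIGN_UP(x)	(((x) + PMD_SIZE - 1) & ~(PMD_SIZE - 1))

static unsigned long unused_pmd_start;		/* 0 means "nothing cached" */

/* Stand-in for memset(PAGE_UNUSED) over the vmemmap range [start, end). */
static void mark_unused(unsigned long start, unsigned long end)
{
	printf("memset(0x%x) over [%#lx, %#lx)\n", PAGE_UNUSED, start, end);
}

/* A new sub-PMD range [start, end) of the vmemmap is about to be used. */
static void use_new_sub_pmd(unsigned long start, unsigned long end)
{
	if (unused_pmd_start && unused_pmd_start != start) {
		/* Not consecutive: flush the cached tail up to the PMD boundary. */
		mark_unused(unused_pmd_start, PMD_ALIGN_UP(unused_pmd_start));
	}

	/* Remember where the unused tail now begins. */
	unused_pmd_start = end;
}

int main(void)
{
	use_new_sub_pmd(0x200000, 0x240000);	/* first section: nothing cached yet */
	use_new_sub_pmd(0x240000, 0x280000);	/* consecutive: no memset needed */
	use_new_sub_pmd(0x400000, 0x440000);	/* gap: previous tail gets marked */
	return 0;
}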
This is much clearer now. Thanks!
Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>