Message-ID: <Z_yzshvBmYiPrxU0@gmail.com>
Date: Mon, 14 Apr 2025 09:05:22 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
mingo@...hat.com, luto@...nel.org, peterz@...radead.org,
paulmck@...nel.org, rostedt@...dmis.org, tglx@...utronix.de,
willy@...radead.org, jon.grimm@....com, bharata@....com,
raghavendra.kt@....com, boris.ostrovsky@...cle.com,
konrad.wilk@...cle.com
Subject: Re: [PATCH v3 4/4] x86/folio_zero_user: multi-page clearing
* Ankur Arora <ankur.a.arora@...cle.com> wrote:
> clear_pages_rep(), clear_pages_erms() use string instructions to zero
> memory. When operating on more than a single page, we can use these
> more effectively by explicitly advertising the region-size to the
> processor, which can use that as a hint to optimize the clearing
> (ex. by eliding cacheline allocation.)
>
> As a secondary benefit, string instructions are typically microcoded,
> and working with larger regions helps amortize the cost of the decode.
Not just the decoding, but also iterations around page-sized chunks are
not cheap these days: there are various compiler-generated mitigations
and other overheads that apply on a typical kernel, and using larger
sizes amortizes that per-page-iteration setup cost.
> When zeroing the 2MB page, maximize spatial locality by clearing in
> three sections: the faulting page and its immediate neighbourhood, the
> left and the right regions, with the local neighbourhood cleared last.
s/zeroing the 2MB page
/zeroing a 2MB page
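(For reference, a rough sketch of the ordering that description amounts
to - the helper names and the 'neigh' parameter are made up here for
illustration, not the series' actual API, and clear_pages() stands in
for the rep-string based helper:)

static void clear_huge_page_sections(void *base, unsigned long npages,
				     unsigned long fault_idx,
				     unsigned long neigh)
{
	unsigned long lo = fault_idx > neigh ? fault_idx - neigh : 0;
	unsigned long hi = min(fault_idx + neigh + 1, npages);

	/* Clear the left and the right regions first ... */
	clear_pages(base, lo);
	clear_pages(base + hi * PAGE_SIZE, npages - hi);

	/* ... and the faulting page plus its neighbourhood last, for locality. */
	clear_pages(base + lo * PAGE_SIZE, hi - lo);
}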
> It's not entirely clear why the performance for pg-sz=2MB improves.
> We decode fewer instructions and the hardware prefetcher can do a
> better job, but the perf stats for both of those aren't convincing
> enough to the extent of ~30%.
s/why the performance
/why performance
> For both page-sizes, Icelakex, behaves similarly to Milan pg-sz=2MB: we
> see a drop in cycles but there's no drop in cacheline allocation.
s/Icelakex, behaves similarly
/Icelakex behaves similarly
> Performance for preempt=none|voluntary remains unchanged.
CONFIG_PREEMPT_VOLUNTARY=y is the default on a number of major
distributions, such as Ubuntu, and a lot of enterprise distro kernels -
and this patch does nothing for them, for no good reason.
So could you please provide a sensible size granularity cutoff of 16MB
or so on non-preemptible kernels, instead of this weird build-time
all-or-nothing binary cutoff based on preemption modes?
On preempt=full/lazy the granularity limit would be infinite.
I.e., the only code dependent on the preemption mode should be the size
cutoff/limit.
On full/lazy preemption the code would, ideally, compile to something
close to your current code.
> +obj-$(CONFIG_PREEMPTION) += memory.o
> +#ifndef CONFIG_HIGHMEM
> +/*
> + * folio_zero_user_preemptible(): multi-page clearing variant of folio_zero_user().
We don't care much about HIGHMEM these days, I suppose, but this
dependency still feels wrong. This is a stealth dependency on x86-64,
trying to avoid a new arch Kconfig for this new API, right? ;-)
Thanks,
Ingo