Message-ID: <87jz7mx75r.fsf@oracle.com>
Date: Mon, 14 Apr 2025 23:36:48 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Ankur Arora <ankur.a.arora@...cle.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, x86@...nel.org, torvalds@...ux-foundation.org,
        akpm@...ux-foundation.org, bp@...en8.de, dave.hansen@...ux.intel.com,
        hpa@...or.com, mingo@...hat.com, luto@...nel.org, peterz@...radead.org,
        paulmck@...nel.org, rostedt@...dmis.org, tglx@...utronix.de,
        willy@...radead.org, jon.grimm@....com, bharata@....com,
        raghavendra.kt@....com, boris.ostrovsky@...cle.com,
        konrad.wilk@...cle.com
Subject: Re: [PATCH v3 4/4] x86/folio_zero_user: multi-page clearing


Ingo Molnar <mingo@...nel.org> writes:

> * Ankur Arora <ankur.a.arora@...cle.com> wrote:
>
>> clear_pages_rep(), clear_pages_erms() use string instructions to zero
>> memory. When operating on more than a single page, we can use these
>> more effectively by explicitly advertising the region-size to the
>> processor, which can use that as a hint to optimize the clearing
>> (ex. by eliding cacheline allocation.)
>>
>> As a secondary benefit, string instructions are typically microcoded,
>> and working with larger regions helps amortize the cost of the decode.
>
> Not just the decoding, but also iterations around page-sized chunks are
> not cheap these days: there's various compiler generated mitigations
> and other overhead that applies on a typical kernel, and using larger
> sizes amortizes that per-page-iteration setup cost.

Thanks. Yeah, I had completely forgotten that even the cost of returns
has gone up in the mitigation era :D.

Is retbleed the mitigation you were alluding to, or are there others
that would apply here as well?

>> When zeroing the 2MB page, maximize spatial locality by clearing in
>> three sections: the faulting page and its immediate neighbourhood, the
>> left and the right regions, with the local neighbourhood cleared last.
>
> s/zeroing the 2MB page
>  /zeroing a 2MB page
>
>
>> It's not entirely clear why the performance for pg-sz=2MB improves.
>> We decode fewer instructions and the hardware prefetcher can do a
>> better job, but the perf stats for both of those aren't convincing
>> enough to the extent of ~30%.
>
> s/why the performance
>  /why performance
>
>> For both page-sizes, Icelakex, behaves similarly to Milan pg-sz=2MB: we
>> see a drop in cycles but there's no drop in cacheline allocation.
>
> s/Icelakex, behaves similarly
>  /Icelakex behaves similarly

Ack to all of the above.

>> Performance for preempt=none|voluntary remains unchanged.
>
> CONFIG_PREEMPT_VOLUNTARY=y is the default on a number of major
> distributions, such as Ubuntu, and a lot of enterprise distro kernels -
> and this patch does nothing for them, for no good reason.
> So could you please provide a sensible size granularity cutoff of 16MB
> or so on non-preemptible kernels, instead of this weird build-time
> all-or-nothing binary cutoff based on preemption modes?

So, the reason for associating this with preemption modes was in part
the difficulty of deciding on a sensible granularity cutoff.

I had tried a variety of chunking approaches in an earlier version,
which turned into a bit of a mess:
https://lore.kernel.org/lkml/20220606203725.1313715-11-ankur.a.arora@oracle.com/.

Fixed-size chunking should be straightforward enough. However, 16MB is
around 1.6ms if you zero at 10 GB/s, and longer still on older
hardware.

> On preempt=full/lazy the granularity limit would be infinite.
>
> I.e the only code dependent on the preemption mode should be the size
> cutoff/limit.
> On full/lazy preemption the code would, ideally, compile to something
> close to your current code.

Yeah, agree.

>> +obj-$(CONFIG_PREEMPTION)	+= memory.o
>
>> +#ifndef CONFIG_HIGHMEM
>> +/*
>> + * folio_zero_user_preemptible(): multi-page clearing variant of folio_zero_user().
>
> We don't care much about HIGHMEM these days I suppose, but this
> dependency still feels wrong. Is this a stealth dependency on x86-64,
> trying to avoid a new arch Kconfig for this new API, right? ;-)

Alas, nothing so crafty :). HIGHMEM means we need to map each page of a
hugepage folio via kmap_local_page() -- so we cannot treat the folio as
contiguous memory, and thus cannot use REP STOS on it.

I guess the CONFIG_HIGHMEM condition clearly warrants a comment.

--
ankur
