Message-ID: <877bv6i5ts.fsf@mailhost.krisman.be>
Date: Mon, 01 Dec 2025 10:23:43 -0500
From: Gabriel Krisman Bertazi <krisman@...e.de>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: Jan Kara <jack@...e.cz>, Mathieu Desnoyers
<mathieu.desnoyers@...icios.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Shakeel Butt <shakeel.butt@...ux.dev>,
Michal Hocko <mhocko@...nel.org>, Dennis Zhou <dennis@...nel.org>,
Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...two.org>, Andrew
Morton <akpm@...ux-foundation.org>, David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, "Liam R. Howlett"
<Liam.Howlett@...cle.com>, Vlastimil Babka <vbabka@...e.cz>, Mike
Rapoport <rppt@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 0/4] Optimize rss_stat initialization/teardown for
single-threaded tasks

Mateusz Guzik <mjguzik@...il.com> writes:
> On Fri, Nov 28, 2025 at 9:10 PM Jan Kara <jack@...e.cz> wrote:
>> On Fri 28-11-25 08:30:08, Mathieu Desnoyers wrote:
>> > What would really reduce memory allocation overhead on fork
>> > is to move all those fields into a top level
>> > "struct mm_percpu_struct" as a first step. This would
>> > merge 3 per-cpu allocations into one when forking a new
>> > task.
>> >
>> > Then the second step is to create a mm_percpu_struct
>> > cache to bypass the per-cpu allocator.
>> >
>> > I suspect that by doing just that we'd get most of the
>> > performance benefits provided by the single-threaded special-case
>> > proposed here.
>>
>> I don't think so. Because in the profiles I have been doing for these
>> loads the biggest cost wasn't actually the per-cpu allocation itself but
>> the cost of zeroing the allocated counter for many CPUs (and then the
>> counter summarization on exit) and you're not going to get rid of that with
>> just reshuffling per-cpu fields and adding slab allocator in front.
>>
Hi Mateusz,
> The major claims (by me anyway) are:
> 1. single-threaded operation for fork + exec suffers avoidable
> overhead even without the rss counter problem, which are tractable
> with the same kind of thing which would sort out the multi-threaded
> problem
Agreed, there are more issues in the fork/exec path than just
rss_stat. The rss_stat performance is particularly relevant to us,
though, because it is a clear regression for single-threaded tasks,
introduced in 6.2.
I took the time to test the slab constructor approach with the
/sbin/true microbenchmark. I saw only a 2% gain on that tight loop on
the 80-core machine, which, granted, is an artificial benchmark, but
still a good stressor of the single-threaded case. With this patchset,
I reported a 6% improvement, getting it close to the performance before
the pcpu rss_stat introduction. This is expected: avoiding the pcpu
allocation and initialization altogether for the single-threaded case,
where it is not necessary, will always beat merely speeding up the
allocation (even though that is a worthwhile effort in itself, as
Mathieu pointed out).
> 2. unfortunately there is an increasing number of multi-threaded (and
> often short lived) processes (example: lld, the linker from the llvm
> project; more broadly plenty of things in Rust, where people think
> threading == performance)
I don't agree with this argument, though. Sure, there is an increasing
number of multi-threaded applications, but that is not the relevant
metric; what matters is the amount of single-threaded workloads. One
example is coreutils, which are spawned to death by scripts. I took
care to test the patchset with a full distro on my day-to-day laptop,
and I wasn't surprised to see that the vast majority of forked tasks
never spawn a second thread. The ones that do are most often
long-lived applications, where the cost of mm initialization is far
less relevant to overall system performance. Another data point is
that real-world benchmarks, like kernbench, improve when
single-threaded tasks are special-cased.
> The pragmatic way forward (as I see it anyway) is to fix up the
> multi-threaded thing and see if trying to special case for
> single-threaded case is justifiable afterwards.
>
> Given that the current patchset has to resort to atomics in certain
> cases, there is some error-proneness and runtime overhead associated
> with it going beyond merely checking if the process is
> single-threaded, which puts an additional question mark on it.
I don't get why atomics would make it error-prone. But regarding the
runtime overhead, please note that the main point of this approach is
that the hot path can be handled with a simple non-atomic variable
write in the task context, not an atomic operation. The latter is only
used for the infrequent cases where the counter is touched by an
external task such as the OOM killer, khugepaged, etc.
>
> Now to business:
--
Gabriel Krisman Bertazi