Message-ID: <CAGudoHH7OUHG2HHrjzqkiqgYXzLEtovCptHpxkyVNPwSMHWfrw@mail.gmail.com>
Date: Thu, 3 Apr 2025 16:31:28 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: Kairui Song <ryncsn@...il.com>
Cc: Sweet Tea Dorminy <sweettea-kernel@...miny.me>, Andrew Morton <akpm@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Dennis Zhou <dennis@...nel.org>,
Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>, Martin Liu <liumartin@...gle.com>,
David Rientjes <rientjes@...gle.com>, Christian König <christian.koenig@....com>,
Shakeel Butt <shakeel.butt@...ux.dev>, Johannes Weiner <hannes@...xchg.org>,
Sweet Tea Dorminy <sweettea@...gle.com>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>, Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, Christian Brauner <brauner@...nel.org>,
Wei Yang <richard.weiyang@...il.com>, David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>, Al Viro <viro@...iv.linux.org.uk>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
Yu Zhao <yuzhao@...gle.com>, Roman Gushchin <roman.gushchin@...ux.dev>
Subject: Re: [RFC PATCH v2] mm: use per-numa-node atomics instead of percpu_counters
On Tue, Apr 1, 2025 at 5:27 AM Kairui Song <ryncsn@...il.com> wrote:
>
> On Tue, Apr 1, 2025 at 6:36 AM Sweet Tea Dorminy
> <sweettea-kernel@...miny.me> wrote:
> >
> > [Resend as requested as RFC and minus prereq-patch-id junk]
> >
> > Recently, several internal services had an RSS usage regression as part of a
> > kernel upgrade. Previously, they were on a pre-6.2 kernel and were able to
> > read RSS statistics in a backup watchdog process to monitor and decide if
> > they'd overrun their memory budget. Now, however, a representative service
> > with five threads, expected to use about a hundred MB of memory, on a 250-cpu
> > machine had memory usage tens of megabytes different from the expected amount
> > -- this constituted a significant percentage of inaccuracy, causing the
> > watchdog to act.
> >
> > This was a result of f1a7941243c1 ("mm: convert mm's rss stats into
> > percpu_counter") [1]. Previously, the memory error was bounded by
> > 64*nr_threads pages, a very livable megabyte. Now, however, as a result of
> > scheduler decisions moving the threads around the CPUs, the memory error could
> > be as large as a gigabyte.
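
For a rough sense of those bounds (assuming 4 KB pages and the default
percpu_counter batch of max(32, 2 * nr_online_cpus); the numbers below are
illustrative, not measurements from the report above):

  old per-thread batching:   64 pages/thread * 5 threads * 4 KB   ~= 1.3 MB
  percpu_counter, 250 CPUs:  batch = max(32, 2 * 250) = 500 pages/CPU
                             500 pages * 250 CPUs * 4 KB          ~= 488 MB

and that is per counter; with the file, anon, shmem and swap counters all
drifting at once, the worst case lands in the gigabyte range quoted above.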
> >
> > This is a really tremendous inaccuracy for any few-threaded program on a
> > large machine and impedes monitoring significantly. These stat counters are
> > also used to make OOM killing decisions, so this additional inaccuracy could
> > make a big difference in OOM situations -- either resulting in the wrong
> > process being killed, or in less memory being returned from an OOM-kill than
> > expected.
> >
> > Finally, while the change to percpu_counter does significantly improve the
> > accuracy over the previous per-thread error for many-threaded services, it does
> > also have performance implications - up to 12% slower for short-lived processes
> > and 9% increased system time in make test workloads [2].
> >
> > A previous attempt to address this regression by Peng Zhang [3] used a hybrid
> > approach with delayed allocation of percpu memory for rss_stats, showing
> > promising improvements of 2-4% for process operations and 6.7% for page
> > faults.
> >
> > This RFC takes a different direction by replacing percpu_counters with a
> > more efficient set of per-NUMA-node atomics. The approach:
> >
> > - Uses one atomic per node up to a bound to reduce cross-node updates.
> > - Keeps a similar batching mechanism, with a smaller batch size.
> > - Eliminates the use of a spin lock during batch updates, bounding stat
> > update latency.
> > - Reduces percpu memory usage and thus thread startup time.
> >
> > Most importantly, this bounds the total error to 32 times the number of NUMA
> > nodes, significantly smaller than previous error bounds.
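
For illustration, a minimal sketch of the shape such a per-node counter could
take (the type, helper and constant names here are invented, and the actual
batching in the RFC may differ):

struct mm_node_stat {
	atomic_long_t count[MAX_NUMNODES];
};

/* update path: fold a locally batched delta into the local node's slot */
static inline void mm_node_stat_add(struct mm_node_stat *s, long delta)
{
	atomic_long_add(delta, &s->count[numa_node_id()]);
}

/* read path: walk nr_node_ids entries instead of nr_cpu_ids */
static inline long mm_node_stat_read(struct mm_node_stat *s)
{
	long sum = 0;
	int nid;

	for_each_node(nid)
		sum += atomic_long_read(&s->count[nid]);
	return sum;
}

If each node's slot is allowed to lag by at most one small batch (32 pages),
the total error is bounded by 32 * nr_nodes, which is presumably where the
figure above comes from.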
> >
> > On a 112-core machine, lmbench showed comparable results before and after this
> > patch. However, on a 224 core machine, performance improvements were
> > significant over percpu_counter:
> > - Pagefault latency improved by 8.91%
> > - Process fork latency improved by 6.27%
> > - Process fork/execve latency improved by 6.06%
> > - Process fork/exit latency improved by 6.58%
> >
> > will-it-scale also showed significant improvements on these machines.
> >
> > [1] https://lore.kernel.org/all/20221024052841.3291983-1-shakeelb@google.com/
> > [2] https://lore.kernel.org/all/20230608111408.s2minsenlcjow7q3@quack3/
> > [3] https://lore.kernel.org/all/20240418142008.2775308-1-zhangpeng362@huawei.com/
>
> Hi, thanks for the idea.
>
> I'd like to mention my previous work on this:
> https://lwn.net/ml/linux-kernel/20220728204511.56348-1-ryncsn@gmail.com/
>
> Basically it uses one global percpu counter instead of a per-task one, and
> flushes each CPU's sub-counter on context_switch (if next->active_mm !=
> current->active_mm; no flush for IRQs or kthreads).
> It works more like a percpu stash.
>
> Benchmarks look great and the fast path is super fast (just a
> this_cpu_add). context_switch is also fine because the scheduler
> tries to keep a task on the same CPU to make better use of the cache. And
> it can leverage a CPU bitmap, as TLB shootdown does, to optimize the
> whole thing.
>
> The error and total memory consumption are both lower than current design too.
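
For illustration, roughly the shape of that per-CPU stash (the names below
are invented and the real patchset differs in detail):

/* one stash per CPU, only ever valid for the mm currently running there */
DEFINE_PER_CPU(long, rss_stash[NR_MM_COUNTERS]);

static inline void rss_stash_add(int member, long val)
{
	/* fast path: no atomics, no shared cacheline */
	this_cpu_add(rss_stash[member], val);
}

/* called from the context-switch path when the address space changes */
static void rss_stash_flush(struct mm_struct *mm)
{
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		long val = this_cpu_xchg(rss_stash[i], 0);

		if (val)
			atomic_long_add(val, &mm->rss_stat_exact[i]); /* hypothetical field */
	}
}
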
Note there are two unrelated components in that patchset:
- a per-CPU instance of the rss counters which is rolled up on context
switches, avoiding the costly counter alloc/free on mm
creation/teardown
- CPU iteration in get_mm_counter

The allocation problem is fixable without abandoning the counters; see
my other e-mail (tl;dr: let mms hanging out in slab caches *keep* the
counters). This aspect has to be solved anyway because of mm_alloc_cid().
Providing a way to sort it out covers *both* the rss counters and the
cid thing.

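A very rough sketch of that direction (all names below are hypothetical, and
a real implementation would also need a way to give the percpu memory back
once the slab object itself is released, e.g. via a shrinker):

/* ctor runs once when a slab page is populated, not on every allocation */
static void mm_cachep_ctor(void *obj)
{
	struct mm_struct *mm = obj;

	mm->rss_pcpu = NULL;		/* hypothetical field */
}

static int mm_rss_init(struct mm_struct *mm)
{
	int cpu, i;

	/* reuse the allocation left behind by this slab object's previous mm */
	if (!mm->rss_pcpu) {
		mm->rss_pcpu = __alloc_percpu(NR_MM_COUNTERS * sizeof(long),
					      __alignof__(long));
		return mm->rss_pcpu ? 0 : -ENOMEM;
	}

	for_each_possible_cpu(cpu)
		for (i = 0; i < NR_MM_COUNTERS; i++)
			per_cpu_ptr(mm->rss_pcpu, cpu)[i] = 0;
	return 0;
}

/* mm teardown deliberately does NOT free mm->rss_pcpu */
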
In your patchset the accuracy increase comes at the expense of walking
all CPUs every time, while a big part of the point of using percpu
counters is to have a good-enough approximation somewhere so that this
walk is not necessary.

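For reference, the difference in the read path (the first helper is roughly
what the stock kernel does today; the exact variant is only named here for
illustration):

static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
{
	/* cheap: returns the central approximation, ignores per-CPU residue */
	return percpu_counter_read_positive(&mm->rss_stat[member]);
}

static inline unsigned long get_mm_counter_exact(struct mm_struct *mm, int member)
{
	/* exact, but has to walk every possible CPU */
	return percpu_counter_sum_positive(&mm->rss_stat[member]);
}
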
Indeed, the stock kernel fails to achieve that at the moment, and as you
can see there is discussion on how to tackle it. It is a general percpu
counter problem.

I verified that get_mm_counter is issued on mmap and munmap in particular.
On high-core-count boxes (hundreds of cores) the mandatory all-CPU
walk has to be a problem, especially if a given process is also highly
multi-threaded and mmap/munmap heavy.

Thus I think your patchset would also benefit from some form of
distribution of the counter other than just per-CPU values plus the one
centralized value. At the same time, if RSS accuracy is your only
concern and you don't mind the cost of walking the CPUs, you could
modify the current code to do that instead.

Or to put it differently: while it may be that changing the scheme to
have a local copy makes sense, the patchset is definitely not committable
in the proposed form -- it really wants better-quality caching of
the state.

--
Mateusz Guzik <mjguzik gmail.com>