Message-Id: <20251121100308.65b36af9e090a78a66144c6c@linux-foundation.org>
Date: Fri, 21 Nov 2025 10:03:08 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: linux-kernel@...r.kernel.org, "Paul E. McKenney" <paulmck@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>, Masami Hiramatsu
<mhiramat@...nel.org>, Dennis Zhou <dennis@...nel.org>, Tejun Heo
<tj@...nel.org>, Christoph Lameter <cl@...ux.com>, Martin Liu
<liumartin@...gle.com>, David Rientjes <rientjes@...gle.com>,
christian.koenig@....com, Shakeel Butt <shakeel.butt@...ux.dev>, SeongJae
Park <sj@...nel.org>, Michal Hocko <mhocko@...e.com>, Johannes Weiner
<hannes@...xchg.org>, Sweet Tea Dorminy <sweettea-kernel@...miny.me>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, "Liam R . Howlett"
<liam.howlett@...cle.com>, Mike Rapoport <rppt@...nel.org>, Suren
Baghdasaryan <surenb@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
Christian Brauner <brauner@...nel.org>, Wei Yang
<richard.weiyang@...il.com>, David Hildenbrand <david@...hat.com>, Miaohe
Lin <linmiaohe@...wei.com>, Al Viro <viro@...iv.linux.org.uk>,
linux-mm@...ck.org, linux-trace-kernel@...r.kernel.org, Yu Zhao
<yuzhao@...gle.com>, Roman Gushchin <roman.gushchin@...ux.dev>, Mateusz
Guzik <mjguzik@...il.com>, Matthew Wilcox <willy@...radead.org>, Baolin
Wang <baolin.wang@...ux.alibaba.com>, Aboorva Devarajan
<aboorvad@...ux.ibm.com>
Subject: Re: [PATCH v9 1/2] lib: Introduce hierarchical per-cpu counters
On Thu, 20 Nov 2025 16:03:53 -0500 Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> * Motivation
>
> The purpose of this hierarchical split-counter scheme is to:
>
> - Minimize contention when incrementing and decrementing counters,
> - Provide fast access to a sum approximation,
> - Provide a sum approximation with an acceptable accuracy level when
> scaling to many-core systems,
> - Provide approximate and precise comparison of two counters, and
> between a counter and a value.
>
> It aims at fixing the per-mm RSS tracking which has become too
> inaccurate for OOM killer purposes on large many-core systems [1].
Presentation nit: the info at [1] is rather well hidden until one reads
the [2/2] changelog. You might want to move that material into the
[0/N] - after all, it's the entire point of the patchset.
> * Design
>
> The hierarchical per-CPU counters propagate a sum approximation through
> an N-way tree. When a per-CPU counter reaches the batch size, the carry
> is propagated up the tree, which consists of logN(nr_cpu_ids) levels.
> The batch size for each level is twice the batch size of the prior
> level.
>
> Example propagation diagram with 8 cpus through a binary tree:
>
> Level 0:  0    1    2    3    4    5    6    7
>           |   /     |   /     |   /     |   /
>           |  /      |  /      |  /      |  /
>           | /       | /       | /       | /
> Level 1:  0         1         2         3
>           |       /           |       /
>           |    /              |    /
>           | /                 | /
> Level 2:  0                   1
>           |               /
>           |        /
>           | /
> Level 3:  0
>
> For a binary tree, the maximum inaccuracy is bounded by:
>   batch_size * log2(nr_cpus) * nr_cpus
> which grows as O(n*log(n)) as the number of CPUs increases.
>
> For an N-way tree, the maximum inaccuracy can be pre-calculated
> based on the N-arity of each level and the batch size.
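To check my understanding of that bound: with a batch size of 32 on an
8-CPU machine using a binary tree, that's 32 * log2(8) * 8 = 32 * 3 * 8
= 768, i.e. the approximate sum can drift by up to 768 from the precise
sum.  Correct?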
Looks very neat.
Have you identified other parts of the kernel which could use this?
> include/linux/percpu_counter_tree.h | 239 +++++++++++++++
> init/main.c | 2 +
> lib/Makefile | 1 +
> lib/percpu_counter_tree.c | 443 ++++++++++++++++++++++++++++
> 4 files changed, 685 insertions(+)
> create mode 100644 include/linux/percpu_counter_tree.h
> create mode 100644 lib/percpu_counter_tree.c
An in-kernel test suite would be great. Like lib/*test*.c or
tools/testing/.
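Something KUnit-based, perhaps.  A rough sketch of the sort of thing I
mean - note that percpu_counter_tree_init() and
percpu_counter_tree_destroy() below are made-up names/signatures,
substitute whatever the real setup/teardown API ends up being:

#include <kunit/test.h>
#include <linux/percpu_counter_tree.h>

static void counter_tree_basic(struct kunit *test)
{
	struct percpu_counter_tree c;
	int i;

	/* Hypothetical init: counter, batch size, gfp flags. */
	KUNIT_ASSERT_EQ(test, percpu_counter_tree_init(&c, 32, GFP_KERNEL), 0);

	for (i = 0; i < 1000; i++)
		percpu_counter_tree_add(&c, 1);

	/* The precise sum must be exact. */
	KUNIT_EXPECT_EQ(test, percpu_counter_tree_precise_sum(&c), 1000);

	/* The approximate sum must stay within the advertised bound. */
	KUNIT_EXPECT_LE(test,
			abs(percpu_counter_tree_approximate_sum(&c) - 1000),
			(int)c.inaccuracy);

	percpu_counter_tree_destroy(&c);	/* hypothetical teardown */
}

static struct kunit_case counter_tree_cases[] = {
	KUNIT_CASE(counter_tree_basic),
	{}
};

static struct kunit_suite counter_tree_suite = {
	.name = "percpu_counter_tree",
	.test_cases = counter_tree_cases,
};
kunit_test_suite(counter_tree_suite);

Even something that basic would catch the carry propagation breaking the
agreement between the precise and approximate sums.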
> diff --git a/include/linux/percpu_counter_tree.h b/include/linux/percpu_counter_tree.h
> new file mode 100644
> index 000000000000..1f4938b67730
> --- /dev/null
> +++ b/include/linux/percpu_counter_tree.h
> @@ -0,0 +1,239 @@
> +/* SPDX-License-Identifier: GPL-2.0+ OR MIT */
> +/* SPDX-FileCopyrightText: 2025 Mathieu Desnoyers <mathieu.desnoyers@...icios.com> */
> +
> +#ifndef _PERCPU_COUNTER_TREE_H
> +#define _PERCPU_COUNTER_TREE_H
> +
> +#include <linux/preempt.h>
> +#include <linux/atomic.h>
> +#include <linux/percpu.h>
> +
> +#ifdef CONFIG_SMP
> +
> +struct percpu_counter_tree_level_item {
> + atomic_t count;
> +} ____cacheline_aligned_in_smp;
> +
> +struct percpu_counter_tree {
> + /* Fast-path fields. */
> + unsigned int __percpu *level0;
> + unsigned int level0_bit_mask;
> + union {
> + unsigned int *i;
> + atomic_t *a;
> + } approx_sum;
> + int bias; /* bias for counter_set */
> +
> + /* Slow-path fields. */
> + struct percpu_counter_tree_level_item *items;
> + unsigned int batch_size;
> + unsigned int inaccuracy; /* approximation imprecise within ± inaccuracy */
> +};
I find that understanding the data structure leads to understanding the
code, so additional documentation for the various fields would be
helpful.
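Something like the below, perhaps - these are my guesses at the
semantics from reading the code, so please fix up whatever I got wrong:

struct percpu_counter_tree {
	/* Fast-path fields. */
	unsigned int __percpu *level0;	/* per-CPU level-0 counters */
	unsigned int level0_bit_mask;	/* carry bit for level 0 (0 if the tree has a single level) */
	union {
		unsigned int *i;	/* single level: the lone level-0 counter */
		atomic_t *a;		/* multiple levels: the root item's count */
	} approx_sum;			/* where the approximate sum is read from */
	int bias;			/* bias applied on top of the tree sum, for counter_set */

	/* Slow-path fields. */
	struct percpu_counter_tree_level_item *items;	/* intermediate and root tree nodes */
	unsigned int batch_size;	/* level-0 batch size (power of 2) */
	unsigned int inaccuracy;	/* approximation imprecise within ± inaccuracy */
};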
> +
> +static inline
> +int percpu_counter_tree_carry(int orig, int res, int inc, unsigned int bit_mask)
> +{
> ...
> +}
> +
> +static inline
> +void percpu_counter_tree_add(struct percpu_counter_tree *counter, int inc)
> +{
> ...
> +}
> +
> +static inline
> +int percpu_counter_tree_approximate_sum(struct percpu_counter_tree *counter)
> +{
> ...
> +}
These are pretty large after all the nested inlining is expanded. Are
you sure that inlining them is the correct call?
> +#else /* !CONFIG_SMP */
> +
>
> ...
>
> +#include <linux/percpu_counter_tree.h>
> +#include <linux/cpumask.h>
> +#include <linux/percpu.h>
> +#include <linux/atomic.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/math.h>
> +
> +#define MAX_NR_LEVELS 5
> +
> +struct counter_config {
> + unsigned int nr_items;
> + unsigned char nr_levels;
> + unsigned char n_arity_order[MAX_NR_LEVELS];
> +};
> +
> +/*
> + * nr_items is the number of items in the tree for levels 1 to and
> + * including the final level (approximate sum). It excludes the level 0
> + * per-cpu counters.
> + */
That's referring to counter_config.nr_items? Comment appears to be
misplaced.
>
> ...
>
> +static
> +int __percpu_counter_tree_init(struct percpu_counter_tree *counter,
> + unsigned int batch_size, gfp_t gfp_flags,
> + unsigned int __percpu *level0,
> + struct percpu_counter_tree_level_item *items)
> +{
> + /* Batch size must be power of 2 */
> + if (!batch_size || (batch_size & (batch_size - 1)))
> + return -EINVAL;
It's a bug, yes? Worth a WARN?
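Maybe

	if (WARN_ON_ONCE(!is_power_of_2(batch_size)))
		return -EINVAL;

(is_power_of_2() from <linux/log2.h> already rejects zero), so a bad
caller at least leaves a splat in the log.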
> + counter->batch_size = batch_size;
> + counter->bias = 0;
> + counter->level0 = level0;
> + counter->items = items;
> + if (!nr_cpus_order) {
> + counter->approx_sum.i = per_cpu_ptr(counter->level0, 0);
> + counter->level0_bit_mask = 0;
> + } else {
> + counter->approx_sum.a = &counter->items[counter_config->nr_items - 1].count;
> + counter->level0_bit_mask = 1UL << get_count_order(batch_size);
> + }
> + counter->inaccuracy = batch_size * inaccuracy_multiplier;
> + return 0;
> +}
> +
>
> ...
>
> +int percpu_counter_tree_precise_sum(struct percpu_counter_tree *counter)
> +{
> + return percpu_counter_tree_precise_sum_unbiased(counter) + READ_ONCE(counter->bias);
> +}
> +
> +/*
> + * Do an approximate comparison of two counters.
> + * Return 0 if counters do not differ by more than the sum of their
> + * respective inaccuracy ranges,
> + * Return -1 if counter @a less than counter @b,
> + * Return 1 if counter @a is greater than counter @b.
> + */
It would be nice to kerneldocify the exported API.
Some fairly detailed words explaining the pros and cons of precise vs
approximate would be helpful to people who are using this API.
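For instance, for the comparison function below, something along these
lines (wording is mine, adjust as needed):

/**
 * percpu_counter_tree_approximate_compare - approximately compare two tree counters
 * @a: first counter
 * @b: second counter
 *
 * Return: 0 if the counters do not differ by more than the sum of their
 * respective inaccuracy ranges, -1 if @a is less than @b, 1 if @a is
 * greater than @b.
 */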
> +int percpu_counter_tree_approximate_compare(struct percpu_counter_tree *a, struct percpu_counter_tree *b)
> +{
> + int count_a = percpu_counter_tree_approximate_sum(a),
> + count_b = percpu_counter_tree_approximate_sum(b);
> +
> + if (abs(count_a - count_b) <= (a->inaccuracy + b->inaccuracy))
> + return 0;
> + if (count_a < count_b)
> + return -1;
> + return 1;
> +}
> +
>
> ...
>
> +static unsigned int __init calculate_inaccuracy_multiplier(void)
> +{
> + unsigned int nr_levels = counter_config->nr_levels, level;
> + unsigned int level_items = 1U << nr_cpus_order;
> + unsigned int inaccuracy = 0, batch_size = 1;
> +
> + for (level = 0; level < nr_levels; level++) {
> + unsigned int n_arity_order = counter_config->n_arity_order[level];
> +
> + inaccuracy += batch_size * level_items;
> + batch_size <<= n_arity_order;
> + level_items >>= n_arity_order;
> + }
> + return inaccuracy;
> +}
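Sanity-checking this against the changelog's bound: for 8 CPUs, assuming
the (elided) per_nr_cpu_order_config entry describes a binary tree
(three levels with n_arity_order == 1), the loop accumulates
1*8 + 2*4 + 4*2 = 24 = 8 * log2(8), which matches the
batch_size * log2(nr_cpus) * nr_cpus bound once scaled by the batch size
in __percpu_counter_tree_init().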
> +
> +int __init percpu_counter_tree_subsystem_init(void)
I'm not sure that the "subsystem_" adds any value.
> +{
> +
> + nr_cpus_order = get_count_order(nr_cpu_ids);
Stray newline.
> + if (WARN_ON_ONCE(nr_cpus_order >= ARRAY_SIZE(per_nr_cpu_order_config))) {
> + printk(KERN_ERR "Unsupported number of CPUs (%u)\n", nr_cpu_ids);
> + return -1;
> + }
> + counter_config = &per_nr_cpu_order_config[nr_cpus_order];
> + inaccuracy_multiplier = calculate_inaccuracy_multiplier();
> + return 0;
> +}