Message-ID: <67bdfd38-1acf-4b90-9e34-ce752632ddb1@efficios.com>
Date: Wed, 14 Jan 2026 14:19:38 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
 "Paul E. McKenney" <paulmck@...nel.org>, Steven Rostedt
 <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>,
 Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
 Christoph Lameter <cl@...ux.com>, Martin Liu <liumartin@...gle.com>,
 David Rientjes <rientjes@...gle.com>, christian.koenig@....com,
 Shakeel Butt <shakeel.butt@...ux.dev>, SeongJae Park <sj@...nel.org>,
 Johannes Weiner <hannes@...xchg.org>,
 Sweet Tea Dorminy <sweettea-kernel@...miny.me>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 "Liam R . Howlett" <liam.howlett@...cle.com>, Mike Rapoport
 <rppt@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>,
 Vlastimil Babka <vbabka@...e.cz>, Christian Brauner <brauner@...nel.org>,
 Wei Yang <richard.weiyang@...il.com>, David Hildenbrand <david@...hat.com>,
 Miaohe Lin <linmiaohe@...wei.com>, Al Viro <viro@...iv.linux.org.uk>,
 linux-mm@...ck.org, linux-trace-kernel@...r.kernel.org,
 Yu Zhao <yuzhao@...gle.com>, Roman Gushchin <roman.gushchin@...ux.dev>,
 Mateusz Guzik <mjguzik@...il.com>, Matthew Wilcox <willy@...radead.org>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>,
 Aboorva Devarajan <aboorvad@...ux.ibm.com>
Subject: Re: [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters

On 2026-01-14 11:41, Michal Hocko wrote:
> 
> One thing you should probably mention here is the memory consumption of
> the structure.
Good point.

The most important parts are the per-cpu counters and the tree items
which propagate the carry.

In the proposed implementation, the per-cpu counters are allocated
within per-cpu data structures, so they end up using:

   nr_possible_cpus * sizeof(unsigned long)

In addition, the tree items are appended at the end of the mm_struct.
The number of those items is given by the "nr_items" field of the
per_nr_cpu_order_config table.

Each item is aligned on cacheline size (typically 64 bytes) to minimize
false sharing.

Here is the footprint for a few nr_cpus on a 64-bit arch:

nr_cpus   percpu counters (bytes)   nr_items   items size (bytes)   total (bytes)
      2                        16          1                  64              80
      4                        32          3                 192             224
      8                        64          7                 448             512
     64                       512         21                1344            1856
    128                      1024         21                1344            2368
    256                      2048         37                2368            4416
    512                      4096         73                4672            8768
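
For reference, here is a small user-space sketch reproducing these totals.
It only illustrates the arithmetic above and is not kernel code: the helper
name hpcc_footprint is made up, a 64-byte cacheline is assumed, and the
nr_items values are simply taken from the table.

#include <stdio.h>

/* Assumed cacheline size; the kernel would use L1_CACHE_BYTES. */
#define CACHELINE_BYTES 64UL

/* Footprint model: per-cpu longs plus cacheline-aligned tree items. */
static unsigned long hpcc_footprint(unsigned long nr_cpus,
                                    unsigned long nr_items)
{
        unsigned long percpu_bytes = nr_cpus * sizeof(unsigned long);
        unsigned long items_bytes = nr_items * CACHELINE_BYTES;

        return percpu_bytes + items_bytes;
}

int main(void)
{
        /* { nr_cpus, nr_items } pairs from the table above. */
        static const unsigned long config[][2] = {
                { 2, 1 }, { 4, 3 }, { 8, 7 }, { 64, 21 },
                { 128, 21 }, { 256, 37 }, { 512, 73 },
        };
        unsigned long i;

        for (i = 0; i < sizeof(config) / sizeof(config[0]); i++)
                printf("%lu cpus -> %lu bytes\n", config[i][0],
                       hpcc_footprint(config[i][0], config[i][1]));
        return 0;
}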

There are of course various trade-offs we can make here. We can:

* Increase the n-arity of the intermediate items to shrink the nr_items
   required for a given nr_cpus. This increases contention on carry
   propagation, since more cores share each intermediate item (a rough
   item-count model is sketched after this list).

* Remove cacheline alignment of intermediate tree items. This will
   shrink the memory needed for tree items, but will increase false
   sharing.

* Represent intermediate tree items as a byte rather than a long.
   This further reduces the memory required for intermediate tree
   items, but further increases false sharing.

* Represent per-cpu counters as bytes rather than longs. This makes
   the "sum" operation trickier, because it needs to iterate over the
   intermediate carry-propagation nodes as well and synchronize with
   ongoing "tree add" operations. It further reduces memory use.

* Implement a custom strided allocator for the intermediate items'
   carry-propagation bytes. This shares cachelines across different tree
   instances while keeping good locality: all accesses from a given
   location in the machine topology touch the same cacheline for the
   various tree instances. This adds complexity, but provides compactness
   as well as minimal false sharing.
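
As a rough illustration of the first option, the following hypothetical
model counts the intermediate carry-propagation items of a uniform d-ary
tree covering nr_cpus leaf counters. It is not the actual kernel code and
does not necessarily reproduce the per_nr_cpu_order_config values (which
may use a different fan-in per level), but it shows the trend: for 512
CPUs it gives 511 items at arity 2 versus 73 at arity 8.

/*
 * Hypothetical model, not the kernel implementation: number of
 * intermediate carry-propagation items of a uniform d-ary tree
 * covering nr_cpus leaf counters. Raising the arity shrinks the
 * item count, at the cost of more CPUs contending on each item.
 */
static unsigned long dary_nr_items(unsigned long nr_cpus, unsigned long arity)
{
        unsigned long nr_items = 0;
        unsigned long level = (nr_cpus + arity - 1) / arity;

        while (level > 1) {
                nr_items += level;
                level = (level + arity - 1) / arity;
        }
        return nr_items + 1;    /* plus the root item */
}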

Compared to this, the upstream percpu counters use a 32-bit integer per-cpu
(4 bytes), and accumulate within a 64-bit global value.
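
For comparison, the upstream structure looks roughly like this (trimmed;
see include/linux/percpu_counter.h for the authoritative definition):

struct percpu_counter {
        raw_spinlock_t lock;
        s64 count;                      /* 64-bit global accumulator */
#ifdef CONFIG_HOTPLUG_CPU
        struct list_head list;          /* all percpu_counters are on a list */
#endif
        s32 __percpu *counters;         /* one 32-bit counter per CPU */
};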

So yes, the current hpcc implementation adds an extra memory footprint,
but if that turns out to be an issue we have various options to consider
for reducing it.

Is it OK if I add this discussion to the commit message, or should it
also be added to the high-level design doc in
Documentation/core-api/percpu-counter-tree.rst?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
