Message-ID: <aV9IspAnCCM4ukE8@google.com>
Date: Thu, 8 Jan 2026 06:03:30 +0000
From: Bing Jiao <bingjiao@...gle.com>
To: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	David Hildenbrand <david@...nel.org>,
	Michal Hocko <mhocko@...nel.org>,
	Qi Zheng <zhengqi.arch@...edance.com>,
	Shakeel Butt <shakeel.butt@...ux.dev>,
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
	Axel Rasmussen <axelrasmussen@...gle.com>,
	Yuanchu Xie <yuanchu@...gle.com>, Wei Xu <weixugc@...gle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 0/2] mm/vmscan: optimize preferred target demotion
 node selection

On Wed, Jan 07, 2026 at 09:46:52AM -0800, Joshua Hahn wrote:

Hi Joshua,

Thanks for your insights and valuable suggestions!

> On Wed,  7 Jan 2026 07:28:12 +0000 Bing Jiao <bingjiao@...gle.com> wrote:
>
> Hello Bing, thank you for your patch!
>
> I have a few questions about the motivation behind this patch.
>
> > In tiered memory systems, the demotion aims to move cold folios to the
> > far-tier nodes. To maintain system performance, the demotion target
> > should ideally be the node with the shortest NUMA distance from the
> > source node.
> >
> > However, the current implementation has two suboptimal behaviors:
> >
> > 1. Unbalanced Fallback: When the primary preferred demotion node is full,
> >    the allocator falls back to other nodes in a way that often skews
> >    toward zones that are closer to the primary preferred node rather than
> >    distributing the load evenly across fallback nodes.
>
> I definitely think this is a problem that can exist for some workloads /
> machines, and I agree that there should be some mechanism to manage this
> in the demotion code as well. In the context of tiered memory, it might be
> the case that some far nodes have more restricted memory bandwidth, so better
> distribution of memory across those nodes definitely sounds like something
> that should at least be considered (even if it might not be the sole factor).
>
> With that said, I think adding some numbers here to motivate this change could
> definitely make the argument more convincing. In particular, I don't think
> I am fully convinced that doing a full random selection from the demotion
> targets makes the most sense. Maybe there are a few more things to consider,
> like the node's capacity, how full it is, bandwidth, etc. For instance,
> weighted interleave auto-tuning makes a weighted selection based on each
> node's bandwidth.

I agree that a detailed evaluation is necessary. When I initially wrote
this patch, I hadn't fully considered a weighted selection. Using
bandwidth as a weight for demotion target selection makes sense,
and node capacity could serve as another useful heuristic.
However, designing and evaluating a proposal that integrates all
these metrics properly will require more time and study.
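
To make the direction concrete, here is a minimal sketch (illustration
only, not code from this patch) of what a weighted pick over the
allowed demotion nodes could look like if each node carried a weight
derived from its bandwidth or free capacity; node_weight[] and the
candidate mask are assumed inputs:

/*
 * Illustration only: weighted random pick over candidate demotion nodes.
 * node_weight[] is an assumed per-node weight (e.g. derived from bandwidth,
 * as weighted interleave auto-tuning does, or from free capacity).
 */
static int pick_weighted_demotion_node(const nodemask_t *candidates,
                                       const unsigned int *node_weight)
{
        unsigned int total = 0, pick;
        int nid;

        for_each_node_mask(nid, *candidates)
                total += node_weight[nid];
        if (!total)
                return NUMA_NO_NODE;

        pick = get_random_u32_below(total);
        for_each_node_mask(nid, *candidates) {
                if (pick < node_weight[nid])
                        return nid;
                pick -= node_weight[nid];
        }
        return NUMA_NO_NODE;
}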

> At least right now, it seems like we're consistent with how the demotion node
> gets selected when the preferred node is full.
>
> Do your changes lead to a "better" distribution of memory? And does this
> distribution lead to increased performance? I think some numbers here could
> help my understanding and convince others as well :-)

I haven't performed a formal A/B performance test yet. My primary
observation was a significant imbalance in memory pressure: some far
nodes were completely exhausted while others in the same tier remained
half-empty. With this patch, that skewed distribution is mitigated
across nodes within the same tier. I agree that providing numbers would
strengthen the proposal. I will work on gathering those numbers later.
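
For reference, the skew itself is easy to snapshot from userspace; a
small helper like the sketch below (assuming the usual
/sys/devices/system/node/nodeN/meminfo layout and contiguous node IDs)
is enough to compare per-node MemFree before and after a
demotion-heavy run:

/* Userspace sketch: print MemFree for each node under /sys/devices/system/node. */
#include <stdio.h>

int main(void)
{
        int nid;

        for (nid = 0; ; nid++) {
                char path[64];
                unsigned long free_kb;
                FILE *f;

                snprintf(path, sizeof(path),
                         "/sys/devices/system/node/node%d/meminfo", nid);
                f = fopen(path, "r");
                if (!f)
                        break;  /* assumes contiguous node IDs; stop at the first gap */
                /* File starts with "Node N MemTotal: ... kB" then "Node N MemFree: ... kB". */
                if (fscanf(f, "Node %*d MemTotal: %*lu kB Node %*d MemFree: %lu kB",
                           &free_kb) == 1)
                        printf("node%d MemFree: %lu kB\n", nid, free_kb);
                fclose(f);
        }
        return 0;
}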

> > 2. Suboptimal Target Selection: demote_folio_list() randomly selects
> >    a preferred node from the allowed mask, potentially selecting
> >    a very distant node.
>
> Following up, I think it could be helpful to have a unified story about how
> demotion nodes should be selected. In particular, I'm not entirely confident
> if it makes sense to have a "try on the preferred demotion target, and then
> select randomly among all other nodes" story, since these have conflicting
> stories of "prefer close nodes" vs "distribute demotions". To put it explicitly,
> what makes the first demotion target special? Should we just select randomly
> for *all* demotion targets, not just if the preferred node is full?

The "first" target is not particularly special. It is randomly
selected from the tier closest to the source node by
next_demotion_node().
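
Roughly, the current selection boils down to a node_random() pick over
the preferred mask of the source node; simplified sketch below (based
on mm/memory-tiers.c, details may differ across kernel versions):

/* Simplified sketch of the existing behavior in next_demotion_node(). */
int next_demotion_node(int node)
{
        struct demotion_nodes *nd;
        int target;

        if (!node_demotion)
                return NUMA_NO_NODE;

        nd = &node_demotion[node];

        rcu_read_lock();
        /* Pick a random node out of the preferred (nearest-tier) targets. */
        target = node_random(&nd->preferred);
        rcu_read_unlock();

        return target;
}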

Regarding the strategy, my thinking is this: when the far nodes are
mostly empty, preferring the nearest one is optimal. However, as those
nodes approach capacity, consistently targeting the nearest one can
create contention hotspots.

Choosing between "proximity" and "distribution" likely depends on the
current state of the targets. I agree that we need a more comprehensive
study to establish a unified selection policy.
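
As a strawman of the kind of hybrid policy I mean (not part of this
patch), it could look something like the following, where
demotion_has_headroom() is a hypothetical helper standing in for a
watermark or capacity check on the target node:

/*
 * Hypothetical hybrid policy: stay with the nearest target while it has
 * headroom, otherwise spread demotions randomly across the allowed mask.
 * demotion_has_headroom() is an assumed helper, e.g. a watermark check.
 */
static int choose_demotion_target(int preferred, const nodemask_t *allowed)
{
        if (preferred != NUMA_NO_NODE && demotion_has_headroom(preferred))
                return preferred;       /* proximity while space remains */

        return node_random(allowed);    /* otherwise distribute across the tier */
}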

> Sorry if it seems like I am asking too many questions, I just wanted to get
> a better understanding of the motivation behind the patch.
>
> Thank you, and I hope you have a great day!
> Joshua

Thanks for the feedback and suggestions. I realized that my previous
patch ("mm/vmscan: fix demotion targets checks in reclaim/demotion")
is what introduced the "non-preferred node" issue in demote_folio_list().
I am not sure whether the fix belongs in the previous patch series,
but I have just posted a refreshed version of Patch 2/2 in that series.

Thanks,
Bing

