Message-Id: <20250623165004.43394-1-sj@kernel.org>
Date: Mon, 23 Jun 2025 09:50:04 -0700
From: SeongJae Park <sj@...nel.org>
To: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: SeongJae Park <sj@...nel.org>,
Bijan Tabatabai <bijan311@...il.com>,
damon@...ts.linux.dev,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org,
david@...hat.com,
ziy@...dia.com,
matthew.brost@...el.com,
rakie.kim@...com,
byungchul@...com,
gourry@...rry.net,
ying.huang@...ux.alibaba.com,
apopple@...dia.com,
bijantabatab@...ron.com,
venkataravis@...ron.com,
emirakhur@...ron.com,
ajayjoshi@...ron.com,
vtavarespetr@...ron.com
Subject: Re: [RFC PATCH v2 2/2] mm/damon/paddr: Allow multiple migrate targets
On Mon, 23 Jun 2025 07:08:07 -0700 Joshua Hahn <joshua.hahnjy@...il.com> wrote:
> On Sat, 21 Jun 2025 11:11:27 -0700 SeongJae Park <sj@...nel.org> wrote:
>
> > On Sat, 21 Jun 2025 11:02:15 -0700 SeongJae Park <sj@...nel.org> wrote:
> >
> > [...]
> > > I'd hence suggest implementing and using a simple weights handling mechanism
> > > here. It could be a round-robin way, like weighted interleaving, or a probabilistic
> > > way, using damon_rand().
> > >
> > > The round-robin way may be simpler in my opinion. For example,
>
> [...snip...]
>
> > Actually, the probabilistic way may not be that complicated. Maybe we could do
> > the below here.
>
> [...snip...]
>
> > But damon_rand() might be more expensive than the round-robin way, and arguably
> > the round-robin way is what users who are familiar with weighted interleaving may
> > easily expect and even prefer? I have no preference here.
>
> Hi SJ,
>
> If you have no preference here, I would like to add some thoughts :-)
>
[...]
> I think that code complexity aside, round-robin may be the better choice for
> a few reasons. Like you mentioned, I think it is what users might be used to,
> if they are coming from weighted interleave code. Also, I think a round-robin
> way will prevent worst-case scenarios where we get a long stretch of allocations
> on the "wrong" node (but maybe this isn't a big deal, since it is so unlikely).
>
> Finally -- if we run workloads with mempolicy set to weighted interleave
> *and* with the weights already set, then pages will be allocated in a
> round-robin fashion. I think it may be best to try and minimize migration costs
> by trying to keep these weights in-sync. That is, if we have a 2:1 ratio,
> we will have the following allocation:
>
> node0 | oo oo oo oo oo oo oo ...
> node1 | o o o o o o ...
>
> Using a probabilistic migration, it might change the pattern:
>
> node0 | oooo oo o ooo oo ...
> node1 | oo o o o o ...
>
> That is, the ratio might be preserved, but we may be doing unnecessary
> migrations, since a probabilistic allocation isn't aware of any underlying
> patterns. With a round-robin allocation, we have a 1/total_weight chance that
> there will be no additional migrations, depending on where the round-robin
> begins. I also want to note that weighted interleave auto-tuning is written
> to minimize total_weight.
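
[Editor's note: the "keep the patterns in sync" argument above can be checked
with a small user-space sketch. This is illustrative Python under assumed
semantics, not kernel code; `weighted_rr_pattern` is a hypothetical helper,
and a real migration pass would additionally need to discover the right
starting offset, which is exactly the open question raised below.]

```python
def weighted_rr_pattern(weights, n):
    """Generate the node sequence produced by weighted round-robin
    with the given per-node weights (e.g. {0: 2, 1: 1} for a 2:1 ratio)."""
    schedule = [node for node, w in weights.items() for _ in range(w)]
    return [schedule[i % len(schedule)] for i in range(n)]

# Pages allocated by weighted interleave with a 2:1 ratio...
alloc = weighted_rr_pattern({0: 2, 1: 1}, 12)

# ...already match a migration pass that walks the same round-robin
# schedule from the same starting offset, so no page needs to move.
migrate = weighted_rr_pattern({0: 2, 1: 1}, 12)
moved = sum(a != m for a, m in zip(alloc, migrate))
```

If the migration pass instead started at a different offset into the
schedule, up to total_weight distinct alignments are possible, which matches
the 1/total_weight chance of a no-migration outcome described above.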
>
> I'm wondering what you think about this. Perhaps there is a way to know where
> the "beginning" of round-robin should begin, so that we try to keep the
> allocation & migration pattern as in-sync as possible? I have a suspicion
> that I am way over-thinking this, and none of this really has a tangible
> impact on performance as well ;)
The theory makes sense to me.  I'm also not very sure how much visible difference
it will make on large-scale real workloads, though.  Since at least the theory
makes sense and we see no risk, I think taking the round-robin approach would
be the saner action, unless we find other opinions or test results.
>
> Thank you as always SJ, have a great day!!
Thank you, you too!
Thanks,
SJ
[...]