[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <20250312081836.665-1-rakie.kim@sk.com>
Date: Wed, 12 Mar 2025 17:18:21 +0900
From: Rakie Kim <rakie.kim@...com>
To: Gregory Price <gourry@...rry.net>
Cc: akpm@...ux-foundation.org,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
linux-cxl@...r.kernel.org,
joshua.hahnjy@...il.com,
dan.j.williams@...el.com,
ying.huang@...ux.alibaba.com,
kernel_team@...ynix.com,
honggyu.kim@...com,
yunjeong.mun@...com,
Rakie Kim <rakie.kim@...com>
Subject: Re: [PATCH 0/4] mm/mempolicy: Add memory hotplug support in weighted interleave
On Mon, 10 Mar 2025 10:13:58 -0400 Gregory Price <gourry@...rry.net> wrote:
Hi Gregory,
I have updated version 2 of the patch series, incorporating the feedback from
you and Joshua.
However, this version does not yet include updates to the commit messages
regarding the points you previously mentioned.
Your detailed explanations have been incredibly valuable in helping us analyze
the system, and I sincerely appreciate your insights.
> 2) We need to clearly define what the weight of a node will be when
> in manual mode and a node goes (memory -> no memory -> memory)
Additionally, I will soon provide an updated document addressing this and
other points you raised in your emails.
Thank you again for your guidance and support.
Rakie
> On Mon, Mar 10, 2025 at 06:03:59PM +0900, Rakie Kim wrote:
> > On Fri, 7 Mar 2025 16:55:40 -0500 Gregory Price <gourry@...rry.net> wrote:
> > > On Fri, Mar 07, 2025 at 10:56:04AM -0500, Gregory Price wrote:
> > > >
> > > > I think the underlying issue you're dealing with is that the system is
> > > > creating more nodes for you than it should.
> > > >
> > >
> > > Looking into this for other reasons, I think you are right that multiple
> > > numa nodes can exist that cover the same memory - just different
> > > regions.
> > >
> >
> > I understand your concerns, and I agree that the most critical issue at the
> > moment is that the system is generating more nodes than necessary.
> > We need to conduct a more thorough analysis of this problem, but a detailed
> > investigation will require a significant amount of time. In this context,
> > these patches might offer a quick solution to address the issue.
> >
>
> I dug into the expected CEDT / CFMWS behaviors and had some discussions
> with Dan and Jonathan - assuming your CEDT has multiple CFMWS to cover
> the same set of devices, this is the expected behavior.
>
> https://lore.kernel.org/linux-mm/Z226PG9t-Ih7fJDL@gourry-fedora-PF4VCD3F/T/#m2780e47df7f0962a79182502afc99843bb046205
>
> Basically your BIOS is likely creating one per device and likely one
> per host bridge (to allow intra-host-bridge interleave).
>
> This puts us in an awkward state, and I need some time to consider
> whether we should expose N_POSSIBLE nodes or N_MEMORY nodes.
>
> Probably it makes sense to expose N_MEMORY nodes and allow for hidden
> state, as the annoying corner condition of a DCD coming and going
> most likely means a user wouldn't be using weighted interleave anyway.
>
> So if you can confirm what you CEDT says compared to the notes above, I
> think we can move forward with this.
>
> ~Gregory
Powered by blists - more mailing lists