Message-ID: <Z87zpg3TLRReikgu@gourry-fedora-PF4VCD3F>
Date: Mon, 10 Mar 2025 10:13:58 -0400
From: Gregory Price <gourry@...rry.net>
To: Rakie Kim <rakie.kim@...com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linux-cxl@...r.kernel.org,
	joshua.hahnjy@...il.com, dan.j.williams@...el.com,
	ying.huang@...ux.alibaba.com, kernel_team@...ynix.com,
	honggyu.kim@...com, yunjeong.mun@...com
Subject: Re: [PATCH 0/4] mm/mempolicy: Add memory hotplug support in weighted
 interleave

On Mon, Mar 10, 2025 at 06:03:59PM +0900, Rakie Kim wrote:
> On Fri, 7 Mar 2025 16:55:40 -0500 Gregory Price <gourry@...rry.net> wrote:
> > On Fri, Mar 07, 2025 at 10:56:04AM -0500, Gregory Price wrote:
> > > 
> > > I think the underlying issue you're dealing with is that the system is
> > > creating more nodes for you than it should.
> > > 
> > 
> > Looking into this for other reasons, I think you are right that multiple
> > NUMA nodes can exist that cover the same memory devices - just different
> > regions.
> > 
> 
> I understand your concerns, and I agree that the most critical issue at the
> moment is that the system is generating more nodes than necessary.
> We need to conduct a more thorough analysis of this problem, but a detailed
> investigation will require a significant amount of time. In this context,
> these patches might offer a quick solution to address the issue.
> 

I dug into the expected CEDT / CFMWS behaviors and had some discussions
with Dan and Jonathan - assuming your CEDT has multiple CFMWS entries
covering the same set of devices, this is the expected behavior.

https://lore.kernel.org/linux-mm/Z226PG9t-Ih7fJDL@gourry-fedora-PF4VCD3F/T/#m2780e47df7f0962a79182502afc99843bb046205

Basically your BIOS is likely creating one CFMWS per device and one per
host bridge (to allow intra-host-bridge interleave).
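
If it helps, one quick way to confirm is to pull the raw CEDT out of sysfs
and disassemble it (this assumes acpica-tools is installed; the exact field
labels in the .dsl output vary with the iasl version):

  # copy the raw table (needs root) and disassemble it
  cp /sys/firmware/acpi/tables/CEDT cedt.dat
  iasl -d cedt.dat      # writes cedt.dsl
  less cedt.dsl         # count the CFMWS entries and note the host bridge
                        # targets, base HPA, and window size of each one

If that shows one CFMWS per device plus one per host bridge, it matches the
layout described above.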

This puts us in an awkward state, and I need some time to consider
whether we should expose N_POSSIBLE nodes or N_MEMORY nodes.

It probably makes sense to expose N_MEMORY nodes and allow for hidden
state, as the annoying corner case of a DCD (Dynamic Capacity Device)
coming and going most likely means a user wouldn't be using weighted
interleave anyway.
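
For reference, the distinction is visible from userspace through the
standard node sysfs files (the weighted_interleave directory below only
exists on kernels that carry the sysfs interface):

  cat /sys/devices/system/node/possible     # N_POSSIBLE
  cat /sys/devices/system/node/has_memory   # N_MEMORY
  ls /sys/kernel/mm/mempolicy/weighted_interleave/   # per-node weight files
                                                     # currently exposed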

So if you can confirm what your CEDT says compared to the notes above, I
think we can move forward with this.

~Gregory
