Message-ID: <713b6924-a653-453e-8fde-8c966638386b@redhat.com>
Date: Thu, 13 Mar 2025 23:36:47 +0100
From: David Hildenbrand <david@...hat.com>
To: Gregory Price <gourry@...rry.net>, Rakie Kim <rakie.kim@...com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-cxl@...r.kernel.org,
joshua.hahnjy@...il.com, dan.j.williams@...el.com,
ying.huang@...ux.alibaba.com, kernel_team@...ynix.com, honggyu.kim@...com,
yunjeong.mun@...com
Subject: Re: [PATCH v2 2/4] mm/mempolicy: Support memory hotplug in weighted interleave

On 13.03.25 17:23, Gregory Price wrote:
> On Thu, Mar 13, 2025 at 03:33:37PM +0900, Rakie Kim wrote:
>>> I'm fairly certain this logic is wrong. If I add two memory blocks and
>>> then remove one, would this logic not remove the sysfs entries despite
>>> there being a block remaining?
>>
>> Regarding the assumption about node configuration:
>> Are you assuming that a node has two memory blocks and that
>> MEM_OFFLINE is triggered when one of them is offlined? If so, then
>> you are correct that this logic would need modification.
>>
>> I performed a simple test by offlining a single memory block:
>> # echo 0 > /sys/devices/system/node/node2/memory100/online
>>
>> In this case, MEM_OFFLINE was not triggered. However, I need to
>> conduct further analysis to confirm this behavior under different
>> conditions. I will review this in more detail and share my
>> findings, including the test methodology and results.
>>
>
> +David - might have a quick answer to this. I would have expected a
> single memory block going offline to cause a notification.
Yes. Unless offlining failed, or the block was already offline :)

If it doesn't happen for a memory block that was actually online and is
offline after the call, we would have a bug.
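
For reference, a minimal sketch of how a memory hotplug notifier is
typically wired up; the callback and notifier_block names and the
pr_info() handling are illustrative, not the code from the patch under
review. MEM_OFFLINE is delivered when a range of pages (typically one
memory block, as in the sysfs test above) has been successfully
offlined, and status_change_nid only carries a valid node id when the
node just lost its last online memory:

#include <linux/memory.h>       /* register_memory_notifier(), MEM_* */
#include <linux/notifier.h>     /* struct notifier_block, NOTIFY_OK */
#include <linux/numa.h>         /* NUMA_NO_NODE */
#include <linux/printk.h>

/* Hypothetical callback: invoked on each memory online/offline event. */
static int wi_memory_callback(struct notifier_block *nb,
                              unsigned long action, void *arg)
{
        struct memory_notify *mn = arg;

        switch (action) {
        case MEM_OFFLINE:
                /*
                 * Fires after pages were successfully offlined.
                 * status_change_nid is a valid node id only when the
                 * node just lost its last memory; otherwise it is
                 * NUMA_NO_NODE, so per-node sysfs entries must not be
                 * torn down while the node still has online blocks.
                 */
                if (mn->status_change_nid != NUMA_NO_NODE)
                        pr_info("node %d has no memory left\n",
                                mn->status_change_nid);
                break;
        case MEM_ONLINE:
                if (mn->status_change_nid != NUMA_NO_NODE)
                        pr_info("node %d gained its first memory\n",
                                mn->status_change_nid);
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block wi_memory_nb = {
        .notifier_call = wi_memory_callback,
};

/* During init: register_memory_notifier(&wi_memory_nb); */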
--
Cheers,
David / dhildenb