Message-ID: <4e8067e1-0574-c9d2-9d6c-d676d32071bd@linux.vnet.ibm.com>
Date: Fri, 25 Feb 2022 05:07:15 +0530
From: Abhishek Goel <huntbag@...ux.vnet.ibm.com>
To: Dave Hansen <dave.hansen@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Zi Yan <ziy@...dia.com>, David Hildenbrand <david@...hat.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Huang Ying <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on
hotplug events
On 24/02/22 05:35, Dave Hansen wrote:
> On 2/23/22 15:02, Abhishek Goel wrote:
>> If needed, I will provide experiment results and traces that were used
>> to conclude this.
> It would be great if you can provide some more info. Even just a CPU
> time profile would be helpful.
Average total time taken for SMT=8 to SMT=1 in v5.14: 20s
Average total time taken for SMT=8 to SMT=1 in v5.15: 36s
(Observed on a system with 150+ CPUs.)
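For reference, below is a minimal sketch of how such a transition can be
timed. It assumes the ppc64_cpu tool from powerpc-utils; this is an
illustration of the measurement, not the exact script used for the
numbers above.

#!/usr/bin/env python3
# Sketch: time an SMT=8 -> SMT=1 mode change on a Power system.
# Assumes ppc64_cpu (powerpc-utils) is installed; the actual scripts
# behind the numbers above are not included in this mail.
import subprocess
import time

def time_smt_change(target_smt):
    # Return wall-clock seconds taken to switch to the given SMT mode.
    start = time.monotonic()
    subprocess.run(["ppc64_cpu", f"--smt={target_smt}"], check=True)
    return time.monotonic() - start

if __name__ == "__main__":
    subprocess.run(["ppc64_cpu", "--smt=8"], check=True)  # start from SMT=8
    print(f"SMT=8 -> SMT=1 took {time_smt_change(1):.1f}s")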
>
> It would also be great to understand more about what "hotplug on power
> systems" actually means. Is this a synthetic benchmark, or are actual
> end-users running into this issue? Are entire nodes of CPUs going
> offline? Or is this just doing an offline/online of CPU 22 in a 100-CPU
> NUMA node?
No, this is not a synthetic benchmark. The issue can be recreated by
taking entire nodes of CPUs offline, with the online/offline operations
performed by simple scripts. The times observed can also be verified
(for an individual CPU or for the whole system) from the CPU-hotplug
trace events, which give results consistent with those measured by the
scripts.
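A rough sketch of what such a script can look like (the sysfs paths are
standard; the node number is an example, and this is an illustration
rather than the exact script used for the measurements above):

#!/usr/bin/env python3
# Sketch: offline every CPU belonging to one NUMA node via sysfs,
# roughly what a "simple script" doing node-wide hotplug would do.
# Not the actual script used for the numbers quoted in this thread.
import sys

def cpus_of_node(node):
    # Parse the node's cpulist, e.g. "0-7,16-23" -> [0..7, 16..23].
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpulist = f.read().strip()
    cpus = []
    for chunk in cpulist.split(","):
        if "-" in chunk:
            lo, hi = map(int, chunk.split("-"))
            cpus.extend(range(lo, hi + 1))
        elif chunk:
            cpus.append(int(chunk))
    return cpus

def set_online(cpu, online):
    # Write 0/1 to the CPU's online file (note: CPU 0 may not be
    # hotpluggable on all systems).
    with open(f"/sys/devices/system/cpu/cpu{cpu}/online", "w") as f:
        f.write("1" if online else "0")

if __name__ == "__main__":
    node = int(sys.argv[1]) if len(sys.argv) > 1 else 1
    for cpu in cpus_of_node(node):
        set_online(cpu, False)  # offline the whole node, one CPU at a time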