Message-ID: <d4c0e8c0-877e-8278-acb1-2fcd43ed9325@amd.com>
Date: Fri, 24 Feb 2023 08:58:36 +0530
From: Bharata B Rao <bharata@....com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, mgorman@...e.de,
peterz@...radead.org, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org,
akpm@...ux-foundation.org, luto@...nel.org, tglx@...utronix.de,
yue.li@...verge.com, Ravikumar.Bangoria@....com
Subject: Re: [RFC PATCH 0/5] Memory access profiler(IBS) driven NUMA balancing
On 15-Feb-23 11:37 AM, Huang, Ying wrote:
> Bharata B Rao <bharata@....com> writes:
>
>> On 13-Feb-23 12:00 PM, Huang, Ying wrote:
>>>> I have a microbenchmark where two sets of threads bound to two
>>>> NUMA nodes access the two different halves of memory which is
>>>> initially allocated on the 1st node.
>>>>
>>>> On a two node Zen4 system, with 64 threads in each set accessing
>>>> 8G of memory each from the initial allocation of 16G, I see that
>>>> IBS driven NUMA balancing (i.e., this patchset) takes 50% less time
>>>> to complete a fixed number of memory accesses. This could well
>>>> be the best case and real workloads/benchmarks may not get this much
>>>> uplift, but it does show the potential gain to be had.
>>>
>>> Can you find a way to show the overhead of the original implementation
>>> and your method, so that we can compare them? You think the
>>> improvement comes from the reduced overhead.
>>
>> Sure, will measure the overhead.
>>
>>>
>>> I am also interested in the page migration throughput per second during
>>> the test, because I suspect your method can migrate pages faster.
>>
>> I have some data on pages migrated over time for the benchmark I mentioned
>> above.
>>
>>
>> Pages migrated vs Time(s)
>> 2500000 +---------------------------------------------------------------+
>> | + + + + + + + |
>> | Default ******* |
>> | IBS ####### |
>> | |
>> | ****************************|
>> | * |
>> 2000000 |-+ * +-|
>> | * |
>> | ** |
>> P | * ## |
>> a | *### |
>> g | **# |
>> e 1500000 |-+ *## +-|
>> s | ## |
>> | # |
>> m | # |
>> i | *# |
>> g | *# |
>> r | ## |
>> a 1000000 |-+ # +-|
>> t | # |
>> e | #* |
>> d | #* |
>> | # * |
>> | # * |
>> 500000 |-+ # * +-|
>> | # * |
>> | # * |
>> | # * |
>> | ## * |
>> | # * |
>> | # + * + + + + + + |
>> 0 +---------------------------------------------------------------+
>> 0 20 40 60 80 100 120 140 160
>> Time (s)
>>
>> So acting upon the relevant accesses early enough seems to result in
>> pages migrating faster in the beginning.
>
> One way to prove this is to output the benchmark performance
> periodically, so we can see how the benchmark score changes over time.
Here is the data from a different run that captures the benchmark scores
periodically. The benchmark touches a fixed amount of memory a fixed number
of times, iteratively. I am capturing the iteration number for one of the
threads whose memory is completely remote at the start. A higher iteration
number at a given time means that thread is making progress more quickly,
which eventually reflects in the benchmark score.
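
The benchmark is structured roughly like the following simplified sketch
(not the exact program used for these numbers; NTHREADS, ITERATIONS and
CHUNK are scaled-down placeholders, not the 64-thread/16G configuration
above):

/*
 * Two sets of threads, one set bound to each of two NUMA nodes,
 * repeatedly touch the two halves of a buffer that is first-touched
 * entirely on node 0, so the node-1 set starts out fully remote.
 * One initially-remote thread logs its iteration number over time.
 * Build: gcc -O2 bench.c -o bench -lnuma -lpthread
 */
#include <numa.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define NTHREADS   8                /* threads per node (64 in the run above) */
#define ITERATIONS 500              /* fixed number of passes over the memory */
#define CHUNK      (64UL << 20)     /* per-thread chunk (8G/64 threads above) */

static char *buf;
static struct timespec start;

static double elapsed(void)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start.tv_sec) +
	       (now.tv_nsec - start.tv_nsec) / 1e9;
}

struct targ { int node, id; };

static void *worker(void *p)
{
	struct targ *t = p;
	/* node-0 threads get the first half, node-1 threads the second */
	char *base = buf + (t->node * NTHREADS + t->id) * CHUNK;
	int iter;

	numa_run_on_node(t->node);              /* bind thread to its node */
	for (iter = 1; iter <= ITERATIONS; iter++) {
		memset(base, iter, CHUNK);      /* touch every page */
		if (t->node == 1 && t->id == 0) /* one initially-remote thread */
			printf("%.1f s: iteration %d\n", elapsed(), iter);
	}
	return NULL;
}

int main(void)
{
	size_t size = 2UL * NTHREADS * CHUNK;
	pthread_t tids[2 * NTHREADS];
	struct targ args[2 * NTHREADS];
	int i;

	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/*
	 * First touch everything from node 0 under the default policy,
	 * so all pages start local to node 0 and NUMA balancing is free
	 * to migrate them later.
	 */
	numa_run_on_node(0);
	memset(buf, 0, size);

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < 2 * NTHREADS; i++) {
		args[i] = (struct targ){ .node = i / NTHREADS,
					 .id = i % NTHREADS };
		pthread_create(&tids[i], NULL, worker, &args[i]);
	}
	for (i = 0; i < 2 * NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}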
Access iterations vs Time
500 +-------------------------------------------------------------------+
| + + + + + + + + * |
| Default ******* |
450 |-+ # IBS #######-|
| # * |
| # * |
| # * |
400 |-+ # * +-|
| # * |
A | ****#********************************************* |
c 350 |-+ * # +-|
c | * # |
e | * # |
s 300 |-+ * # +-|
s | * # |
| * # |
i 250 |-+ * # +-|
t | * # |
e | * # |
r | * # |
a 200 |-+ * # +-|
t | *# |
i | * # |
o 150 |-+ *# +-|
n | *# |
s | *# |
100 |-+ *# +-|
| # |
| # |
| # |
50 |-# +-|
|# |
|# + + + + + + + + |
0 +-------------------------------------------------------------------+
0 20 40 60 80 100 120 140 160 180
Time (s)
The number of pages migrated over time for the above runs is shown in
the graph below:
Pages migrated vs Time
2500000 +---------------------------------------------------------------+
| + + + + + + + + + |
| Default ******* |
| IBS ####### |
| |
| ******** |
| * |
2000000 |-+ ** +-|
| *** |
| ** |
p | * |
a | ** |
g | ** |
e 1500000 |-+ * +-|
s | *** |
| ** |
m | ** |
i | * |
g | ** |
r | * |
a 1000000 |-+ * +-|
t | * |
e | * |
d | * |
| * |
| ##* |
500000 |-+ # * +-|
| ## * |
| ## * |
| ### * |
| # * |
| #### * |
| # + * + + + + + + + |
0 +---------------------------------------------------------------+
0 20 40 60 80 100 120 140 160 180 200
Time (s)
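
Such migration numbers can be collected by periodically sampling the
numa_pages_migrated counter in /proc/vmstat (exported when
CONFIG_NUMA_BALANCING is enabled), with a small sampler along these
lines (a sketch, not the exact tool behind the graphs above):

/*
 * Print the kernel's cumulative numa_pages_migrated counter from
 * /proc/vmstat every 5 seconds.
 */
#include <stdio.h>
#include <unistd.h>

static long read_numa_pages_migrated(void)
{
	char line[128];
	long val = -1;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "numa_pages_migrated %ld", &val) == 1)
			break;
	fclose(f);
	return val;
}

int main(void)
{
	int t;

	for (t = 0; t <= 200; t += 5) { /* sample for ~200s, like the graphs */
		printf("%d s: %ld pages migrated\n", t,
		       read_numa_pages_migrated());
		sleep(5);
	}
	return 0;
}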
The final benchmark scores for the above runs compare as follows:

            Default        IBS
Time (us)   174459192.0    54710778.0
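
(For reference, 54710778 / 174459192 ≈ 0.31, i.e. the IBS-driven run
completes the fixed number of accesses in about 31% of the default time,
a roughly 3.2x speedup on this microbenchmark.)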
Regards,
Bharata.