Message-ID: <1547d291-1512-faae-aba5-0f84c3502be4@amd.com>
Date:   Tue, 14 Feb 2023 10:25:16 +0530
From:   Bharata B Rao <bharata@....com>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org, mgorman@...e.de,
        peterz@...radead.org, mingo@...hat.com, bp@...en8.de,
        dave.hansen@...ux.intel.com, x86@...nel.org,
        akpm@...ux-foundation.org, luto@...nel.org, tglx@...utronix.de,
        yue.li@...verge.com, Ravikumar.Bangoria@....com
Subject: Re: [RFC PATCH 0/5] Memory access profiler(IBS) driven NUMA balancing

On 13-Feb-23 12:00 PM, Huang, Ying wrote:
>> I have a microbenchmark in which two sets of threads, bound to two
>> NUMA nodes, access the two different halves of a memory region that
>> is initially allocated on the first node.
>>
>> On a two-node Zen4 system, with 64 threads in each set accessing
>> 8G of memory each from the initial allocation of 16G, I see that
>> IBS-driven NUMA balancing (i.e., this patchset) takes 50% less time
>> to complete a fixed number of memory accesses. This could well
>> be the best case, and real workloads/benchmarks may not see this much
>> uplift, but it does show the potential gain to be had.
> 
> Can you find a way to show the overhead of the original implementation
> and of your method, so that we can compare the two? I ask because you
> attribute the improvement to the reduced overhead.

Sure, will measure the overhead.
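
For context, the microbenchmark described above is essentially the
following. This is only a minimal sketch assuming libnuma and pthreads;
the pass count and all names are illustrative, not the actual test code:

/*
 * Minimal sketch (not the actual test code): two sets of threads bound
 * to node 0 and node 1 each access their own half of a 16G region that
 * is first faulted in on node 0.
 */
#include <numa.h>
#include <pthread.h>
#include <stdlib.h>

#define THREADS_PER_NODE 64
#define HALF_SIZE        (8UL << 30)    /* 8G per half, 16G in total */

static char *mem;

static void *worker(void *arg)
{
        long node = (long)arg;
        char *half = mem + node * HALF_SIZE;

        numa_run_on_node(node);         /* bind this thread to its node */

        /* Fixed number of passes over this set's half of the region */
        for (int pass = 0; pass < 100; pass++)
                for (size_t i = 0; i < HALF_SIZE; i += 4096)
                        half[i]++;

        return NULL;
}

int main(void)
{
        pthread_t tids[2 * THREADS_PER_NODE];

        if (numa_available() < 0)
                return 1;

        /* First-touch the whole 16G while bound to node 0 so the initial
         * allocation lands on the first node. */
        numa_run_on_node(0);
        mem = malloc(2 * HALF_SIZE);
        for (size_t i = 0; i < 2 * HALF_SIZE; i += 4096)
                mem[i] = 0;

        for (long t = 0; t < 2 * THREADS_PER_NODE; t++)
                pthread_create(&tids[t], NULL, worker,
                               (void *)(t / THREADS_PER_NODE));
        for (long t = 0; t < 2 * THREADS_PER_NODE; t++)
                pthread_join(tids[t], NULL);

        free(mem);
        return 0;
}

Built with something like gcc -O2 bench.c -lnuma -lpthread, the time to
complete the fixed number of passes corresponds to the "fixed number of
memory accesses" metric mentioned above.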

> 
> I am also interested in the page migration throughput (pages per second)
> during the test, because I suspect your method can migrate pages faster.

I have some data on pages migrated over time for the benchmark I mentioned
above.

                                                                                
                                Pages migrated vs Time(s)                       
    2500000 +---------------------------------------------------------------+   
            |       +       +       +       +       +       +       +       |   
            |                                               Default ******* |   
            |                                                   IBS ####### |   
            |                                                               |   
            |                                   ****************************|   
            |                                  *                            |   
    2000000 |-+                               *                           +-|   
            |                                *                              |   
            |                              **                               |   
 P          |                             *  ##                             |   
 a          |                            *###                               |   
 g          |                          **#                                  |   
 e  1500000 |-+                       *##                                 +-|   
 s          |                        ##                                     |   
            |                       #                                       |   
 m          |                      #                                        |   
 i          |                    *#                                         |   
 g          |                   *#                                          |   
 r          |                  ##                                           |   
 a  1000000 |-+               #                                           +-|   
 t          |                #                                              |   
 e          |               #*                                              |   
 d          |              #*                                               |   
            |             # *                                               |   
            |            # *                                                |   
     500000 |-+         #  *                                              +-|   
            |          #  *                                                 |   
            |         #   *                                                 |   
            |        #   *                                                  |   
            |      ##    *                                                  |   
            |     #     *                                                   |   
            |    #  +  *    +       +       +       +       +       +       |   
          0 +---------------------------------------------------------------+   
            0       20      40      60      80     100     120     140     160  
                                        Time (s)                                

So acting upon the relevant accesses early enough seems to result in
pages migrating faster in the beginning.

Here is the actual data in case the above ASCII graph gets jumbled up:

numa_pages_migrated vs time in seconds
======================================

Time	Default		IBS
---------------------------
5	2639		511
10	2639		17724
15	2699		134632
20	2699		253485
25	2699		386296
30	159805		524651
35	450678		667622
40	741762		811603
45	971848		950691
50	1108475		1084537
55	1246229		1215265
60	1385920		1336521
65	1508354		1446950
70	1624068		1544890
75	1739311		1629162
80	1854639		1700068
85	1979906		1759025
90	2099857		<end>
95	2099857
100	2099857
105	2099859
110	2099859
115	2099859
120	2099859
125	2099859
130	2099859
135	2099859
140	2099859
145	2099859
150	2099859
155	2099859
160	2099859
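
In case it helps with reproducing the measurement: numbers like the ones
above can be gathered by sampling the numa_pages_migrated counter from
/proc/vmstat at 5-second intervals while the benchmark runs. A minimal
sketch of such a sampler (illustrative, not the actual script used for
this data):

/*
 * Minimal sketch (not the actual script): print the cumulative
 * numa_pages_migrated count from /proc/vmstat every 5 seconds.
 */
#include <stdio.h>
#include <unistd.h>

static unsigned long read_numa_pages_migrated(void)
{
        unsigned long val = 0;
        char line[128];
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f)
                return 0;
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "numa_pages_migrated %lu", &val) == 1)
                        break;
        fclose(f);
        return val;
}

int main(void)
{
        printf("Time\tnuma_pages_migrated\n");
        for (int t = 0; t <= 160; t += 5) {
                printf("%d\t%lu\n", t, read_numa_pages_migrated());
                fflush(stdout);
                sleep(5);
        }
        return 0;
}

Taking the difference between successive samples gives the per-interval
migration throughput that was asked about above.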

Regards,
Bharata.
