Message-ID: <b0dff0d9-c18c-4465-90a5-54a8c28fe40c@amd.com>
Date:   Tue, 2 Aug 2022 10:10:10 +0530
From:   K Prateek Nayak <kprateek.nayak@....com>
To:     Libo Chen <libo.chen@...cle.com>, peterz@...radead.org,
        vincent.guittot@...aro.org, mgorman@...e.de,
        tim.c.chen@...ux.intel.com, 21cnbao@...il.com,
        dietmar.eggemann@....com
Cc:     linux-kernel@...r.kernel.org, tglx@...utronix.de
Subject: Re: [PATCH] sched/fair: no sync wakeup from interrupt context

Hello Libo,

Thank you for looking into this.

On 8/1/2022 8:27 PM, Libo Chen wrote:
> 
> 
> On 7/28/22 21:47, K Prateek Nayak wrote:
>> Hello Libo and Peter,
>>
>> tl;dr
>>
>> - We observed a major regression with tbench when testing the latest tip
>>    sched/core at:
>>    commit 14b3f2d9ee8d "sched/fair: Disallow sync wakeup from interrupt context"
>>    The reason for the regression is the smaller number of affine wakeups, which
>>    leaves the client farther away from the data it needs to consume next,
>>    primed in the waker's LLC.
>>    Such regressions can be expected from tasks that use sockets to communicate
>>    a significant amount of data, especially on systems with multiple LLCs.
>>
>> - Other benchmarks behave comparably to the tip at the previous commit:
>>    commit 91caa5ae2424 "sched/core: Fix the bug that task won't enqueue
>>    into core tree when update cookie"
>>
>> I'll leave more details below.
>>
>> On 7/12/2022 4:17 AM, Libo Chen wrote:
>>> [..snip..]
>>
>> The two test kernels used are:
>>
>> - tip at commit: 14b3f2d9ee8d "sched/fair: Disallow sync wakeup from interrupt context"
>> - tip at commit: 91caa5ae2424 "sched/core: Fix the bug that task won't enqueue into core tree when update cookie"
>>
>> Below are the tbench results on a dual-socket Zen3 machine
>> running in NPS1 mode. Following is the NUMA configuration in
>> NPS1 mode:
>>
>> - NPS1: Each socket is a NUMA node.
>>    Total 2 NUMA nodes in the dual socket machine.
>>
>>    Node 0: 0-63,   128-191
>>    Node 1: 64-127, 192-255
>>
>> Clients: tip (91caa5ae2424)      tip (14b3f2d9ee8d)
>>      1    569.24 (0.00 pct)       283.63 (-50.17 pct)    *
>>      2    1104.76 (0.00 pct)      590.45 (-46.55 pct)    *
>>      4    2058.78 (0.00 pct)      1080.63 (-47.51 pct)   *
>>      8    3590.20 (0.00 pct)      2098.05 (-41.56 pct)   *
>>     16    6119.21 (0.00 pct)      4348.40 (-28.93 pct)   *
>>     32    11383.91 (0.00 pct)     8417.55 (-26.05 pct)   *
>>     64    21910.01 (0.00 pct)     19405.11 (-11.43 pct)  *
>>    128    33105.27 (0.00 pct)     29791.80 (-10.00 pct)  *
>>    256    45550.99 (0.00 pct)     45847.10 (0.65 pct)
>>    512    57753.81 (0.00 pct)     49481.17 (-14.32 pct)  *
>>   1024    55684.33 (0.00 pct)     48748.38 (-12.45 pct)  *
>>
>> This regression is consistently reproducible.
> I ran tbench with 1 client on my 8-node Zen2 machine because 1~4 client counts generally shouldn't be affected by this patch. I do see throughput regress with the patch, but
> the latency improves pretty much equally. Furthermore, I also don't see tbench tasks being separated into different LLC domains in my ftrace; they are almost always in the same CCXes.
> What I do see is a lot fewer interrupts and context switches, and the average CPU frequency is lower too with the patch. It is bizarre that Intel doesn't seem to be impacted.
> Trying to understand why right now.

Thank you for analyzing this. I see a drop in max latency with the patch, but the
average latency has increased on the patched kernel. Following are the logs from
one of the runs for the 1 client case:

- tip (91caa5ae2424)

 Operation                Count    AvgLat    MaxLat
 --------------------------------------------------
 Deltree                     28     0.000     0.001
 Flush                    76361     0.008     0.018
 Close                   800338     0.008     0.080
 LockX                     3546     0.008     0.015
 Mkdir                       14     0.008     0.009
 Rename                   46131     0.008     0.050
 ReadX                  1707761     0.009     0.139
 WriteX                  543274     0.012     0.092
 Unlink                  220019     0.008     0.083
 UnlockX                   3546     0.008     0.016
 FIND_FIRST              381795     0.008     0.079
 SET_FILE_INFORMATION     88740     0.008     0.080
 QUERY_FILE_INFORMATION  173062     0.008     0.061
 QUERY_PATH_INFORMATION  987524     0.008     0.070
 QUERY_FS_INFORMATION    181068     0.008     0.049
 NTCreateX              1089543     0.008     0.083

Throughput 570.36 MB/sec  1 clients  1 procs  max_latency=0.140 ms

- tip (14b3f2d9ee8d)

 Operation                Count    AvgLat    MaxLat
 --------------------------------------------------
 Deltree                     14     0.000     0.001
 Flush                    38993     0.017     0.059
 Close                   408696     0.017     0.085
 LockX                     1810     0.017     0.023
 Mkdir                        7     0.016     0.017
 Rename                   23555     0.017     0.052
 ReadX                   871996     0.018     0.097
 WriteX                  277343     0.025     0.105
 Unlink                  112357     0.017     0.055
 UnlockX                   1810     0.017     0.023
 FIND_FIRST              194961     0.017     0.089
 SET_FILE_INFORMATION     45312     0.017     0.032
 QUERY_FILE_INFORMATION   88356     0.017     0.078
 QUERY_PATH_INFORMATION  504289     0.017     0.119
 QUERY_FS_INFORMATION     92460     0.017     0.085
 NTCreateX               556374     0.017     0.097

Throughput 291.163 MB/sec  1 clients  1 procs  max_latency=0.119 ms

I had only analyzed the schedstat data which showed a clear shift
in the number of affine wakeups. I haven't measured the average CPU
frequency during the runs. The numbers reported are with the performance
governor. I'll try to get more data on the CPU frequency front.
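
For the frequency data, my first pass will probably just be to periodically
sample cpufreq's scaling_cur_freq on the CPUs running the tbench pair
(turbostat would give more accurate busy-frequency numbers via APERF/MPERF).
Below is a minimal sketch of such a sampler; it is a hypothetical helper, not
anything from the tree, and assumes the cpufreq sysfs interface is available:

/*
 * freq_sample.c - hypothetical helper (not in the tree) that samples a
 * CPU's cpufreq scaling_cur_freq once per second and reports the average.
 * Build: gcc -O2 -o freq_sample freq_sample.c
 * Usage: ./freq_sample <cpu> <seconds>
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;
	int secs = argc > 2 ? atoi(argv[2]) : 10;
	long long sum_khz = 0;
	int samples = 0;
	char path[128];

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);

	for (int i = 0; i < secs; i++) {
		FILE *fp = fopen(path, "r");
		long khz;

		if (!fp) {
			perror(path);
			return 1;
		}
		/* scaling_cur_freq is reported in kHz */
		if (fscanf(fp, "%ld", &khz) == 1) {
			sum_khz += khz;
			samples++;
		}
		fclose(fp);
		sleep(1);
	}
	if (samples)
		printf("cpu%d: avg of %d samples = %lld MHz\n",
		       cpu, samples, sum_khz / samples / 1000);
	return 0;
}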

>> Below are the statistics gathered from schedstat data:
>>
>> Kernel                                                     :        tip + remove 14b3f2d9ee8d                    tip
>> HEAD commit                                                :             91caa5ae2424                       14b3f2d9ee8d
>> sched_yield cnt                                            :                   11                                 17
>> Legacy counter can be ignored                              :                    0                                  0
>> schedule called                                            :             12621212                           15104022
>> schedule left the processor idle                           :              6306653 ( 49.96% of times )        7547375       ( 49.96% of times )
>> try_to_wake_up was called                                  :              6310778                            7552147
>> try_to_wake_up was called to wake up the local cpu         :                12305 ( 0.19% of times )           12816       ( 0.16% of times )
>> total time by tasks on this processor (in jiffies)         :          78497712520                        72461915902
>> total time waiting tasks on this processor (in jiffies)    :             56398803 ( 0.07% of times )        34631178       ( 0.04% of times )
>> total timeslices run on this cpu                           :              6314548                            7556630
>>
>> Wakeups on same                                    SMT     :                   39 ( 0.00062 )                    263 ( 0.00348 )
>> Wakeups on same                                    MC      :              6297015 ( 99.78% of time ) <---       1079 ( 0.01429 )
>> Wakeups on same                                    DIE     :                  559 ( 0.00886 )                7537909 ( 99.81147 ) <--- With the patch, the task will prefer
> I don't have a Zen3 right now. What is the span of your MC domain, as well as DIE?

On Zen3, a group in the MC domain consists of the 16 CPUs on the same CCD.
On a dual-socket Zen3 system (2 x 64C/128T) running in NPS1 mode,
the DIE domain consists of all the CPUs on the same socket. There are two
DIE groups in the dual-socket test system. Following are the spans of each:

- DIE0: 0-63,128-191

    DIE 0 MC 0: 0-7,128-135
    DIE 0 MC 1: 8-15,136-143
    DIE 0 MC 2: 16-23,144-151
    DIE 0 MC 3: 24-31,152-159
    DIE 0 MC 4: 32-39,160-167
    DIE 0 MC 5: 40-47,168-175
    DIE 0 MC 6: 48-55,176-183
    DIE 0 MC 7: 56-63,184-191

- DIE1: 64-127,192-255

    DIE 1 MC 0: 64-71,192-199
    DIE 1 MC 1: 72-79,200-207
    DIE 1 MC 2: 80-87,208-215
    DIE 1 MC 3: 88-95,216-223
    DIE 1 MC 4: 96-103,224-231
    DIE 1 MC 5: 104-111,232-239
    DIE 1 MC 6: 112-119,240-247
    DIE 1 MC 7: 120-127,248-255
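
In case it helps to cross-check the spans on your Zen2 box, here is a small
standalone sketch (a hypothetical helper, nothing from the tree) that dumps
the cpumask span of every sched domain as reported by /proc/schedstat. The
domain names (MC/DIE/...) are not part of schedstat itself; with SCHED_DEBUG
they can be read from /sys/kernel/debug/sched/domains/cpu*/domain*/name.

/*
 * domain_spans.c - hypothetical helper (not in the tree) that prints the
 * cpumask span of every sched domain from the "domain<N> <cpumask> ..."
 * lines in /proc/schedstat.
 * Build: gcc -O2 -o domain_spans domain_spans.c
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *fp = fopen("/proc/schedstat", "r");
	char line[4096], cpu[32] = "?";

	if (!fp) {
		perror("/proc/schedstat");
		return 1;
	}
	while (fgets(line, sizeof(line), fp)) {
		if (!strncmp(line, "cpu", 3)) {
			/* remember which CPU the following domain lines belong to */
			sscanf(line, "%31s", cpu);
		} else if (!strncmp(line, "domain", 6)) {
			char dom[32], mask[256];

			/* the second field of a domain line is its cpumask */
			if (sscanf(line, "%31s %255s", dom, mask) == 2)
				printf("%s %s span=%s\n", cpu, dom, mask);
		}
	}
	fclose(fp);
	return 0;
}
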
> 
> Thanks for the testing.
> 
> Libo
>> Wakeups on same                                    NUMA    :                  860 ( 0.01363 )                     80 ( 0.00106 )       to wake on the same LLC where it previously
>> Affine wakeups on same                             SMT     :                   25 ( 0.00040 )                    255 ( 0.00338 )       ran as compared to the LLC of waker.
>> Affine wakeups on same                             MC      :              6282684 ( 99.55% of time ) <---        961 ( 0.01272 )       This results in performance degradation as
>> Affine wakeups on same                             DIE     :                  523 ( 0.00829 )                7537220 ( 99.80235 ) <--- the task is farther away from data it will
>> Affine wakeups on same                             NUMA    :                  839 ( 0.01329 )                     46 ( 0.00061 )       consume next.
>>
>> All the statistics are comparable except for the reduced number of affine
>> wakeups on the waker's LLC, which results in the task being placed on its
>> previous LLC, farther away from the data that resides in the waker's LLC
>> and that the wakee will consume next.
>>
>> All wakeups in tbench happen in in_serving_softirq(), making in_task() false
>> for all the cases where sync would have been true otherwise.
>>
>> We wanted to highlight that there are workloads which would still benefit from
>> affine wakeups even when the wakeup happens in an interrupt context. It would be
>> great if we could spot such cases and bias wakeups towards the waker's LLC.
>>
>> Other benchmark results are comparable to the tip in most cases.
>> All benchmarks were run on machine configured in NPS1 mode.
>> Following are the results:
>>
>> ~~~~~~~~~
>> hackbench
>> ~~~~~~~~~
>>
>> Test:             tip (91caa5ae2424)      tip (14b3f2d9ee8d)
>>   1-groups:         4.22 (0.00 pct)         4.48 (-6.16 pct)     *
>>   1-groups:         4.22 (0.00 pct)         4.30 (-1.89 pct)     [Verification run]
>>   2-groups:         5.01 (0.00 pct)         4.87 (2.79 pct)
>>   4-groups:         5.49 (0.00 pct)         5.34 (2.73 pct)
>>   8-groups:         5.64 (0.00 pct)         5.50 (2.48 pct)
>> 16-groups:         7.54 (0.00 pct)         7.34 (2.65 pct)
>>
>> ~~~~~~~~
>> schbench
>> ~~~~~~~~
>>
>> #workers: tip (91caa5ae2424)     tip (14b3f2d9ee8d)
>>    1:      22.00 (0.00 pct)        22.00 (0.00 pct)
>>    2:      22.00 (0.00 pct)        27.00 (-22.72 pct)    * Known to have run to run
>>    4:      33.00 (0.00 pct)        38.00 (-15.15 pct)    * variations.
>>    8:      48.00 (0.00 pct)        51.00 (-6.25 pct)     *
>>   16:      70.00 (0.00 pct)        70.00 (0.00 pct)
>>   32:     118.00 (0.00 pct)       120.00 (-1.69 pct)
>>   64:     217.00 (0.00 pct)       223.00 (-2.76 pct)
>> 128:     485.00 (0.00 pct)       488.00 (-0.61 pct)
>> 256:     1066.00 (0.00 pct)      1054.00 (1.12 pct)
>> 512:     47296.00 (0.00 pct)     47168.00 (0.27 pct)
>>
>> Note: schbench results at lower worker counts have a large
>> run-to-run variance and depend on certain characteristics
>> of new-idle balance.
>>
>> ~~~~~~
>> stream
>> ~~~~~~
>>
>> - 10 runs
>>
>> Test:     tip (91caa5ae2424)      tip (14b3f2d9ee8d)
>>   Copy:   336140.45 (0.00 pct)    334362.29 (-0.52 pct)
>> Scale:   214679.13 (0.00 pct)    218016.44 (1.55 pct)
>>    Add:   251691.67 (0.00 pct)    249734.04 (-0.77 pct)
>> Triad:   262174.93 (0.00 pct)    260063.57 (-0.80 pct)
>>
>> - 100 runs
>>
>> Test:     tip (91caa5ae2424)      tip (14b3f2d9ee8d)
>>   Copy:   336576.38 (0.00 pct)    334646.27 (-0.57 pct)
>> Scale:   219124.86 (0.00 pct)    223480.29 (1.98 pct)
>>    Add:   251796.93 (0.00 pct)    250845.95 (-0.37 pct)
>> Triad:   262286.47 (0.00 pct)    258020.57 (-1.62 pct)
>>
>> ~~~~~~~~~~~~
>> ycsb-mongodb
>> ~~~~~~~~~~~~
>>
>> tip (91caa5ae2424):   290479.00 (var: 1.53)
>> tip (14b3f2d9ee8d):   287361.67 (var: 0.80) (-1.07 pct)
>>
>>> [..snip..]
> 
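
To spell out the mechanism we are referring to above: with the patch, the
sync hint is only honoured when the waker runs in task context, and since
tbench's wakeups arrive from softirq context, the hint is dropped. A rough
sketch of that condition (paraphrased rather than the exact hunk from your
patch, so please treat it purely as an illustration):

	/* Illustrative sketch only -- not the exact diff from the patch. */
	int sync = in_task() && (wake_flags & WF_SYNC) &&
		   !(current->flags & PF_EXITING);

	/*
	 * in_task() is false in hardirq/softirq context, so a wakeup raised
	 * from in_serving_softirq() (tbench's case) loses the WF_SYNC hint
	 * and the wakee is placed relative to its previous CPU instead of
	 * being pulled towards the waker's LLC.
	 */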

Thank you again for looking into this issue and for sharing the
observations on the Zen2 system.
--
Thanks and Regards,
Prateek
