Message-ID: <CAKfTPtASHg-KP1aN6C5hg-RjUXjXoorwvHQonrt7eggMkXow4w@mail.gmail.com>
Date:   Mon, 14 Nov 2022 11:40:47 +0100
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     shrikanth suresh hegde <sshegde@...ux.vnet.ibm.com>
Cc:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
        linux-kernel@...r.kernel.org, parth@...ux.ibm.com,
        qais.yousef@....com, chris.hyser@...cle.com,
        patrick.bellasi@...bug.net, David.Laight@...lab.com,
        pjt@...gle.com, pavel@....cz, tj@...nel.org, qperret@...gle.com,
        tim.c.chen@...ux.intel.com, joshdon@...gle.com, timj@....org,
        kprateek.nayak@....com, yu.c.chen@...el.com,
        youssefesmat@...omium.org, joel@...lfernandes.org
Subject: Re: [PATCH v7 0/9] Add latency priority for CFS class

On Sun, 13 Nov 2022 at 09:51, shrikanth suresh hegde
<sshegde@...ux.vnet.ibm.com> wrote:
>
>
> > This patchset restarts the work about adding a latency priority to describe
> > the latency tolerance of cfs tasks.
>
> Hi Vincent.
>
> Tested the patches on a Power10 machine. It is an 80-core system with SMT=8,
> i.e. a total of 640 CPUs. On a large workload which mainly interacts with a
> database, there is a minor improvement of 3-5%.
>
> The method followed: create a cgroup, assign it a latency nice value of -20,
> -10, or 0, and add the workload tasks to the cgroup's procs. Outside the
> cgroup, a stress-ng load runs with no latency value set: stress-ng --cpu=768 -l 50
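The setup described above can be sketched roughly as follows (assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and the per-cgroup latency-nice interface file added by this series; the cgroup name, the interface file name `cpu.latency.nice`, and `$WORKLOAD_PID` are illustrative, not taken from the test report):

```shell
# Create a cgroup for the latency-sensitive database workload
mkdir /sys/fs/cgroup/db-workload

# Assign the latency nice value under test (-20, -10, or 0);
# cpu.latency.nice is assumed to be the interface file from this patchset
echo -20 > /sys/fs/cgroup/db-workload/cpu.latency.nice

# Move the workload tasks into the cgroup
echo "$WORKLOAD_PID" > /sys/fs/cgroup/db-workload/cgroup.procs

# Background load runs outside the cgroup, with the default latency nice
stress-ng --cpu=768 -l 50
```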
>
> With microbenchmarks (hackbench) the values are more or less the same; for a
> large process pool of 60 there is a 10% improvement. schbench tail latencies
> show significant improvement at low and medium load, up to 256 groups; only
> the 512-group case shows a slight decline.
>
> Hackbench (Iterations or N=50)
> Process             6.1_Base        6.1_Latency_Nice
> 10                      0.13            0.14
> 20                      0.18            0.18
> 30                      0.24            0.25
> 40                      0.34            0.33
> 50                      0.40            0.41
> 60                      0.53            0.49
>
> schbench (Iterations or N=5)
>
> Groups: 1
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                 10.8             9.8
> 75.0th:                 12.4            11.4
> 90.0th:                 14.2            13.2
> 95.0th:                 15.6            14.6
> 99.0th:                 27.8            19.0
> 99.5th:                 38.0            21.6
> 99.9th:                 66.2            25.4
>
> Groups: 2
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                 11.2            10.8
> 75.0th:                 13.2            12.4
> 90.0th:                 15.0            15.0
> 95.0th:                 16.6            16.6
> 99.0th:                 22.4            22.8
> 99.5th:                 23.8            27.8
> 99.9th:                 30.2            45.6
>
> Groups: 4
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                 13.8            11.2
> 75.0th:                 16.0            13.2
> 90.0th:                 18.6            15.2
> 95.0th:                 20.4            16.6
> 99.0th:                 28.8            21.6
> 99.5th:                 48.8            25.2
> 99.9th:                900.2            47.0
>
> Groups: 8
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                 17.8            14.4
> 75.0th:                 21.8            17.2
> 90.0th:                 25.4            20.4
> 95.0th:                 28.0            22.4
> 99.0th:                 52.8            28.4
> 99.5th:                156.4            32.6
> 99.9th:               1990.2            52.0
>
> Groups: 16
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                 26.0            21.0
> 75.0th:                 33.0            27.8
> 90.0th:                 39.6            34.4
> 95.0th:                 43.4            38.6
> 99.0th:                 66.8            48.8
> 99.5th:                170.6            60.6
> 99.9th:               3308.8           201.6
>
> Groups: 32
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                 40.8            38.6
> 75.0th:                 55.4            52.8
> 90.0th:                 67.0            64.2
> 95.0th:                 74.2            71.6
> 99.0th:                106.0            90.0
> 99.5th:                323.8           133.0
> 99.9th:               4789.6           459.2
>
> Groups: 64
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                 72.6            68.2
> 75.0th:                103.4            97.8
> 90.0th:                127.6           120.0
> 95.0th:                141.2           132.0
> 99.0th:                343.4           158.4
> 99.5th:               1609.0           180.8
> 99.9th:               6571.2           686.6
>
> Groups: 128
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                147.2           147.2
> 75.0th:                216.4           217.2
> 90.0th:                268.4           268.2
> 95.0th:                300.6           294.8
> 99.0th:               3500.0           638.6
> 99.5th:               5995.2          2522.8
> 99.9th:              10390.4          9451.2
>
> Groups: 256
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:                340.8           333.2
> 75.0th:                551.8           530.2
> 90.0th:               3528.4          1919.2
> 95.0th:               7312.8          5558.4
> 99.0th:              14630.4         12912.0
> 99.5th:              17955.2         14950.4
> 99.9th:              23059.2         20230.4
>
> Groups: 512
>                      6.1_Base        6.1_Latency_Nice
> 50.0th:               1021.8           990.6
> 75.0th:               9545.6         10044.8
> 90.0th:              20972.8         21638.4
> 95.0th:              29971.2         30291.2
> 99.0th:              42355.2         46707.2
> 99.5th:              48550.4         52057.6
> 99.9th:              58867.2         60147.2
>
> Tested-by: Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>

Thanks for the tests and the results.


>
