Message-ID: <b596cb41-9c62-4134-a76d-6139ae859b07@126.com>
Date: Tue, 11 Jun 2024 19:39:42 +0800
From: Honglei Wang <jameshongleiwang@....com>
To: Chunxin Zang <spring.cxz@...il.com>
Cc: dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
 mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
 linux-kernel@...r.kernel.org, Chen Yu <yu.c.chen@...el.com>,
 yangchen11@...iang.com, Jerry Zhou <zhouchunhua@...iang.com>,
 Chunxin Zang <zangchunxin@...iang.com>, mingo@...hat.com,
 Peter Zijlstra <peterz@...radead.org>, juri.lelli@...hat.com,
 vincent.guittot@...aro.org
Subject: Re: [PATCH] sched/fair: Reschedule the cfs_rq when current is
 ineligible



On 2024/6/6 20:39, Chunxin Zang wrote:

> 
> Hi Honglei,
> 
> Recently, I tested multiple cgroups with version 2 of the patch. Version 2 keeps
> the RUN_TO_PARITY feature intact, so the improvement is somewhat larger under
> NO_RUN_TO_PARITY.
> https://lore.kernel.org/lkml/20240529141806.16029-1-spring.cxz@gmail.com/T/
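> 
> For reference, the feature can be toggled through the sched_feat debugfs
> interface, roughly like this (an illustrative sketch, assuming debugfs is
> mounted at /sys/kernel/debug):
> 
>     /* Sketch: disable RUN_TO_PARITY; writing "RUN_TO_PARITY" back
>      * re-enables it. */
>     #include <fcntl.h>
>     #include <string.h>
>     #include <unistd.h>
> 
>     int main(void)
>     {
>             const char *cmd = "NO_RUN_TO_PARITY";
>             int fd = open("/sys/kernel/debug/sched/features", O_WRONLY);
> 
>             if (fd < 0)
>                     return 1;
>             write(fd, cmd, strlen(cmd));
>             close(fd);
>             return 0;
>     }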
> 
> The testing environment still used 4 cores, 4 groups of hackbench (160 processes),
> and 1 cyclictest. If too many cgroups or processes are created on the 4 cores, the
> test results fluctuate severely, making it difficult to discern any differences.
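> 
> Roughly, the workload was driven like this (an illustrative sketch; the
> exact flags are assumptions, not necessarily the ones I ran):
> 
>     /* Sketch: 4 hackbench groups (40 tasks each by default, so
>      * 4 x 40 = 160 processes) plus one cyclictest measurement
>      * thread. */
>     #include <stdlib.h>
> 
>     int main(void)
>     {
>             system("hackbench -g 4 -l 100000 &");
>             return system("cyclictest -t 1 -p 80 -i 1000 -D 60");
>     }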
> 
> The cgroups were organized in two forms:
> 1. Flat: 10 sub-cgroups were created at the same level, each with an
>    average of 16 processes.
> 
>                                     EEVDF   PATCH   EEVDF-NO_PARITY   PATCH-NO_PARITY
>     LNICE(-19)  # Avg Latencies:    00572   00347   00502             00218
>     LNICE(0)    # Avg Latencies:    02262   02225   02442             02321
>     LNICE(19)   # Avg Latencies:    03132   03422   03333             03489
> 
> 2. In the form of a binary tree: 8 leaf cgroups, with a depth of 4. On
>    average, each cgroup had 20 processes (a sketch of building this tree
>    follows the table below).
> 
>                                     EEVDF   PATCH   EEVDF-NO_PARITY   PATCH-NO_PARITY
>     LNICE(-19)  # Avg Latencies:    00601   00592   00510             00400
>     LNICE(0)    # Avg Latencies:    02703   02170   02381             02126
>     LNICE(19)   # Avg Latencies:    04773   03387   04478             03611
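> 
> As mentioned above, the tree was built along these lines (an illustrative
> sketch for cgroup v2; the /sys/fs/cgroup/test root is hypothetical):
> 
>     /* Sketch: root plus 3 levels of pairs -> depth 4, 8 leaves.
>      * Hackbench tasks are then attached to the leaves' cgroup.procs. */
>     #include <stdio.h>
>     #include <sys/stat.h>
> 
>     static void build(const char *path, int depth)
>     {
>             char child[256];
> 
>             mkdir(path, 0755);
>             if (depth == 0)
>                     return;         /* leaf: ~20 tasks end up here */
>             for (int i = 0; i < 2; i++) {
>                     snprintf(child, sizeof(child), "%s/c%d", path, i);
>                     build(child, depth - 1);
>             }
>     }
> 
>     int main(void)
>     {
>             build("/sys/fs/cgroup/test", 3);
>             return 0;
>     }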
> 
> Based on the test results, there is a noticeable improvement in scheduling latency after
> applying the patch in scenarios involving multiple cgroups.
> 
> 
> thanks
> Chunxin
> 
Hi Chunxin,

Thanks for sharing the test results. The patch looks helpful, at least in 
this cgroups scenario. I'm still curious which of the two changes helps 
more in your test, as I mentioned in the very first mail of this thread.
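
Just so we're looking at the same thing: the eligibility check under
discussion is roughly of this shape (a sketch against kernel/sched/fair.c
using the existing entity_eligible()/rq_of()/resched_curr() helpers, not
the actual patch):

    /* Sketch: ask for a resched once the running entity is no longer
     * eligible under EEVDF, i.e. its vruntime has passed the weighted
     * average. */
    static void resched_if_ineligible(struct cfs_rq *cfs_rq,
                                      struct sched_entity *curr)
    {
            if (cfs_rq->nr_running > 1 && !entity_eligible(cfs_rq, curr))
                    resched_curr(rq_of(cfs_rq));
    }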

Thanks,
Honglei

