Message-ID: <5f0824ea-d7b6-61f5-db89-1727bae83595@linux.vnet.ibm.com>
Date: Thu, 10 Aug 2023 21:14:06 +0530
From: Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>
To: Vishal Chourasia <vishalc@...ux.ibm.com>
Cc: peterz@...radead.org, vincent.guittot@...aro.org,
srikar@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
mingo@...hat.com, dietmar.eggemann@....com, mgorman@...e.de
Subject: Re: [RFC PATCH] sched/fair: Skip idle CPU search on busy system
On 8/10/23 12:14 AM, Vishal Chourasia wrote:
> On Wed, Jul 26, 2023 at 03:06:12PM +0530, Shrikanth Hegde wrote:
> >> When the system is fully busy, there will not be any idle CPUs.
>>
> Tested this patchset on top of v6.4
[...]
> 5 runs of stress-ng (100% load) on a system with 16 CPUs, spawning 23 threads for
> 60 minutes.
>
> stress-ng: 16 CPUs, 23 threads, 60 mins
>
> - 6.4.0
>
> | completion time (sec) | user (sec) | sys (sec)  |
> |-----------------------+------------+------------|
> | 3600.05               | 57582.44   | 0.70       |
> | 3600.10               | 57597.07   | 0.68       |
> | 3600.05               | 57596.65   | 0.47       |
> | 3600.04               | 57596.36   | 0.71       |
> | 3600.06               | 57595.32   | 0.42       |
> | 3600.06               | 57593.568  | 0.596      | average
> | 0.046904158           | 12.508392  | 0.27878307 | stddev
>
> - 6.4.0+ (with patch)
>
> | completion time (sec) | user (sec) | sys (sec)   |
> |-----------------------+------------+-------------|
> | 3600.04               | 57596.58   | 0.50        |
> | 3600.04               | 57595.19   | 0.48        |
> | 3600.05               | 57597.39   | 0.49        |
> | 3600.04               | 57596.64   | 0.53        |
> | 3600.04               | 57595.94   | 0.43        |
> | 3600.042              | 57596.348  | 0.486       | average
> | 0.0089442719          | 1.6529610  | 0.072938330 | stddev
>
> The average system time is slightly lower in the patched version (0.486 seconds)
> than in the 6.4.0 version (0.596 seconds).
> The standard deviation of the system time is also lower in the patched version
> (0.0729 seconds) than in the 6.4.0 version (0.2788 seconds), suggesting more
> consistent system time results with the patch.
>
> vishal.c
Thank you very much, Vishal, for trying this out.
Meanwhile, I have yet to try the suggestion given by Chen. Let me see if that works okay.
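
For completeness, the invocation I assume was used for the runs above would be
something along these lines (the exact stress-ng options are my guess, not taken
from the report):

    # assumed invocation -- options are a guess, adjust to the actual setup
    stress-ng --cpu 23 --cpu-load 100 --timeout 60m --times

i.e. 23 CPU stressors at 100% load on the 16-CPU system for 60 minutes, with
--times printing the cumulative user/system time that the tables report.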