Message-ID: <f0859f35-39ec-e5dc-b77a-79162516de31@amd.com>
Date: Tue, 22 Aug 2023 08:33:36 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Mike Galbraith <efault@....de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, linux-tip-commits@...r.kernel.org,
x86@...nel.org, Chen Yu <yu.c.chen@...el.com>,
Gautham Shenoy <gautham.shenoy@....com>
Subject: Re: [tip: sched/core] sched/eevdf: Curb wakeup-preemption

Hello Mike,

On 8/21/2023 9:00 PM, Mike Galbraith wrote:
> On Mon, 2023-08-21 at 16:09 +0530, K Prateek Nayak wrote:
>> Hello Peter,
>>
>> Sorry for being late to the party, but a couple of benchmarks are
>> unhappy (very!) with EEVDF, even with this optimization. I'll leave
>> the results of testing on a dual-socket 3rd Generation EPYC system
>> (2 x 64C/128T) running in NPS1 mode below.
>>
>> tl;dr
>>
>> - Hackbench with medium load, tbench when overloaded, and DeathStarBench
>> are not a fan of EEVDF so far :(
>
> FWIW, there are more tbench shards lying behind EEVDF than in front.
>
> tbench 8 on old i7-4790 box
> 4.4.302 4024
> 6.4.11 3668
> 6.4.11-eevdf 3522
>
I agree, but on servers tbench has been useful for identifying a
variety of issues [1][2][3], and I believe it is better to pick some
shards up than to leave them lying around for others to step on :)

Casting tbench aside, there are still more workloads that regress, and
it'll be good to understand which of their properties don't sit well
with EEVDF. (A rough sketch of the wakeup-preemption logic in question
follows the links below.)

[1] https://lore.kernel.org/lkml/c50bdbfe-02ce-c1bc-c761-c95f8e216ca0@amd.com/
[2] https://lore.kernel.org/lkml/20220921063638.2489-1-kprateek.nayak@amd.com/
[3] https://lore.kernel.org/lkml/80956e8f-761e-b74-1c7a-3966f9e8d934@linutronix.de/
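
To keep the discussion concrete, below is how I read the curb being
benchmarked here. This is an illustrative sketch only -- the names
(struct entity, entity_eligible(), wakeup_preempt(), run_to_parity)
are simplified stand-ins, not the actual kernel code:

struct entity {
	unsigned long long vruntime;   /* virtual runtime consumed so far */
	unsigned long long vdeadline;  /* virtual deadline of current slice */
};

/*
 * Under EEVDF an entity is eligible while its vruntime has not passed
 * the load-weighted average vruntime of the runqueue (the 0-lag point).
 */
static int entity_eligible(const struct entity *se,
			   unsigned long long avg_vruntime)
{
	return se->vruntime <= avg_vruntime;
}

/*
 * Plain EEVDF preempts on wakeup whenever the wakee has the earlier
 * virtual deadline.  The curb additionally lets current keep running
 * while it is still eligible, so a stream of short-sleeper wakeups
 * cannot chop its slice arbitrarily fine.
 */
static int wakeup_preempt(const struct entity *curr,
			  const struct entity *wakee,
			  unsigned long long avg_vruntime,
			  int run_to_parity)
{
	if (run_to_parity && entity_eligible(curr, avg_vruntime))
		return 0;	/* let current run to its 0-lag point */

	return wakee->vdeadline < curr->vdeadline;
}

If that reading is right, workloads like tbench under overload, whose
throughput depends on rapid wakeup/preempt cycles, are exactly the ones
whose behaviour shifts when preemption is deferred to the 0-lag point.
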
> I went a-hunting once, but it didn't go well. There were a couple of
> identifiable sched-related dips/recoveries, but the overall result was
> a useless downward trending mess.
>
> -Mike

--
Thanks and Regards,
Prateek