Message-ID: <0e153dd25900af70f91e4a73f960320e6daf3c6a.camel@gmx.de>
Date: Tue, 22 Aug 2023 08:09:49 +0200
From: Mike Galbraith <efault@....de>
To: K Prateek Nayak <kprateek.nayak@....com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, linux-tip-commits@...r.kernel.org,
x86@...nel.org, Chen Yu <yu.c.chen@...el.com>,
Gautham Shenoy <gautham.shenoy@....com>
Subject: Re: [tip: sched/core] sched/eevdf: Curb wakeup-preemption
On Tue, 2023-08-22 at 08:33 +0530, K Prateek Nayak wrote:
> Hello Mike,
Greetings!
> > FWIW, there are more tbench shards lying behind EEVDF than in front.
> >
> > tbench 8 on old i7-4790 box
> > 4.4.302 4024
> > 6.4.11 3668
> > 6.4.11-eevdf 3522
> >
>
> I agree, but on servers, tbench has been useful to identify a variety of
> issues [1][2][3] and I believe it is better to pick some shards up than
> leave them lying around for others to step on :)
Absolutely, but in this case it isn't due to various overheads wiggling
about and/or bitrot; everything is identical except the scheduler, and
even its overhead is essentially the same:
taskset -c 3 pipe-test
6.4.11 1.420033 usecs/loop -- avg 1.420033 1408.4 KHz
6.4.11-eevdf 1.413024 usecs/loop -- avg 1.413024 1415.4 KHz
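
For context, pipe-test is essentially a two-task pipe ping-pong, so
usecs/loop above is the round-trip cost of two context switches on the
pinned CPU. A minimal sketch of the idea (not the actual tool; the loop
count and output format below are just rough approximations of it):

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define LOOPS 100000

int main(void)
{
	int ping[2], pong[2];
	char c = 0;
	struct timeval t0, t1;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {
		/* child: echo every byte straight back */
		close(ping[1]);
		close(pong[0]);
		while (read(ping[0], &c, 1) == 1)
			write(pong[1], &c, 1);
		_exit(0);
	}

	close(ping[0]);
	close(pong[1]);

	gettimeofday(&t0, NULL);
	for (int i = 0; i < LOOPS; i++) {
		write(ping[1], &c, 1);	/* wake the child ...          */
		read(pong[0], &c, 1);	/* ... and sleep until it answers */
	}
	gettimeofday(&t1, NULL);

	close(ping[1]);		/* EOF, child exits */
	wait(NULL);

	double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
	/* two context switches per loop, reported in KHz */
	printf("%.6f usecs/loop, %.1f KHz\n",
	       us / LOOPS, 2.0 * LOOPS / us * 1000.0);
	return 0;
}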
Methinks these shards are due to tbench simply being one of those
things that happens to like the CFS notion of short-term fairness a bit
better than the EEVDF notion, i.e. they are inevitable fallout tied to
the very thing that makes EEVDF service less spiky than CFS, and thus
will be difficult to sweep up.
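
To make that fairness difference a bit more concrete: EEVDF will only
pick entities that are still eligible, i.e. whose vruntime is not
already ahead of the load-weighted average vruntime of the runqueue,
whereas CFS wakeup preemption compared vruntimes against a wakeup
granularity fudge. A toy illustration of the eligibility test (not the
actual fair.c code; the struct and numbers are made up):

#include <stdio.h>

struct entity {
	unsigned long weight;	/* load weight */
	long long vruntime;	/* weighted virtual runtime */
};

/* eligible: vruntime at or behind the load-weighted average vruntime */
static int eligible(const struct entity *se, const struct entity *rq, int nr)
{
	long long avg = 0;		/* sum of weight * (vruntime - v0) */
	unsigned long total = 0;	/* sum of weights */
	long long v0 = rq[0].vruntime;	/* reference to keep the sums small */

	for (int i = 0; i < nr; i++) {
		avg += (long long)rq[i].weight * (rq[i].vruntime - v0);
		total += rq[i].weight;
	}
	/* eligible iff (vruntime - v0) * total <= avg */
	return (se->vruntime - v0) * (long long)total <= avg;
}

int main(void)
{
	struct entity rq[] = {
		{ .weight = 1024, .vruntime = 100 },	/* behind the pack */
		{ .weight = 1024, .vruntime = 140 },	/* already ahead   */
	};

	/* the entity that is behind is eligible, the one ahead is not */
	printf("se0 eligible: %d\n", eligible(&rq[0], rq, 2));
	printf("se1 eligible: %d\n", eligible(&rq[1], rq, 2));
	return 0;
}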
Too bad I didn't save Peter's test hack to make EEVDF use the same
notion of fairness (not a keeper), as I think that would likely prove
it.
-Mike