Message-ID: <20200923154744.GL3117@suse.de>
Date: Wed, 23 Sep 2020 16:47:44 +0100
From: Mel Gorman <mgorman@...e.de>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
linux-kernel@...r.kernel.org, valentin.schneider@....com,
pauld@...hat.com, hdanton@...a.com
Subject: Re: [PATCH 0/4 v2] sched/fair: Improve fairness between cfs tasks
On Mon, Sep 21, 2020 at 09:24:20AM +0200, Vincent Guittot wrote:
> When the system doesn't have enough cycles for all tasks, the scheduler
> must ensure a fair split of those CPU cycles between CFS tasks. The
> fairness of some use cases can't be achieved with a static distribution
> of the tasks on the system and requires periodic rebalancing, but this
> dynamic behavior is not always optimal and the fair distribution of
> CPU time is not always ensured.
>
FWIW, nothing bad fell out of the series from a battery of scheduler
tests across various machines. Headline-wise, EPYC 1 looked very bad for
hackbench, but a detailed look showed that it was great until the very
highest group count, where it looked bad. Otherwise EPYC 1 looked good,
as did EPYC 2. Various generations of Intel boxes showed marginal gains
or losses, nothing dramatic. will-it-scale for various test loads looked
fractionally worse across some machines, which may show up in the 0-day
bot, but it probably will be marginal.
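
For anyone who wants to poke at the hackbench result, below is a rough
sketch of the kind of group-count sweep described above. It's
illustrative only: it assumes hackbench is installed (e.g. from the
rt-tests package), and the group and loop counts are my own picks, not
the ones used in these tests.

#!/usr/bin/env python3
# Illustrative sketch: sweep hackbench group counts to check whether
# only the highest group count regresses. Assumes hackbench is on PATH;
# the group and loop counts below are arbitrary examples.
import re
import subprocess

GROUPS = [1, 4, 16, 64, 256]   # low to very high group counts
LOOPS = 10000                  # loop count passed via -l

for g in GROUPS:
    # hackbench reports "Time: <seconds>" on completion
    out = subprocess.run(
        ["hackbench", "-g", str(g), "-l", str(LOOPS)],
        capture_output=True, text=True, check=True,
    ).stdout
    m = re.search(r"Time:\s*([0-9.]+)", out)
    print(f"groups={g:4d}  time={m.group(1) if m else 'unparsed'}s")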
As the patches partly come down to magic numbers that you could reason
about either way, I'm not going to say the series is universally better.
However, it's slightly better in normal cases, your tests indicate it's
good for a specific corner case, and it does not look like anything
obvious falls apart.
--
Mel Gorman
SUSE Labs