Message-ID: <CAKfTPtDrSzET+=G7rHvhDY3491CzGvp3ZqW0cqR8jhC1EvC2mQ@mail.gmail.com>
Date: Wed, 4 Mar 2020 12:55:21 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: 王贇 <yun.wang@...ux.alibaba.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
"open list:SCHEDULER" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too small

On Wed, 4 Mar 2020 at 10:52, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Wed, Mar 04, 2020 at 09:47:34AM +0100, Vincent Guittot wrote:
> > you will add +1 of nice prio for each device
> >
> > should we use instead
> > # define scale_load_down(w) ((w >> SCHED_FIXEDPOINT_SHIFT) ? (w >> SCHED_FIXEDPOINT_SHIFT) : MIN_SHARES)
>
> That's '((w >> SHIFT) ?: MIN_SHARES)', but even that is not quite right.
>
> I think we want something like:
>
> #define scale_load_down(w) \
> ({ unsigned long ___w = (w); \
>    if (___w) \
>        ___w = max(MIN_SHARES, ___w >> SHIFT); \
>    ___w; })
>
> That is, we very much want to retain 0 I'm thinking.
yes, you're right
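
For reference, a minimal userspace sketch contrasting the two candidate
macros above at the boundary values. The constants mirror the kernel's
(SCHED_FIXEDPOINT_SHIFT = 10, MIN_SHARES = 2), but the MAX() helper and
the test harness are illustrative assumptions, not the code that was
eventually merged:

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10	/* kernel fixed-point shift */
#define MIN_SHARES		2UL	/* kernel minimum group shares */

/* Illustrative stand-in for the kernel's max(); not from the thread. */
#define MAX(a, b)		((a) > (b) ? (a) : (b))

/* The ternary form: note that it maps w == 0 to MIN_SHARES as well. */
#define scale_load_down_ternary(w)				\
	(((w) >> SCHED_FIXEDPOINT_SHIFT) ?			\
	 ((w) >> SCHED_FIXEDPOINT_SHIFT) : MIN_SHARES)

/* The statement-expression form (a GNU C extension): nonzero weights
 * clamp to at least MIN_SHARES, but w == 0 stays 0. */
#define scale_load_down_stmt(w)					\
({								\
	unsigned long ___w = (w);				\
	if (___w)						\
		___w = MAX(MIN_SHARES,				\
			   ___w >> SCHED_FIXEDPOINT_SHIFT);	\
	___w;							\
})

int main(void)
{
	unsigned long tests[] = { 0, 1, 500, 1024, 4096 };

	for (size_t i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
		printf("w=%5lu  ternary=%4lu  stmt-expr=%4lu\n",
		       tests[i],
		       scale_load_down_ternary(tests[i]),
		       scale_load_down_stmt(tests[i]));
	return 0;
}

For w == 0 the ternary form returns 2 while the statement-expression
form returns 0, which is exactly the "retain 0" property Peter points
out; for w in [1024, 2047] the ternary form also under-clamps to 1,
where the max() form returns MIN_SHARES.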