Message-ID: <0489ca96-29a3-921e-ca29-00108929a041@linux.alibaba.com>
Date: Thu, 5 Mar 2020 09:08:10 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
"open list:SCHEDULER" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is
too, small
On 2020/3/4 5:52 PM, Peter Zijlstra wrote:
> On Wed, Mar 04, 2020 at 09:47:34AM +0100, Vincent Guittot wrote:
>> you will add +1 of nice prio for each device
>>
>> should we use instead
>> # define scale_load_down(w) \
>> 	((w >> SCHED_FIXEDPOINT_SHIFT) ? (w >> SCHED_FIXEDPOINT_SHIFT) : MIN_SHARES)
>
> That's '((w >> SHIFT) ?: MIN_SHARES)', but even that is not quite right.
>
> I think we want something like:
>
> #define scale_load_down(w) \
> ({ unsigned long ___w = (w); \
>    if (___w) \
>        ___w = max(MIN_SHARES, ___w >> SHIFT); \
>    ___w; })
>
> That is, we very much want to retain 0 I'm thinking.
Should work, I'll give this one a test and send another fix :-)
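
For reference, a quick userspace sketch of that macro (the
SCHED_FIXEDPOINT_SHIFT and MIN_SHARES values below are assumed to
mirror kernel/sched/sched.h, and the kernel's type-checked max() is
replaced by a plain ternary) keeps 0 as 0 and clamps every non-zero
weight to at least MIN_SHARES:

#include <stdio.h>

/* assumed to mirror kernel/sched/sched.h on 64-bit */
#define SCHED_FIXEDPOINT_SHIFT	10
#define MIN_SHARES		2UL

/* stand-in for the kernel's type-checked max() */
#define max(a, b)		((a) > (b) ? (a) : (b))

#define scale_load_down(w)					\
({	unsigned long ___w = (w);				\
	if (___w)						\
		___w = max(MIN_SHARES, ___w >> SCHED_FIXEDPOINT_SHIFT); \
	___w; })

int main(void)
{
	unsigned long tests[] = { 0, 1, 15, 1024, 2UL << 20 };
	unsigned long i;

	for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
		printf("scale_load_down(%lu) = %lu\n",
		       tests[i], scale_load_down(tests[i]));

	return 0;
}

Built with gcc, this prints 0, 2, 2, 2 and 2048 for the weights above,
so a zero weight is preserved while tiny non-zero loads no longer
collapse to 0.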
Regards,
Michael Wang
>