Message-ID: <f2bd581e-3dc5-2630-7ba9-2241f2ea3360@linux.alibaba.com>
Date: Tue, 10 Mar 2020 16:15:19 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ben Segall <bsegall@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Mel Gorman <mgorman@...e.de>,
"open list:SCHEDULER" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too small

On 2020/3/10 3:57 PM, Vincent Guittot wrote:
[snip]
>>> That being said, having a min of 2 for scale_load_down will enable us
>>> to have tg->load_avg != 0, so tg_weight != 0, and each sched group
>>> will not get the full shares. But it will make those groups completely
>>> fair anyway.
>>> The best solution would be not to scale down the weight, but that's a
>>> bigger change.
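
For reference, a minimal sketch of what that min-of-2 clamp could look
like against the existing fixed-point macros in kernel/sched/sched.h
(illustrative only, 64-bit case; only scale_load_down changes):

#define SCHED_FIXEDPOINT_SHIFT	10

#define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)

/* never let a non-zero weight scale down to zero */
#define scale_load_down(w)					\
({								\
	unsigned long __w = (w);				\
	if (__w)						\
		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT);	\
	__w;							\
})
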
>>
>> Does that mean changing all of the 'load.weight' related calculations
>> to preserve the scaled weight?
>
> yes, to make sure those calculations still fit in their variables
>
>>
>> I suppose a u64 'cfs_rq.load' is capable of holding the scaled-up load;
>> changing all those places could be annoying but still fine.
>
> it's fine, but the max number of runnable tasks at the max priority on
> a cfs_rq will decrease from around 4 billion to "only" 4 million.
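
To make the arithmetic explicit: those numbers line up if the limiting
field is the u64 PELT load_sum, which can grow to (sum of weights) *
LOAD_AVG_MAX; assuming that is indeed the limit, a quick userspace toy
shows the 2^10 factor:

#include <stdio.h>

int main(void)
{
	unsigned long long load_avg_max = 47742;  /* LOAD_AVG_MAX */
	unsigned long long w_nice_m20   = 88761;  /* sched_prio_to_weight[0] */
	unsigned int shift              = 10;     /* SCHED_FIXEDPOINT_SHIFT */

	/* largest total weight a u64 load_sum can carry */
	unsigned long long max_weight_sum = ~0ULL / load_avg_max;

	printf("scaled-down weights: ~%llu tasks\n",
	       max_weight_sum / w_nice_m20);		/* ~4.3 billion */
	printf("scaled-up weights:   ~%llu tasks\n",
	       max_weight_sum / (w_nice_m20 << shift));	/* ~4.2 million */
	return 0;
}
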
>
>>
>> However, I'm not quite sure about the benefit: how much more precision
>> will we gain, and does that really matter? It would be better to have
>> some testing to demonstrate it.
>
> it will ensure better fairness across a larger range of share values. I
> agree that we can wonder if it's worth the effort for those low share
> values. It would be interesting to know who uses such low values, and
> for what purpose.

AFAIK, k8s uses share 2 for the Best Effort type of Pods, but that's
just because they want them to run only when no other Pods want to run;
they won't be dealing with multiple shares under 1024 or needing good
precision, I suppose.
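
For concreteness, here is what such a share of 2 turns into inside the
kernel, and why any per-entity slice below 1024 currently vanishes in
scale_load_down (userspace demo; the macros mirror the 64-bit
definitions in kernel/sched/sched.h, and the 512 'slice' is a
hypothetical per-cpu fraction):

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define scale_load(w)		((unsigned long)(w) << SCHED_FIXEDPOINT_SHIFT)
#define scale_load_down(w)	((unsigned long)(w) >> SCHED_FIXEDPOINT_SHIFT)

int main(void)
{
	unsigned long shares = 2;	/* cpu.shares as written by k8s */
	unsigned long tg_shares = scale_load(shares);	/* 2048 in tg->shares */
	unsigned long se_slice = 512;	/* hypothetical per-cpu fraction */

	printf("tg->shares = %lu\n", tg_shares);
	printf("scale_load_down(%lu) = %lu  /* weight vanishes */\n",
	       se_slice, scale_load_down(se_slice));
	return 0;
}
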
Regards,
Michael Wang
>
> Regards,
> Vincent
>>
>> Regards,
>> Michael Wang