Message-ID: <234bfc8a-c60d-c375-f681-e4230d8c5a20@linux.alibaba.com>
Date:   Wed, 18 Mar 2020 10:23:49 +0800
From:   王贇 <yun.wang@...ux.alibaba.com>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        "open list:SCHEDULER" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] sched: avoid scale real weight down to zero

Hi Peter, Vincent,

My apologies for missing the case where CONFIG_FAIR_GROUP_SCHED
is disabled; I've replaced MIN_SHARES with its defined value 2UL,
since the macro is unavailable in that configuration. Sorry for
the trouble...

Regards,
Michael Wang

On 2020/3/18 10:15 AM, 王贇 wrote:
> During our testing, we found a case where shares no longer
> work correctly; the cgroup topology is as follows:
> 
>   /sys/fs/cgroup/cpu/A		(shares=102400)
>   /sys/fs/cgroup/cpu/A/B	(shares=2)
>   /sys/fs/cgroup/cpu/A/B/C	(shares=1024)
> 
>   /sys/fs/cgroup/cpu/D		(shares=1024)
>   /sys/fs/cgroup/cpu/D/E	(shares=1024)
>   /sys/fs/cgroup/cpu/D/E/F	(shares=1024)
> 
> The same benchmark runs in groups C and F, no other tasks are
> running, and the benchmark is able to consume all the CPUs.
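> 
> For reference, a sketch that recreates this topology by writing
> the cgroup-v1 cpu.shares files (assuming the cpu controller is
> mounted at /sys/fs/cgroup/cpu; error handling omitted):
> 
>   #include <stdio.h>
>   #include <sys/stat.h>
> 
>   static void set_shares(const char *cg, long shares)
>   {
>   	char path[256];
>   	FILE *f;
> 
>   	mkdir(cg, 0755);	/* create the cgroup directory */
>   	snprintf(path, sizeof(path), "%s/cpu.shares", cg);
>   	f = fopen(path, "w");
>   	if (f) {
>   		fprintf(f, "%ld\n", shares);
>   		fclose(f);
>   	}
>   }
> 
>   int main(void)
>   {
>   	set_shares("/sys/fs/cgroup/cpu/A", 102400);
>   	set_shares("/sys/fs/cgroup/cpu/A/B", 2);
>   	set_shares("/sys/fs/cgroup/cpu/A/B/C", 1024);
>   	set_shares("/sys/fs/cgroup/cpu/D", 1024);
>   	set_shares("/sys/fs/cgroup/cpu/D/E", 1024);
>   	set_shares("/sys/fs/cgroup/cpu/D/E/F", 1024);
>   	return 0;
>   }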
> 
> We expected group C to win more CPU resources, since it can
> enjoy all the shares of group A, but it is F that wins much more.
> 
> The reason is that group B has its shares set to 2; since
> A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
> A->cfs_rq.load.weight becomes very small.
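> 
> To make the numbers concrete, here is a toy user-space model of
> that chain (a 96-CPU machine is an assumption; the shift is the
> kernel's SCHED_FIXEDPOINT_SHIFT):
> 
>   #include <stdio.h>
> 
>   #define SCHED_FIXEDPOINT_SHIFT	10
>   #define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
> 
>   int main(void)
>   {
>   	unsigned long nr_cpus = 96;			/* assumed */
>   	unsigned long B_shares = scale_load(2UL);	/* cpu.shares=2 -> 2048 */
> 
>   	/* A->cfs_rq.load.weight == B->se.load.weight ~= 21,
>   	 * far below one fixed-point unit (1024) */
>   	printf("weight ~= %lu\n", B_shares / nr_cpus);
>   	return 0;
>   }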
> 
> And in calc_group_shares() we calculate shares as:
> 
>   load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>   shares = (tg_shares * load) / tg_weight;
> 
> Since 'cfs_rq->load.weight' is too small, the load becomes 0
> after the scale-down; so although 'tg_shares' is 102400, the
> shares of the se which stands for group A on the root cfs_rq
> become 2.
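> 
> Plugging the numbers above in (same toy model; tg_weight is an
> arbitrary non-zero placeholder):
> 
>   #include <stdio.h>
> 
>   int main(void)
>   {
>   	unsigned long tg_shares = 102400, tg_weight = 1;	/* assumed */
>   	unsigned long load = 21UL >> 10;	/* scale_load_down() -> 0 */
> 
>   	/* prints 0: calc_group_shares() then clamps to MIN_SHARES (2),
>   	 * so A's se weight no longer reflects its 102400 shares */
>   	printf("shares = %lu\n", (tg_shares * load) / tg_weight);
>   	return 0;
>   }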
> 
> Meanwhile, the se of D on the root cfs_rq has a weight far
> bigger than 2, so it wins the battle.
> 
> Thus, when scale_load_down() scales the real weight down to 0,
> it no longer tells the real story: the caller gets the wrong
> information and the calculation becomes buggy.
> 
> This patch adds a check in scale_load_down() so that the real
> weight will be >= MIN_SHARES (2) after scaling; with it applied,
> group C wins as expected, as the sketch below illustrates.
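> 
> The before/after behaviour of the macro can be checked in user
> space (a sketch mirroring the hunk below; SHIFT stands in for
> SCHED_FIXEDPOINT_SHIFT):
> 
>   #include <stdio.h>
> 
>   #define SHIFT	10
> 
>   #define scale_load_down_old(w)	((w) >> SHIFT)
> 
>   static unsigned long scale_load_down_new(unsigned long w)
>   {
>   	if (w) {
>   		w >>= SHIFT;
>   		if (w < 2UL)	/* i.e. max(2UL, w >> SHIFT) */
>   			w = 2UL;
>   	}
>   	return w;
>   }
> 
>   int main(void)
>   {
>   	/* weight 21: the old macro truncates to 0, the patched one keeps 2 */
>   	printf("%lu %lu\n", scale_load_down_old(21UL), scale_load_down_new(21UL));
>   	/* zero stays zero, so an empty cfs_rq is still reported empty */
>   	printf("%lu %lu\n", scale_load_down_old(0UL), scale_load_down_new(0UL));
>   	return 0;
>   }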
> 
> Cc: Ben Segall <bsegall@...gle.com>
> Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
> Suggested-by: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Michael Wang <yun.wang@...ux.alibaba.com>
> ---
> v2:
>   * replace MIN_SHARES with 2UL to cover the CONFIG_FAIR_GROUP_SCHED=n case
> 
>  kernel/sched/sched.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 2a0caf394dd4..9bca26bd60d9 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
>  #ifdef CONFIG_64BIT
>  # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
> -# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
> +# define scale_load_down(w) \
> +({ \
> +	unsigned long __w = (w); \
> +	if (__w) \
> +		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
> +	__w; \
> +})
>  #else
>  # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)		(w)
> 
