Message-ID: <YNDFFkh2Dn0hMqS8@hirez.programming.kicks-ass.net>
Date: Mon, 21 Jun 2021 18:57:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
Odin Ugedal <odin@...d.al>,
Vincent Guittot <vincent.guittot@...aro.org>,
Sasha Levin <sashal@...nel.org>
Subject: Re: [PATCH 5.10 095/146] sched/fair: Correctly insert cfs_rqs to
list on unthrottle
On Mon, Jun 21, 2021 at 06:15:25PM +0200, Greg Kroah-Hartman wrote:
> From: Odin Ugedal <odin@...d.al>
>
> [ Upstream commit a7b359fc6a37faaf472125867c8dc5a068c90982 ]
>
> Fix an issue where fairness is decreased since cfs_rq's can end up not
> being decayed properly. For two sibling control groups with the same
> priority, this can often lead to a load ratio of 99/1 (!!).
>
> This happens because when a cfs_rq is throttled, all the descendant
> cfs_rq's will be removed from the leaf list. When the initial cfs_rq
> is unthrottled, it will currently only re-add descendant cfs_rq's if
> they have one or more entities enqueued. This is not a perfect
> heuristic.
>
> Instead, insert all cfs_rq's that contain one or more enqueued
> entities, or whose load is not completely decayed (a sketch of this
> condition follows at the end of this message).
>
> Without the fix, this can often lead to situations like the following
> for equally weighted control groups:
>
> $ ps u -C stress
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 10009 88.8 0.0 3676 100 pts/1 R+ 11:04 0:13 stress --cpu 1
> root 10023 3.0 0.0 3676 104 pts/1 R+ 11:04 0:00 stress --cpu 1
>
> Fixes: 31bc6aeaab1d ("sched/fair: Optimize update_blocked_averages()")
> [vingo: !SMP build fix]
> Signed-off-by: Odin Ugedal <odin@...d.al>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
> Link: https://lore.kernel.org/r/20210612112815.61678-1-odin@uged.al
> Signed-off-by: Sasha Levin <sashal@...nel.org>
This one is currently known to cause some LTP failures; fixes are being
discussed, so please hold off on it for now.
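
For context, the condition described in the quoted commit message amounts
roughly to the following in tg_unthrottle_up() in kernel/sched/fair.c. This
is a minimal sketch based on the commit message, not the verbatim upstream
diff; the clock accounting in that path is omitted, and the helper names
(cfs_rq_is_decayed(), list_add_leaf_cfs_rq()) are assumed from fair.c.

/*
 * Sketch of the unthrottle-path change: when the last throttle level is
 * released, re-add the cfs_rq to the leaf list not only when it has
 * runnable entities, but also when it still carries blocked load that
 * needs to be decayed.
 */
static int tg_unthrottle_up(struct task_group *tg, void *data)
{
	struct rq *rq = data;
	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

	cfs_rq->throttle_count--;
	if (!cfs_rq->throttle_count) {
		/*
		 * The old heuristic only re-added cfs_rq's with enqueued
		 * entities, leaving idle but not-yet-decayed cfs_rq's off
		 * the leaf list, so their blocked load was never decayed.
		 */
		if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
			list_add_leaf_cfs_rq(cfs_rq);
	}

	return 0;
}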