Message-ID: <xm26zfu7ecpu.fsf@google.com>
Date: Fri, 05 Apr 2024 14:11:57 -0700
From: Benjamin Segall <bsegall@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, linux-kernel@...r.kernel.org,
kprateek.nayak@....com, wuyun.abel@...edance.com, tglx@...utronix.de,
efault@....de
Subject: Re: [RFC][PATCH 03/10] sched/fair: Cleanup pick_task_fair() vs
throttle

Peter Zijlstra <peterz@...radead.org> writes:

> Per 54d27365cae8 ("sched/fair: Prevent throttling in early
> pick_next_task_fair()") the reason check_cfs_rq_runtime() is under the
> 'if (curr)' check is to ensure the (downward) traversal does not
> result in an empty cfs_rq.
>
> But then the pick_task_fair() 'copy' of all this made it restart the
> traversal anyway, so that seems to solve the issue too.

Yeah, putting the check_cfs_rq_runtime() inside of that condition was
specific to the exact pick_next_task_fair() code, and to the specific
places where nr_running was and was not checked. pick_task_fair()
doesn't care about any of that, and if put_prev instead manages to
throttle the picked task, we can still successfully switch to it for a
moment.

Reviewed-by: Ben Segall <bsegall@...gle.com>
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> kernel/sched/fair.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8435,11 +8435,11 @@ static struct task_struct *pick_task_fai
>  				update_curr(cfs_rq);
>  			else
>  				curr = NULL;
> -
> -			if (unlikely(check_cfs_rq_runtime(cfs_rq)))
> -				goto again;
>  		}
>
> +		if (unlikely(check_cfs_rq_runtime(cfs_rq)))
> +			goto again;
> +
>  		se = pick_next_entity(cfs_rq);
>  		cfs_rq = group_cfs_rq(se);
>  	} while (cfs_rq);
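
For reference, here is roughly what the whole pick loop looks like with
this hunk applied. The lines outside the hunk (in particular the
nr_running guard behind the 'again:' label) are reconstructed here for
illustration and may not match the tree verbatim:

static struct task_struct *pick_task_fair(struct rq *rq)
{
	struct sched_entity *se;
	struct cfs_rq *cfs_rq;

again:
	cfs_rq = &rq->cfs;
	if (!cfs_rq->nr_running)
		return NULL;

	do {
		/* When picking for a remote rq we may not have done put_prev_entity(). */
		struct sched_entity *curr = cfs_rq->curr;

		if (curr) {
			if (curr->on_rq)
				update_curr(cfs_rq);
			else
				curr = NULL;
		}

		/*
		 * Throttling here can empty this cfs_rq, but the restart
		 * goes back through the nr_running check above, so the
		 * downward traversal never lands in an empty cfs_rq.
		 */
		if (unlikely(check_cfs_rq_runtime(cfs_rq)))
			goto again;

		se = pick_next_entity(cfs_rq);
		cfs_rq = group_cfs_rq(se);
	} while (cfs_rq);

	return task_of(se);
}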