Message-ID: <xhsmh5xz0gzyi.mognet@vschneid-thinkpadt14sgen2i.remote.csb>
Date: Wed, 07 Feb 2024 14:34:45 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: Benjamin Segall <bsegall@...gle.com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>, Peter
Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Mel
Gorman <mgorman@...e.de>, Daniel Bristot de Oliveira <bristot@...hat.com>,
Phil Auld <pauld@...hat.com>, Clark Williams <williams@...hat.com>, Tomas
Glozar <tglozar@...hat.com>
Subject: Re: [RFC PATCH v2 3/5] sched/fair: Delete cfs_rq_throttled_loose(),
use cfs_rq->throttle_pending instead
On 06/02/24 13:36, Benjamin Segall wrote:
> Valentin Schneider <vschneid@...hat.com> writes:
>
>> cfs_rq_throttled_loose() does not check if there is runtime remaining in
>> the cfs_b, and thus relies on check_cfs_rq_runtime() being run previously
>> for that to be checked.
>>
>> Cache the throttle attempt in throttle_cfs_rq and reuse that where
>> needed.
>
> The general idea of throttle_pending rather than constantly checking
> runtime_remaining seems reasonable...
>
>>
>> Signed-off-by: Valentin Schneider <vschneid@...hat.com>
>> ---
>> kernel/sched/fair.c | 44 ++++++++++----------------------------------
>> 1 file changed, 10 insertions(+), 34 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 96504be6ee14a..60778afbff207 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5462,7 +5462,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
>> * 5) do not run the "skip" process, if something else is available
>> */
>> static struct sched_entity *
>> -pick_next_entity(struct cfs_rq *cfs_rq, bool throttled)
>> +pick_next_entity(struct cfs_rq *cfs_rq)
>> {
>> #ifdef CONFIG_CFS_BANDWIDTH
>> /*
>> @@ -5473,7 +5473,7 @@ pick_next_entity(struct cfs_rq *cfs_rq, bool throttled)
>> * throttle_cfs_rq.
>> */
>> WARN_ON_ONCE(list_empty(&cfs_rq->kernel_children));
>> - if (throttled && !list_empty(&cfs_rq->kernel_children)) {
>> + if (cfs_rq->throttle_pending && !list_empty(&cfs_rq->kernel_children)) {
>
> ... but we still need to know here if any of our parents are throttled
> as well, ie a "throttled_pending_count", or to keep the "throttled"
> parameter tracking in pnt_fair. (ie just replace the implementation of
> cfs_rq_throttled_loose).
>
Hm, good point. We should be good with reinstating the throttled parameter
and feeding it a ->throttle_pending accumulator.
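Something along these lines (rough sketch, not against any particular
tree; the shape of the pick loop is from memory, only the
throttle_pending accumulation is the point):

  static struct task_struct *
  pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
  {
          struct cfs_rq *cfs_rq = &rq->cfs;
          struct sched_entity *se;
          bool throttled = false;
          ...
          do {
                  /* An about-to-throttle ancestor affects the whole subtree */
                  throttled |= cfs_rq->throttle_pending;
                  se = pick_next_entity(cfs_rq, throttled);
                  cfs_rq = group_cfs_rq(se);
          } while (cfs_rq);
          ...
  }
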
>> /*
>> * TODO: you'd want to factor out pick_eevdf to just take
>> * tasks_timeline, and replace this list with a second rbtree
>> @@ -5791,8 +5791,12 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
>> * We don't actually throttle, though account() will have made sure to
>> * resched us so that we pick into a kernel task.
>> */
>> - if (cfs_rq->h_kernel_running)
>> + if (cfs_rq->h_kernel_running) {
>> + cfs_rq->throttle_pending = true;
>> return false;
>> + }
>> +
>> + cfs_rq->throttle_pending = false;
>
> We also need to clear throttle_pending if quota refills and our
> runtime_remaining goes positive. (And do the appropriate h_* accounting in
> patch 4/5)
Right, so we could move the throttle_pending logic to after
__assign_cfs_rq_runtime(), and then modify distribute_cfs_runtime() to
catch the !throttled but throttle_pending case.
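Roughly (helper name made up, just to illustrate where the clearing
would happen; the h_* accounting from 4/5 would go with it):

  /* Re-evaluate throttle_pending wherever runtime_remaining can change sign */
  static void update_throttle_pending(struct cfs_rq *cfs_rq)
  {
          /* Quota came back: we're no longer about to throttle */
          if (cfs_rq->runtime_remaining > 0)
                  cfs_rq->throttle_pending = false;
          else if (cfs_rq->h_kernel_running)
                  cfs_rq->throttle_pending = true;
  }

i.e. call that after __assign_cfs_rq_runtime() and from
distribute_cfs_runtime() for the !throttled but throttle_pending case.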