Message-ID: <CAKfTPtCRM8Eo+yrfFkjYPnJziXPYSfYLtcn23pEoiBjdz9WAZQ@mail.gmail.com>
Date: Tue, 21 Jan 2025 10:53:42 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, linux-kernel@...r.kernel.org,
Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>, "Gautham R. Shenoy" <gautham.shenoy@....com>,
Swapnil Sapkal <swapnil.sapkal@....com>
Subject: Re: [PATCH] sched/fair: Fix inaccurate h_nr_runnable accounting with
delayed dequeue
On Tue, 21 Jan 2025 at 09:09, K Prateek Nayak <kprateek.nayak@....com> wrote:
>
> Hello Vincent,
>
> On 1/18/2025 2:00 PM, Vincent Guittot wrote:
> > On Fri, 17 Jan 2025 at 16:59, K Prateek Nayak <kprateek.nayak@....com> wrote:
> >>
> >> Hello Vincent,
> >>
> >> On 1/17/2025 6:55 PM, Vincent Guittot wrote:
> >>> Hi Prateek,
> >>>
> >>> On Fri, 17 Jan 2025 at 11:59, K Prateek Nayak <kprateek.nayak@....com> wrote:
> >>>>
> >>>> set_delayed() adjusts cfs_rq->h_nr_runnable for the hierarchy when an
> >>>> entity is delayed irrespective of whether the entity corresponds to a
> >>>> task or a cfs_rq.
> >>>>
> >>>> Consider the following scenario:
> >>>>
> >>>>        root
> >>>>       /    \
> >>>>      A      B (*) delayed since B is no longer eligible on root
> >>>>      |      |
> >>>>   Task0   Task1 <--- dequeue_task_fair() - task blocks
> >>>>
> >>>> When Task1 blocks (dequeue_entity() for the task's se returns true),
> >>>> dequeue_entities() will continue adjusting cfs_rq->h_nr_* for the
> >>>> hierarchy of Task1. However, when the sched_entity corresponding to
> >>>> cfs_rq B is delayed, set_delayed() adjusts h_nr_runnable for the
> >>>> hierarchy as well, leading to both the h_nr_* update loop in
> >>>> dequeue_entities() and set_delayed() decrementing h_nr_runnable for
> >>>> the dequeue of the same task.
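> >>>>
> >>>> For reference, this is roughly what set_delayed() does today (a
> >>>> trimmed sketch, not the exact code):
> >>>>
> >>>>     static void set_delayed(struct sched_entity *se)
> >>>>     {
> >>>>             se->sched_delayed = 1;
> >>>>
> >>>>             /* Walks all the way up, even when @se is a cfs_rq's se */
> >>>>             for_each_sched_entity(se) {
> >>>>                     struct cfs_rq *cfs_rq = cfs_rq_of(se);
> >>>>
> >>>>                     cfs_rq->h_nr_runnable--;
> >>>>                     if (cfs_rq_throttled(cfs_rq))
> >>>>                             break;
> >>>>             }
> >>>>     }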
> >>>>
> >>>> A SCHED_WARN_ON() inspecting h_nr_runnable right after its update in
> >>>> dequeue_entities(), like the one below:
> >>>>
> >>>>     cfs_rq->h_nr_runnable -= h_nr_runnable;
> >>>>     SCHED_WARN_ON(((int) cfs_rq->h_nr_runnable) < 0);
> >>>>
> >>>> is consistently tripped when running wakeup-intensive workloads like
> >>>> hackbench in a cgroup.
> >>>>
> >>>> This error is self-correcting since cfs_rqs are per-CPU and cannot
> >>>> migrate. The entity is either picked for a full dequeue or is requeued
> >>>> when a task wakes up below it. Both those paths call clear_delayed(),
> >>>> which again increments h_nr_runnable for the hierarchy without
> >>>> considering whether the entity corresponds to a task or not.
> >>>>
> >>>> h_nr_runnable will eventually reflect the correct value; however, in
> >>>> the interim, the incorrect values can still influence the PELT
> >>>> calculations that use se->runnable_weight or cfs_rq->h_nr_runnable.
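> >>>>
> >>>> (For reference, this is roughly where those counters feed into PELT;
> >>>> a trimmed sketch of __update_load_avg_cfs_rq() and se_update_runnable():
> >>>>
> >>>>     /* cfs_rq's runnable contribution comes straight from h_nr_runnable */
> >>>>     ___update_load_sum(now, &cfs_rq->avg,
> >>>>                        scale_load_down(cfs_rq->load.weight),
> >>>>                        cfs_rq->h_nr_runnable,
> >>>>                        cfs_rq->curr != NULL);
> >>>>
> >>>>     /* and a group se's runnable_weight mirrors its cfs_rq's count */
> >>>>     if (!entity_is_task(se))
> >>>>             se->runnable_weight = se->my_q->h_nr_runnable;
> >>>>
> >>>> so a transiently inflated or negative count directly skews runnable_avg.)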
> >>>>
> >>>> Since only delayed tasks take the early-return path in
> >>>> dequeue_entities() and enqueue_task_fair(), skipping the h_nr_*
> >>>> update loops, adjust h_nr_runnable in {set,clear}_delayed() only when
> >>>> the delayed entity is a task.
> >>>>
> >>>> For entities corresponding to cfs_rq, the h_nr_* update loop in the
> >>>> caller will do the right thing.
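> >>>>
> >>>> Concretely (a sketch of the idea, not the literal hunk from the
> >>>> patch), both helpers gain an early bail-out before their
> >>>> for_each_sched_entity() walk, along the lines of:
> >>>>
> >>>>     /*
> >>>>      * A delayed cfs_rq entity carries no task of its own; the blocked
> >>>>      * task is already accounted by the caller's h_nr_* update loop.
> >>>>      */
> >>>>     if (!entity_is_task(se))
> >>>>             return;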
> >>>>
> >>>> Fixes: 76f2f783294d ("sched/eevdf: More PELT vs DELAYED_DEQUEUE")
> >>>
> >>> You probably mean c2a295bffeaf ("sched/fair: Add new cfs_rq.h_nr_runnable")
> >>
> >> You are right! I had done a git blame on set_delayed() and landed at
> >> commit 76f2f783294d, but it should be c2a295bffeaf
> >> ("sched/fair: Add new cfs_rq.h_nr_runnable"), where the accounting was
> >> inverted to track runnable tasks. Thank you for pointing that out.
> >>
> >>> Before, we were tracking the opposite quantity, h_nr_delayed. Did you
> >>> see the problem only on tip/sched/core, or also before the rework
> >>> which added h_nr_runnable and removed h_nr_delayed?
> >>
> >> The problem is on tip:sched/core. I did not encounter any anomalies on
> >> 76f2f783294d ("sched/eevdf: More PELT vs DELAYED_DEQUEUE").
> >>
> >> "h_nr_delayed" was only adjusted in dequeue_entities() for "!seep &&
> >> !delayed" which would imply migration or a save + restore type operation
> >> and the whole "h_nr_delayed" adjusting was contained in
> >> {set,clear}_delayed() for delayed dequeue, finish delayed dequeue, and
> >> requeue.
>
> So I was looking at it wrong when I was investigating commit
> 76f2f783294d ("sched/eevdf: More PELT vs DELAYED_DEQUEUE").
> h_nr_delayed can never be larger than h_nr_running (h_nr_queued
> upstream) since the number of delayed tasks can never exceed the
> number of tasks queued below the given cfs_rq, but with the following:
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 97ee48c8bf5e..8e713f241483 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7145,6 +7145,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
>                  cfs_rq->h_nr_running -= h_nr_running;
>                  cfs_rq->idle_h_nr_running -= idle_h_nr_running;
>                  cfs_rq->h_nr_delayed -= h_nr_delayed;
> +                SCHED_WARN_ON(cfs_rq->h_nr_delayed > cfs_rq->h_nr_running);
>
>                  if (cfs_rq_is_idle(cfs_rq))
>                          idle_h_nr_running = h_nr_running;
> @@ -7185,6 +7186,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
>                  cfs_rq->idle_h_nr_running -= idle_h_nr_running;
>                  cfs_rq->h_nr_delayed -= h_nr_delayed;
>
> +                SCHED_WARN_ON(cfs_rq->h_nr_delayed > cfs_rq->h_nr_running);
> +
>                  if (cfs_rq_is_idle(cfs_rq))
>                          idle_h_nr_running = h_nr_running;
>
> --
>
> I can again consistently hit the warning without the fix on 76f2f783294d
> ("sched/eevdf: More PELT vs DELAYED_DEQUEUE")
I updated my warning conditions to check:
    h_nr_queued != h_nr_delayed + h_nr_runnable
    h_nr_delayed < 0 or > h_nr_queued
    h_nr_runnable < 0 or > h_nr_queued
In addition to h_nr_runnable < 0, h_nr_delayed > h_nr_queued has also
triggered a warning, so I confirm that the fix is needed on 76f2f783294d
("sched/eevdf: More PELT vs DELAYED_DEQUEUE").
No warnings are triggered with your fix.
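
For reference, the checks were roughly of this shape (a sketch; added
after the h_nr_* updates in dequeue_entities(), with h_nr_delayed
temporarily re-added on top of tip for the test):

    SCHED_WARN_ON(cfs_rq->h_nr_queued !=
                  cfs_rq->h_nr_runnable + cfs_rq->h_nr_delayed);
    SCHED_WARN_ON((int)cfs_rq->h_nr_delayed < 0 ||
                  cfs_rq->h_nr_delayed > cfs_rq->h_nr_queued);
    SCHED_WARN_ON((int)cfs_rq->h_nr_runnable < 0 ||
                  cfs_rq->h_nr_runnable > cfs_rq->h_nr_queued);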
>
> I think that the original "Fixes:" tag is indeed right.
>
> >>
> >>>
> >>> I'm going to have a closer look
> >
> > Your fix looks good to me. I also ran some tests after re-adding
> > h_nr_delayed and checking that h_nr_queued = h_nr_runnable +
> > h_nr_delayed after each update, and I didn't get any warning with your
> > patch, whereas I got one during boot without it (but none after that
> > during my tests).
>
> Could it be the case that h_nr_delayed counts a tiny bit higher than
> the actual number and h_nr_runnable counts a tiny bit lower by the
> same amount, so the two errors cancel out and still give the correct
> h_nr_queued?
>
> >
> > Thanks for catching this
> >
> > Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
>
> Thank you for reviewing the patch!
>
> >
> >>
> >> Thank you!
> >>
> >>>
> >>>
> >>>> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@....com>
> >>>> Tested-by: Swapnil Sapkal <swapnil.sapkal@....com>
> >>>> Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
> >>>> ---
> >>>>
> >>>> [..snip..]
> >>>>
> >>
> >> --
> >> Thanks and Regards,
> >> Prateek
> >>
>
> --
> Thanks and Regards,
> Prateek
>