Message-ID: <CAKfTPtArC+A+=TtzK5igMiwCaq-K6_oTyQX4k6-oaJJz-91OAA@mail.gmail.com>
Date: Tue, 21 Mar 2023 12:12:07 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>, mingo@...hat.com,
juri.lelli@...hat.com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
linux-kernel@...r.kernel.org, zhangqiao22@...wei.com
Subject: Re: [PATCH v2] sched/fair: sanitize vruntime of entity being migrated
On Tue, 21 Mar 2023 at 11:50, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Mar 21, 2023 at 11:29:13AM +0100, Dietmar Eggemann wrote:
> > On 21/03/2023 11:02, Peter Zijlstra wrote:
> > > On Fri, Mar 17, 2023 at 05:08:10PM +0100, Vincent Guittot wrote:
> > >> Commit 829c1651e9c4 ("sched/fair: sanitize vruntime of entity being placed")
> > >> fixes an overflowing bug, but ignores the case where se->exec_start is
> > >> reset after a migration.
> > >>
> > >> To fix this case, delay the reset of se->exec_start until after placing
> > >> the entity, which uses se->exec_start to detect a long sleeping task.
> > >>
> > >> In order to take into account a possible divergence between the clock_task
> > >> of 2 rqs, we increase the threshold to around 104 days.
> > >>
> > >>
> > >> Fixes: 829c1651e9c4 ("sched/fair: sanitize vruntime of entity being placed")
> > >> Signed-off-by: Zhang Qiao <zhangqiao22@...wei.com>
> > >> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > >> ---
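(Side note, illustrative only: the "around 104 days" figure is consistent
with a sleep-time threshold on the order of 2^63 ns scaled down by 1024,
i.e. 2^53 ns, though the exact constant is whatever the patch defines:

    2^53 ns = 9007199254740992 ns
           ~= 9.0e6 s
           ~= 104.2 days)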
> > >
> > > Blergh, this just isn't going to be nice. I'll go queue this for
> > > sched/urgent and then we can forget about this for a little while.
> > >
> > > Thanks!
> >
> > Don't we miss setting `se->exec_start = 0` for a fair task in
> > move_queued_task()? ( ... and __migrate_swap_task())
> >
> > https://lkml.kernel.org/r/df2cccda-1550-b06b-aa74-e0f054e9fb9d@arm.com
>
> Ah, I see what you mean now... When I read your and Vincent's replies
> earlier today I thought you meant to avoid the extra ENQUEUE_MIGRATED use,
> but your actual goal was to capture more sites.
>
> Hmm, we could of course go add more ENQUEUE_MIGRATED, but you're right
> in that TASK_ON_RQ_MIGRATING already captures that.
>
> An alternative is something like the below, that matches
> deactivate_task(), but still uses ENQUEUE_MIGRATED to pass it down into
> the class methods.
>
> Hmm?
Yes, this seems to be the right way to set the ENQUEUE_MIGRATED flag
>
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2084,6 +2084,9 @@ static inline void dequeue_task(struct r
>
> void activate_task(struct rq *rq, struct task_struct *p, int flags)
> {
> + if (task_on_rq_migrating(p))
> + flags |= ENQUEUE_MIGRATED;
> +
> enqueue_task(rq, p, flags);
>
> p->on_rq = TASK_ON_RQ_QUEUED;
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8726,7 +8726,7 @@ static void attach_task(struct rq *rq, s
> lockdep_assert_rq_held(rq);
>
> WARN_ON_ONCE(task_rq(p) != rq);
> - activate_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_MIGRATED);
> + activate_task(rq, p, ENQUEUE_NOCLOCK);
> check_preempt_curr(rq, p, 0);
> }
>
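(For context on "matches deactivate_task()" and on TASK_ON_RQ_MIGRATING
already capturing the migration paths: a rough sketch of the relevant
helpers as they look in kernels of this era, paraphrased and possibly not
verbatim:

    /* kernel/sched/sched.h, approximate */
    static inline int task_on_rq_migrating(struct task_struct *p)
    {
            return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
    }

    /* kernel/sched/core.c, approximate */
    void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
    {
            /* a migrating task stays marked TASK_ON_RQ_MIGRATING ... */
            p->on_rq = (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING;

            dequeue_task(rq, p, flags);
    }

So deriving ENQUEUE_MIGRATED from task_on_rq_migrating() in activate_task()
mirrors how deactivate_task() marks the task in the first place, and covers
move_queued_task() and __migrate_swap_task() without adding ENQUEUE_MIGRATED
at each call site.)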