Message-ID: <Y2EkXYqZ15/Kjl6H@chenyu5-mobl1>
Date: Tue, 1 Nov 2022 21:51:25 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Tianchen Ding <dtcccc@...ux.alibaba.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
"Mel Gorman" <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched: Clear ttwu_pending after enqueue_task
On 2022-11-01 at 11:34:04 +0100, Peter Zijlstra wrote:
> On Tue, Nov 01, 2022 at 03:36:30PM +0800, Tianchen Ding wrote:
> > We found a long tail latency in schbench when m*t is close to nr_cpus.
> > (e.g., "schbench -m 2 -t 16" on a machine with 32 cpus.)
> >
> > This is because when the wakee cpu is idle, rq->ttwu_pending is cleared
> > too early, and idle_cpu() will return true until the wakee task is
> > enqueued. This misleads the waker when it selects an idle cpu, and it
> > may wake multiple worker threads on the same wakee cpu. The situation is
> > exacerbated by commit f3dd3f674555 ("sched: Remove the limitation of
> > WF_ON_CPU on wakelist if wakee cpu is idle") because that commit makes
> > the wakelist path more likely to be taken.
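
[ For reference: idle_cpu() consults rq->ttwu_pending in addition to
  nr_running, so clearing the flag before the enqueue opens a window in
  which the cpu still looks idle to the waker. A rough sketch of the
  check, paraphrased from kernel/sched/core.c of this era, details
  elided:

	int idle_cpu(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		if (rq->curr != rq->idle)
			return 0;

		if (rq->nr_running)
			return 0;

		/* remote wakeups still in flight count as busy (SMP) */
		if (rq->ttwu_pending)
			return 0;

		return 1;
	}
]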
> >
> > Here is the result of "schbench -m 2 -t 16" on a VM with 32 vcpus
> > (Intel(R) Xeon(R) Platinum 8369B).
> >
> > Latency percentiles (usec):
> >                   base    base+revert_f3dd3f674555    base+this_patch
> > 50.0000th:           9                          13                  9
> > 75.0000th:          12                          19                 12
> > 90.0000th:          15                          22                 15
> > 95.0000th:          18                          24                 17
> > *99.0000th:         27                          31                 24
> > 99.5000th:        3364                          33                 27
> > 99.9000th:       12560                          36                 30
>
> Nice; but have you also run other benchmarks and confirmed it doesn't
> negatively affect those?
>
> If so; mentioning that is very helpful. If not; best go do so :-)
>
> > Signed-off-by: Tianchen Ding <dtcccc@...ux.alibaba.com>
> > ---
> > kernel/sched/core.c | 8 +-------
> > 1 file changed, 1 insertion(+), 7 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 87c9cdf37a26..b07de1753be5 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3739,13 +3739,6 @@ void sched_ttwu_pending(void *arg)
> >          if (!llist)
> >                  return;
> >  
> > -        /*
> > -         * rq::ttwu_pending racy indication of out-standing wakeups.
> > -         * Races such that false-negatives are possible, since they
> > -         * are shorter lived that false-positives would be.
> > -         */
> > -        WRITE_ONCE(rq->ttwu_pending, 0);
> > -
> >          rq_lock_irqsave(rq, &rf);
> >          update_rq_clock(rq);
> >  
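
[ To make the window concrete: with the clear done before the enqueue,
  a waker can observe nr_running == 0 and ttwu_pending == 0 at the same
  time even though a wakeup is still in flight. A toy userspace analogue
  (my own sketch, not kernel code; the variable names mirror the rq
  fields only for illustration):

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int nr_running;
	static atomic_int ttwu_pending;

	static int looks_idle(void)
	{
		return !atomic_load(&nr_running) && !atomic_load(&ttwu_pending);
	}

	int main(void)
	{
		atomic_store(&ttwu_pending, 1);	/* remote wakeup queued */

		/* wakee side, pre-patch ordering: clear the flag first... */
		atomic_store(&ttwu_pending, 0);

		/* ...so a waker probing in this window is misled: */
		printf("idle during window: %d\n", looks_idle());  /* 1 */

		atomic_fetch_add(&nr_running, 1);  /* enqueue happens later */
		printf("idle after enqueue: %d\n", looks_idle());  /* 0 */
		return 0;
	}
]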
>
> Could you try the below instead? Also note the comment; since you did
> the work to figure out why -- best record that for posterity.
>
> @@ -3737,6 +3730,13 @@ void sched_ttwu_pending(void *arg)
>                          set_task_cpu(p, cpu_of(rq));
>  
>                  ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0, &rf);
> +                /*
> +                 * Must be after enqueueing at least one task such that
> +                 * idle_cpu() does not observe a false-negative -- if it does,
> +                 * it is possible for select_idle_siblings() to stack a number
> +                 * of tasks on this CPU during that window.
> +                 */
> +                WRITE_ONCE(rq->ttwu_pending, 0);
Just curious: why do we put the above code inside the
llist_for_each_entry_safe() loop?
My understanding is that once one task is enqueued, select_idle_cpu()
will no longer treat this rq as idle, because nr_running is not 0. But
wouldn't writing rq->ttwu_pending multiple times bring extra overhead,
or am I missing something?
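
[ i.e., if the only consumer that matters here is idle_cpu(), would a
  single store after the loop do? A hypothetical rearrangement, not
  tested:

	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
		...
		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0, &rf);
	}
	/* one clear once every queued task has been enqueued */
	WRITE_ONCE(rq->ttwu_pending, 0);
]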
thanks,
Chenyu
>          }
>  
>          rq_unlock_irqrestore(rq, &rf);