Message-ID: <Y2TGozI0YZQ7BCxc@chenyu5-mobl1>
Date: Fri, 4 Nov 2022 16:00:35 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Tianchen Ding <dtcccc@...ux.alibaba.com>
CC: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
"Mel Gorman" <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] sched: Clear ttwu_pending after enqueue_task
On 2022-11-04 at 10:36:01 +0800, Tianchen Ding wrote:
> We found a long-tail latency in schbench when m*t is close to nr_cpus.
> (e.g., "schbench -m 2 -t 16" on a machine with 32 cpus.)
>
> This is because when the wakee cpu is idle, rq->ttwu_pending is cleared
> too early, so idle_cpu() returns true until the wakee task is actually
> enqueued. This misleads the waker when it selects an idle cpu, and it may
> wake multiple worker threads on the same wakee cpu. The problem is
> amplified by commit f3dd3f674555 ("sched: Remove the limitation of
> WF_ON_CPU on wakelist if wakee cpu is idle") because that commit makes
> the wakelist path more likely to be used.
>
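A note for readers following the race: the connection to idle CPU selection is
that idle_cpu() consults rq->ttwu_pending. A rough sketch of the check,
paraphrased from a recent kernel/sched/core.c (details may differ between
kernel versions):

	int idle_cpu(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		if (rq->curr != rq->idle)
			return 0;

		if (rq->nr_running)
			return 0;

	#ifdef CONFIG_SMP
		/* remote wakeups queued but not yet enqueued */
		if (rq->ttwu_pending)
			return 0;
	#endif

		return 1;
	}

With the early clear, all three checks can pass in the window between
WRITE_ONCE(rq->ttwu_pending, 0) and the enqueue loop, so the waker keeps
picking this cpu.
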
> Here is the result of "schbench -m 2 -t 16" on a VM with 32 vcpus
> (Intel(R) Xeon(R) Platinum 8369B).
>
> Latency percentiles (usec):
> base base+revert_f3dd3f674555 base+this_patch
> 50.0000th: 9 13 9
> 75.0000th: 12 19 12
> 90.0000th: 15 22 15
> 95.0000th: 18 24 17
> *99.0000th: 27 31 24
> 99.5000th: 3364 33 27
> 99.9000th: 12560 36 30
>
> We also tested on unixbench and hackbench, and saw no performance
> change.
>
> Signed-off-by: Tianchen Ding <dtcccc@...ux.alibaba.com>
> ---
> v2:
> Update commit log about other benchmarks.
> Add comment in code.
> Move the code before rq_unlock. This clears ttwu_pending a bit earlier
> than v1 did, so it may reflect the real condition more promptly.
>
> v1: https://lore.kernel.org/all/20221101073630.2797-1-dtcccc@linux.alibaba.com/
> ---
> kernel/sched/core.c | 18 +++++++++++-------
> 1 file changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 87c9cdf37a26..7a04b5565389 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3739,13 +3739,6 @@ void sched_ttwu_pending(void *arg)
> if (!llist)
> return;
>
> - /*
> - * rq::ttwu_pending racy indication of out-standing wakeups.
> - * Races such that false-negatives are possible, since they
> - * are shorter lived that false-positives would be.
> - */
> - WRITE_ONCE(rq->ttwu_pending, 0);
> -
> rq_lock_irqsave(rq, &rf);
> update_rq_clock(rq);
>
> @@ -3759,6 +3752,17 @@ void sched_ttwu_pending(void *arg)
> ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0, &rf);
> }
>
> + /*
> + * Must be after enqueueing at least one task such that
> + * idle_cpu() does not observe a false-negative -- if it does,
> + * it is possible for select_idle_siblings() to stack a number
> + * of tasks on this CPU during that window.
> + *
> + * It is ok to clear ttwu_pending when another task is pending.
> + * We will receive an IPI after the local irq is enabled and then enqueue it.
> + * Since nr_running > 0 by then, idle_cpu() will always get the correct result.
> + */
> + WRITE_ONCE(rq->ttwu_pending, 0);
> rq_unlock_irqrestore(rq, &rf);
> }
>
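For context on the other side of the race: the waker sets this flag when it
queues a remote wakeup onto the wakelist, roughly as below (paraphrased from
kernel/sched/core.c, exact code may differ between kernel versions):

	static void __ttwu_queue_wakelist(struct task_struct *p, int cpu,
					  int wake_flags)
	{
		struct rq *rq = cpu_rq(cpu);

		p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);

		/* mark outstanding remote wakeups before sending the IPI */
		WRITE_ONCE(rq->ttwu_pending, 1);
		__smp_call_single_queue(cpu, &p->wake_entry.llist);
	}

With the patch, ttwu_pending now stays set across the whole
set-flag -> IPI -> enqueue sequence, so idle_cpu() on the waker side no
longer reports a false idle in that window.
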
Reviewed-by: Chen Yu <yu.c.chen@...el.com>
thanks,
Chenyu