Date:   Thu, 9 Jun 2022 15:41:59 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Josh Don <joshdon@...gle.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: allow newidle balancing to bail out of load_balance

On Thu, 9 Jun 2022 at 04:55, Josh Don <joshdon@...gle.com> wrote:
>
> While doing newidle load balancing, it is possible for new tasks to
> arrive, such as with pending wakeups. newidle_balance() already accounts
> for this by exiting the sched_domain load_balance() iteration if it
> detects these cases. This is very important for minimizing wakeup
> latency.
>
> However, if we are already in load_balance(), we may stay there for a
> while before returning to newidle_balance(). This is most pronounced
> when we enter a 'goto redo' loop in the LBF_ALL_PINNED case. A very
> straightforward workaround is to adjust should_we_balance() to bail
> out if we're doing a CPU_NEWLY_IDLE balance and new tasks are
> detected.

This check sits close to the other early tests, and I wonder whether it
would be better placed later in load_balance(), just before taking the
busiest rq lock and detaching some tasks.

Aside from your use case, where none of the other threads can move to
the local CPU and load_balance() loops while clearing other CPUs from
the mask, most of the time is probably spent in find_busiest_group()
and find_busiest_queue(). So there is a good chance that a task has
woken up in the meantime, and it becomes pointless to take the busiest
rq lock and pull tasks from another CPU once the local CPU is no
longer newly idle.

Have you tried other placements in load_balance(), and does this one
provide the lowest wakeup latency?

That being said, the current patch makes sense.

>
> This was tested with the following reproduction:
> - two threads that take turns sleeping and waking each other up are
>   affined to two cores
> - a large number of threads with 100% utilization are pinned to all
>   other cores
>
> Without this patch, wakeup latency was ~120us for the pair of threads,
> almost entirely spent in load_balance(). With this patch, wakeup latency
> is ~6us.
>
> Signed-off-by: Josh Don <joshdon@...gle.com>
> ---
>  kernel/sched/fair.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8c5b74f66bd3..5abf30117824 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9834,9 +9834,15 @@ static int should_we_balance(struct lb_env *env)
>         /*
>          * In the newly idle case, we will allow all the CPUs
>          * to do the newly idle load balance.
> +        *
> +        * However, we bail out if we already have tasks or a wakeup pending,
> +        * to optimize wakeup latency.
>          */
> -       if (env->idle == CPU_NEWLY_IDLE)
> +       if (env->idle == CPU_NEWLY_IDLE) {
> +               if (env->dst_rq->nr_running > 0 || env->dst_rq->ttwu_pending)
> +                       return 0;
>                 return 1;
> +       }
>
>         /* Try to find first idle CPU */
>         for_each_cpu_and(cpu, group_balance_mask(sg), env->cpus) {
> --
> 2.36.1.476.g0c4daa206d-goog
>
