Message-ID: <CAKfTPtDktpTB7d6qhmcX0HtryezzFygk4kOC22Qf=OM77QpLYg@mail.gmail.com>
Date:   Thu, 30 Apr 2020 09:44:19 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Valentin Schneider <valentin.schneider@....com>
Cc:     Scott Wood <swood@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Rik van Riel <riel@...riel.com>,
        Mel Gorman <mgorman@...e.de>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        linux-rt-users <linux-rt-users@...r.kernel.org>
Subject: Re: [RFC PATCH 0/3] newidle_balance() latency mitigation

On Thu, 30 Apr 2020 at 01:13, Valentin Schneider
<valentin.schneider@....com> wrote:
>
>
> On 28/04/20 06:02, Scott Wood wrote:
> > These patches mitigate latency caused by newidle_balance() on large
> > systems, by enabling interrupts when the lock is dropped, and exiting
> > early at various points if an RT task is runnable on the current CPU.
> >
> > When applied to an RT kernel on a 72-core machine (2 threads per core), I
> > saw significant reductions in latency as reported by rteval -- from
> > over 500us to around 160us with hyperthreading disabled, and from
> > over 1400us to around 380us with hyperthreading enabled.
> >
> > This isn't the first time something like this has been tried:
> > https://lore.kernel.org/lkml/20121222003019.433916240@goodmis.org/
> > That attempt ended up being reverted:
> > https://lore.kernel.org/lkml/5122CD9C.9070702@oracle.com/
> >
> > The problem in that case was the failure to keep BH disabled, and the
> > difficulty of fixing that when called from the post_schedule() hook.
> > This patchset uses finish_task_switch() to call newidle_balance(), which is
> > entered in non-atomic context, so we have full control over what we disable
> > and when.
> >
> > There was a note at the end about wanting further discussion on the matter --
> > does anyone remember if that ever happened and what the conclusion was?
> > Are there any other issues with enabling interrupts here and/or moving
> > the newidle_balance() call?
> >
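
For illustration, the pattern Scott describes above boils down to something
like the sketch below: release the rq lock with interrupts enabled during the
domain scan, and bail out if an RT task becomes runnable on this CPU. This is
a minimal sketch only, loosely modelled on newidle_balance() in
kernel/sched/fair.c; the function name and the exact locking/early-exit
placement are assumptions, not the actual patch code.

/*
 * Illustrative only: a newidle-style domain scan that releases the rq
 * lock with IRQs enabled and stops as soon as an RT task is runnable
 * on this CPU.  Not the actual patch.
 */
static int newidle_scan_sketch(struct rq *this_rq)
{
	int this_cpu = cpu_of(this_rq);
	int continue_balancing = 1;
	struct sched_domain *sd;
	int pulled_task = 0;

	/* Drop the rq lock and re-enable interrupts while scanning. */
	raw_spin_unlock_irq(&this_rq->lock);

	rcu_read_lock();
	for_each_domain(this_cpu, sd) {
		/* Early exit: an RT task is waiting to run on this CPU. */
		if (this_rq->rt.rt_nr_running)
			break;

		if (sd->flags & SD_BALANCE_NEWIDLE)
			pulled_task = load_balance(this_cpu, this_rq, sd,
						   CPU_NEWLY_IDLE,
						   &continue_balancing);

		if (pulled_task || this_rq->nr_running > 0)
			break;
	}
	rcu_read_unlock();

	raw_spin_lock_irq(&this_rq->lock);
	return pulled_task;
}
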
>
> Random thought that just occurred to me: in the grand scheme of things,
> with something in the same spirit as task-stealing (i.e. don't bother with
> a full-fledged balance at newidle, just pick one spare task somewhere),
> none of this would be required.

The newly idle load balance already stops after picking one task.
Now, if your proposal is to pick one random task on one random CPU, I'm
really not sure that's a good idea.
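
For reference, a simplified sketch of that existing behaviour: the domain
walk in newidle_balance() (kernel/sched/fair.c) breaks out as soon as one
task has been pulled. This is abbreviated and the function name is invented;
the cost/should-we-balance checks and the rq locking are omitted.

/*
 * Simplified view of the existing newidle_balance() domain walk: it
 * already stops once a single task has been pulled onto this rq.
 */
static int newidle_existing_sketch(int this_cpu, struct rq *this_rq)
{
	int continue_balancing = 1;
	struct sched_domain *sd;
	int pulled_task = 0;

	rcu_read_lock();
	for_each_domain(this_cpu, sd) {
		if (sd->flags & SD_BALANCE_NEWIDLE)
			pulled_task = load_balance(this_cpu, this_rq, sd,
						   CPU_NEWLY_IDLE,
						   &continue_balancing);

		/* One pulled task (or any new runnable task) is enough. */
		if (pulled_task || this_rq->nr_running > 0)
			break;
	}
	rcu_read_unlock();

	return pulled_task;
}
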


>
> Sadly I don't think anyone has been looking at it recently.
