Date:   Thu, 18 Oct 2018 11:48:50 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Juri Lelli <juri.lelli@...hat.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Juri Lelli <juri.lelli@...il.com>,
        syzbot <syzbot+385468161961cee80c31@...kaller.appspotmail.com>,
        Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        LKML <linux-kernel@...r.kernel.org>, mingo@...hat.com,
        nstange@...e.de, syzkaller-bugs@...glegroups.com,
        Luca Abeni <luca.abeni@...tannapisa.it>, henrik@...tad.us,
        Tommaso Cucinotta <tommaso.cucinotta@...tannapisa.it>,
        Claudio Scordino <claudio@...dence.eu.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: Re: INFO: rcu detected stall in do_idle

On Thu, Oct 18, 2018 at 10:28:38AM +0200, Juri Lelli wrote:

> Another side problem seems also to be that with such tiny parameters we
> spend a lot of time in the while (dl_se->runtime <= 0) loop of
> replenish_dl_entity() (actually uselessly, as the deadline is most
> probably going to still be in the past when runtime eventually becomes
> positive again), as delta_exec is huge w.r.t. runtime and runtime has to
> keep up with tiny increments of dl_runtime. I guess we could ameliorate
> things here by limiting the number of times we execute the loop before
> bailing out.

That's the "DL replenish lagged too much" case, right? Yeah, there is
only so much we can recover from.

Funny that GCC actually emits that loop; sometimes we've had to fight
GCC not to turn that into a division.

But yes, I suppose we can put a limit on how many periods we can lag
before just giving up.
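
Something along these lines, perhaps (sketch only, not even compile
tested; the cap is arbitrary, and once we blow through it we simply rely
on the existing "DL replenish lagged too much" reset that follows the
loop):

#define DL_REPLENISH_MAX_LAG	10	/* made-up cap on catch-up periods */

	int lag = 0;

	/*
	 * Bail out of the catch-up after DL_REPLENISH_MAX_LAG periods; if
	 * runtime is still <= 0 the deadline is hopelessly in the past and
	 * the "lagged too much" path right after the loop resets
	 * deadline/runtime from rq_clock(rq) anyway.
	 */
	while (dl_se->runtime <= 0 && ++lag <= DL_REPLENISH_MAX_LAG) {
		dl_se->deadline += pi_se->dl_period;
		dl_se->runtime += pi_se->dl_runtime;
	}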

> So, I tend to think that we might want to play it safe and put some
> higher minimum value for dl_runtime (it's currently at 1ULL << DL_SCALE).
> Guess the problem is to pick a reasonable value, though. Maybe link it
> somehow to HZ? Then we might add a sysctl (or similar) knob with which
> knowledgeable users can do whatever they think their platform/config can
> support?

Yes, an HZ-related limit sounds like something we'd want. But if we're
going to do a minimum sysctl, we should also consider adding a maximum:
if you set a massive period/deadline, you can, even with a relatively
low u, incur significant delays.
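
If we do grow knobs for that, maybe something along these lines (again
just a sketch; the names, types and defaults are made up, and
__checkparam_dl() would still have to consume the values):

/* Hypothetical sysctl bounds on accepted dl_runtime, in ns. */
static unsigned int sysctl_sched_dl_runtime_min = 1 << 10;	/* 1ULL << DL_SCALE today */
static unsigned int sysctl_sched_dl_runtime_max = 100000000;	/* 100ms, arbitrary */

static struct ctl_table sched_dl_sysctls[] = {
	{
		.procname	= "sched_deadline_runtime_min_ns",
		.data		= &sysctl_sched_dl_runtime_min,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{
		.procname	= "sched_deadline_runtime_max_ns",
		.data		= &sysctl_sched_dl_runtime_max,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{ }
};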

And do we want to put the limit on runtime or on period?

That is, something like:

  TICK_NSEC/2 < period < 10*TICK_NSEC

and/or

  TICK_NSEC/2 < runtime < 10*TICK_NSEC

Hmm, for HZ=1000 that ends up with a max period of 10ms, which is far too
low; 24Hz needs ~41ms. We can of course also limit the runtime by capping
u for users (as we should anyway).
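
FWIW, in __checkparam_dl() terms that would look something like the
below (sketch only; the exact bounds are the contentious part, and as
noted the 10*TICK_NSEC maximum is too tight as written):

	u64 period = attr->sched_period ?: attr->sched_deadline;

	/*
	 * Reject parameters the tick cannot reasonably service. The
	 * TICK_NSEC/2 .. 10*TICK_NSEC bounds are placeholders from the
	 * discussion above; 10*TICK_NSEC is too small a maximum for
	 * HZ=1000 (24Hz needs ~41ms).
	 */
	if (period < TICK_NSEC / 2 || period > 10 * TICK_NSEC)
		return false;

	if (attr->sched_runtime < TICK_NSEC / 2 ||
	    attr->sched_runtime > 10 * TICK_NSEC)
		return false;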


