Message-ID: <20180118040916.GA25885@lerouge>
Date:   Thu, 18 Jan 2018 05:09:24 +0100
From:   Frederic Weisbecker <frederic@...nel.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Mauro Carvalho Chehab <mchehab@...pensource.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Levin Alexander <alexander.levin@...izon.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Hannes Frederic Sowa <hannes@...essinduktion.org>,
        "Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Dmitry Safonov <dima@...sta.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Eric Dumazet <edumazet@...gle.com>,
        Radu Rendec <rrendec@...sta.com>,
        Ingo Molnar <mingo@...nel.org>,
        Stanislaw Gruszka <sgruszka@...hat.com>,
        Paolo Abeni <pabeni@...hat.com>,
        Rik van Riel <riel@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Miller <davem@...emloft.net>
Subject: Re: [RFC PATCH 0/5] softirq: Per vector threading v2

On Wed, Jan 17, 2018 at 07:09:39PM -0800, Linus Torvalds wrote:
> On Wed, Jan 17, 2018 at 6:55 PM, Frederic Weisbecker
> <frederic@...nel.org> wrote:
> >> I thought you were going to actually account for time, but I don't
> >> think you ended up doing that.
> >
> > I did in the first version, but then I thought you had suggested counting
> > per jiffy instead. I probably misunderstood :)
> 
> Oh, miscommunication.
> 
> I tried to suggest to do things purely by time (from an accounting
> standpoint), but then to also have some "minimum time" for each
> invocation, so that there effectively ends up being an invocation
> limit too.

Ah ok I see.
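
So something like this, if I understand correctly? Just a rough sketch to
check I got the idea right, with made-up names (now_ns(), vector_pending(),
run_pending_vectors(), defer_to_ksoftirqd() are placeholders, not actual
patch code):

#include <stdbool.h>
#include <stdint.h>

#define SOFTIRQ_BUDGET_NS	(2 * 1000 * 1000)	/* total time budget, value made up */
#define SOFTIRQ_MIN_SLICE_NS	(100 * 1000)		/* minimum charged per invocation */

uint64_t now_ns(void);		/* placeholder for a clock read */
bool vector_pending(void);	/* placeholder: any softirq still pending? */
void run_pending_vectors(void);	/* placeholder: one pass over pending vectors */
void defer_to_ksoftirqd(void);	/* placeholder: punt remaining work to a thread */

void softirq_loop(void)
{
	uint64_t spent = 0;

	while (vector_pending()) {
		uint64_t start = now_ns();
		uint64_t delta;

		run_pending_vectors();

		delta = now_ns() - start;

		/*
		 * A coarse clock may report 0 here; charging at least a
		 * minimum slice per invocation turns the time budget into
		 * an implicit invocation limit as well.
		 */
		if (delta < SOFTIRQ_MIN_SLICE_NS)
			delta = SOFTIRQ_MIN_SLICE_NS;

		spent += delta;
		if (spent >= SOFTIRQ_BUDGET_NS) {
			defer_to_ksoftirqd();
			break;
		}
	}
}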

> 
> Honestly, that's mainly because I worry about just how good the
> time-based approach might be (i.e. some hardware doesn't have a good
> high-frequency clock to read, etc.).
> 
> On x86-64, the TSC would be fairly natural as a clock, but we support
> architectures without anything like that, so time-based definitely has
> some issues.

Yeah, indeed, there is that too: performance and reliability.

> 
> But thinking about it more, I do end up liking my suggested "just keep
> a bitmap of softirqs that have been handled" thing, and kick the
> softirq to a thread if it ever seems to get into that "we already saw
> this one".
> 
> It might just work very naturally, and it sure as hell is simple and
> has no subtle interactions with the granularity of whatever random
> clock the architecture or platform has.
> 
> It should never trigger under any normal load, but I think it *should*
> trigger under the load that the networking people worry about. If you
> get a flood of UDP packets, and spend a lot of time in softirqs, I'm
> pretty sure you'd hit that case of seeing the same softirq re-raised
> fairly naturally and quickly.

Ok, after a quick tracing check, it seems that executing the same softirq
vector twice in the same interrupt doesn't happen unless there's a lot of
stress.

I'm trying that solution (a rough sketch of what I have in mind is below),
and we'll see what testing on people's boxes reveals.
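
Roughly along these lines -- just a sketch of the idea with made-up helpers
(fetch_pending(), run_vector(), wake_softirq_thread() are placeholders), not
what the actual patch will look like:

#include <stdint.h>

#define NR_SOFTIRQS	10

uint32_t fetch_pending(void);		/* placeholder: read and clear the pending mask */
void run_vector(int nr);		/* placeholder: run one softirq vector inline */
void wake_softirq_thread(int nr);	/* placeholder: defer that vector to a thread */

void do_softirq_pass(void)
{
	uint32_t seen = 0;
	uint32_t pending;

	while ((pending = fetch_pending())) {
		uint32_t rerun = pending & seen;
		int nr;

		/*
		 * Vectors that already ran in this pass and got re-raised
		 * (e.g. a UDP flood) get kicked to a thread instead of
		 * being looped on in the interrupted context.
		 */
		for (nr = 0; nr < NR_SOFTIRQS; nr++)
			if (rerun & (1u << nr))
				wake_softirq_thread(nr);

		pending &= ~rerun;

		for (nr = 0; nr < NR_SOFTIRQS; nr++) {
			if (pending & (1u << nr)) {
				seen |= 1u << nr;
				run_vector(nr);
			}
		}
	}
}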

Thanks.
