Message-Id: <20180117.153049.1803664333084879932.davem@davemloft.net>
Date:   Wed, 17 Jan 2018 15:30:49 -0500 (EST)
From:   David Miller <davem@...emloft.net>
To:     efault@....de
Cc:     torvalds@...ux-foundation.org, peterz@...radead.org,
        edumazet@...gle.com, dima@...sta.com, frederic@...nel.org,
        linux-kernel@...r.kernel.org, 0x7f454c46@...il.com,
        akpm@...ux-foundation.org, fweisbec@...il.com,
        hannes@...essinduktion.org, mingo@...nel.org,
        alexander.levin@...izon.com, pabeni@...hat.com,
        paulmck@...ux.vnet.ibm.com, rrendec@...sta.com, riel@...hat.com,
        sgruszka@...hat.com, tglx@...utronix.de, wanpeng.li@...mail.com
Subject: Re: [RFC 1/2] softirq: Defer net rx/tx processing to ksoftirqd context

From: Mike Galbraith <efault@....de>
Date: Fri, 12 Jan 2018 19:44:30 +0100

> On Fri, 2018-01-12 at 19:15 +0100, Mike Galbraith wrote:
>> On Fri, 2018-01-12 at 09:51 -0800, Linus Torvalds wrote:
>> > On Fri, Jan 12, 2018 at 9:44 AM, Mike Galbraith <efault@....de> wrote:
>> > >
>> > > Nah, a misunderstanding happened.  RT that still offers full threading
>> > > creates per-softirq threads per cpu.  The regular trees split ksoftirqd
>> > > into only two threads per cpu, one processes timer/hrtimer softirqs,
>> > > the other processes the rest.
>> > 
>> > Ok, that sounds like it should work, but it also sounds like it's very
>> > specific to RT itself.
>> > 
>> > For example, the dvb issue was not about the timer softirqs, but about
>> > the tasklet ones.
>> > 
>> > So maybe we wouldn't need to split it for _every_ softirq, but we'd
>> > need to split it more than just along the timer case.
>> > 
>> > And it does sound a bit excessive to have ten fixed threads for every
>> > CPU. The days when tens of CPUs meant "huge system" are gone. These
>> > days it can be a phone.
>> 
>> Yeah, it is excessive more often than not.  You get to prioritize and
>> segregate, which is nice, but you pay for it.
> 
> BTW, much of the softirq load in RT is processed by the raising task.
> 
>       tbench_srv-6985  [000] d...112   293.902511: softirq_raise: vec=3 [action=NET_RX]
>       tbench_srv-6985  [000] .....13   293.902511: softirq_entry: vec=3 [action=NET_RX]
>       tbench_srv-6985  [000] .....13   293.902515: softirq_exit: vec=3 [action=NET_RX]
>           tbench-6984  [003] d...112   293.902520: softirq_raise: vec=3 [action=NET_RX]
>           tbench-6984  [003] .....13   293.902520: softirq_entry: vec=3 [action=NET_RX]
>           tbench-6984  [003] .....13   293.902523: softirq_exit: vec=3 [action=NET_RX]

And this is because tbench runs over loopback, which triggers softirqs in
the context of whatever generates the loopback packet transmit, which in
this case is the tbench process calling sendmsg().
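For reference, here is a simplified sketch of the loopback driver's
transmit routine (trimmed for illustration; stats accounting and
timestamping omitted, so not verbatim drivers/net/loopback.c).  It hands
the skb straight back into the receive path, so NET_RX is raised while
the sending task is still on the CPU:

	static netdev_tx_t loopback_xmit(struct sk_buff *skb,
					 struct net_device *dev)
	{
		skb_orphan(skb);	/* detach skb from the sending socket */
		skb->protocol = eth_type_trans(skb, dev);

		/*
		 * netif_rx() queues the skb on the local backlog and
		 * raises NET_RX_SOFTIRQ, so the softirq becomes pending
		 * in the sendmsg() caller's context.
		 */
		netif_rx(skb);
		return NETDEV_TX_OK;
	}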

I wanted to chime in about this earlier and make it clear that it isn't
just IRQs that can trigger softirqs: user context actions in the kernel
can trigger them too.
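
Concretely, any kernel path that raises a softirq while bottom halves
are disabled will run it itself when it re-enables them.  Roughly what
kernel/softirq.c does (a sketch of the relevant part, with the
preemption and lockdep bookkeeping elided):

	void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
	{
		/* ... drop the BH-disable count ... */

		/*
		 * If the last BH-disable section ends outside hard IRQ
		 * context and softirqs are pending, they are processed
		 * right here, in the current task.
		 */
		if (unlikely(!in_interrupt() && local_softirq_pending()))
			do_softirq();

		/* ... */
	}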
