Message-ID: <1462960104.4444.37.camel@redhat.com>
Date:	Wed, 11 May 2016 11:48:24 +0200
From:	Paolo Abeni <pabeni@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Eric Dumazet <edumazet@...gle.com>,
	netdev <netdev@...r.kernel.org>,
	"David S. Miller" <davem@...emloft.net>,
	Jiri Pirko <jiri@...lanox.com>,
	Daniel Borkmann <daniel@...earbox.net>,
	Alexei Starovoitov <ast@...mgrid.com>,
	Alexander Duyck <aduyck@...antis.com>,
	Tom Herbert <tom@...bertland.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>, Rik van Riel <riel@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 0/2] net: threadable napi poll loop

Hi Eric,
On Tue, 2016-05-10 at 15:51 -0700, Eric Dumazet wrote:
> On Wed, 2016-05-11 at 00:32 +0200, Hannes Frederic Sowa wrote:
> 
> > Not only did we want to present this solely as a bugfix but also as
> > performance enhancements in case of virtio (as you can see in the cover
> > letter). Given that a long time ago there was a tendency to remove
> > softirqs completely, we thought it might be very interesting, that a
> > threaded napi in general seems to be absolutely viable nowadays and
> > might offer new features.
> 
> Well, you did not fix the bug, you worked around it by adding yet
> another layer, with another sysctl that admins or programs have to manage.
> 
> If you have a special need for virtio, do not hide it behind a 'bug fix'
> but add it as a feature request.
> 
> This ksoftirqd issue is real and a fix looks very reasonable.
> 
> Please try this patch, as I had very good success with it.

Thank you for your time and your effort.

I tested your patch in the bare metal "single core" scenario, disabling
the unneeded cores with:

CPUS=`nproc`
for I in `seq 1 $((CPUS - 1))`; do echo 0 > /sys/devices/system/node/node0/cpu$I/online; done

(CPU numbering starts at 0, so the loop must stop at $CPUS - 1; CPU 0
stays online.)
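To bring the cores back online afterwards, a minimal sketch (an assumption on my side: it uses the canonical per-CPU attribute under /sys/devices/system/cpu/ rather than the NUMA-node path above, and `nproc --all` so the offlined CPUs are still counted):

```shell
# Sketch: re-enable every CPU except CPU 0, which usually cannot be
# offlined/onlined. Writing to the sysfs attribute requires root.
CPUS=`nproc --all`
for I in `seq 1 $((CPUS - 1))`; do
    echo 1 > /sys/devices/system/cpu/cpu$I/online
done
```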

And I got a:

[   86.925249] Broke affinity for irq <num>

for each IRQ used by the network devices.
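If those IRQs must keep firing on the remaining core, their affinity can be re-pinned by hand; a minimal sketch, assuming the stock /proc/irq/<n>/smp_affinity interface and recovering the affected IRQ numbers from the dmesg lines quoted above:

```shell
# Sketch: re-pin each IRQ reported as "Broke affinity" back onto
# CPU 0 (affinity mask 0x1). Requires root.
for irq in $(dmesg | sed -n 's/.*Broke affinity for irq \([0-9]\+\).*/\1/p'); do
    echo 1 > /proc/irq/$irq/smp_affinity
done
```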

In this scenario, your patch solves the ksoftirqd issue, performing
comparably to the threaded-napi patches (a negative delta within the
noise range), and introduces a minor single-flow regression, also within
the noise range (3%).

As said in a previous mail, we actually experimented with something
similar, but it felt quite hackish.

AFAICS this patch adds three more tests in the fast path and affects all
other softirq use cases. I'm not sure how to check for regressions there.

The napi thread patches are actually a new feature that also fixes the
ksoftirqd issue: hunting that issue was the initial trigger for this
work. I'm sorry for not being clearer in the cover letter.

The napi thread patches offer additional benefits, i.e. a further
relevant gain in the described test scenario, and they do not impact
other subsystems/kernel entities.

I still think they are worthwhile; I suspect you will disagree, but
could you please articulate which parts concern you most and/or seem
most bloated?

Thank you,

Paolo
