Date:	Tue, 10 May 2016 07:29:50 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Paolo Abeni <pabeni@...hat.com>
Cc:	netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jiri Pirko <jiri@...lanox.com>,
	Daniel Borkmann <daniel@...earbox.net>,
	Alexei Starovoitov <ast@...mgrid.com>,
	Alexander Duyck <aduyck@...antis.com>,
	Tom Herbert <tom@...bertland.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>, Rik van Riel <riel@...hat.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] net: threadable napi poll loop

On Tue, 2016-05-10 at 16:11 +0200, Paolo Abeni wrote:
> Currently, the softirq loop can be scheduled both inside the ksoftirqd kernel
> thread and inside any running process. This makes it nearly impossible for the
> process scheduler to fairly balance the amount of time that a given core
> spends performing the softirq loop.
> 
> Under high network load, the softirq loop can take nearly 100% of a given CPU,
> leaving very little time for user space processing. On single-core hosts, this
> means that user space can nearly starve; for example, super_netperf
> UDP_STREAM tests towards a remote single-core vCPU guest[1] can measure an
> aggregated throughput of only a few thousand pps, and the same behavior can be
> reproduced even on bare metal by simulating a single core with taskset and/or
> sysfs configuration.
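
[For reference: the taskset pinning mentioned above boils down to a
sched_setaffinity() call. A minimal, hypothetical sketch of confining a
benchmark process to CPU 0 (the wrapper and the exec'd command are
illustrative placeholders, not part of the patch set) could look like:

/* pin: restrict this process to CPU 0, then exec the benchmark, so that
 * user-space processing and softirq work have to share a single core */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);                       /* CPU 0 only */
        if (sched_setaffinity(0, sizeof(set), &set) < 0) {
                perror("sched_setaffinity");
                return 1;
        }

        if (argc > 1) {
                execvp(argv[1], &argv[1]);      /* e.g. ./pin netperf ... */
                perror("execvp");
                return 1;
        }
        return 0;
}

Running the workload through such a wrapper, or simply via
"taskset -c 0 <command>", reproduces the single-core contention being
described.]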

I hate these patches and ideas, guys, sorry. That is my reaction before
breakfast, but still...

I have a hard enough time dealing with loads where ksoftirqd has to
compete with user threads whose owners thought that playing with
priorities was a nice idea.

Guess what, when they lose networking they complain.

We already have ksoftirqd, which normally copes with the case you are
describing.

If it is not working as intended, please identify the bugs and fix them,
instead of adding yet another test in the fast path and extra complexity
in the stack.

In the one-vCPU case, allowing the user thread to consume more UDP
packets from the target UDP socket will also make your NIC drop more
packets, and those are not necessarily packets for the same socket.
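
[To make the scenario concrete: the "user thread consuming UDP packets"
here is essentially a single-threaded receiver draining one socket, along
the lines of the hypothetical sketch below (this is not the actual
netperf code, and the port number is arbitrary). Whatever this thread
cannot drain in time overflows that socket's receive buffer, while
packets the NIC cannot deliver at all are dropped earlier, regardless of
which socket they were destined for.

/* hypothetical single-threaded UDP consumer, standing in for the
 * netperf UDP_STREAM receiver discussed in this thread */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in addr;
        char buf[2048];
        int fd;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9999);            /* arbitrary test port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("bind");
                return 1;
        }

        for (;;)                                /* drain as fast as possible */
                if (recv(fd, buf, sizeof(buf), 0) < 0) {
                        perror("recv");
                        break;
                }

        close(fd);
        return 0;
}]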

So you are shifting the attack to a different target,
at the expense of more kernel bloat.
