Date:	Thu, 16 Jun 2016 14:03:55 +0200
From:	Paolo Abeni <pabeni@...hat.com>
To:	Eric Dumazet <edumazet@...gle.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"David S. Miller" <davem@...emloft.net>,
	Steven Rostedt <rostedt@...dmis.org>,
	"Peter Zijlstra (Intel)" <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH 4/5] netdev: implement infrastructure for threadable
 napi irq

On Thu, 2016-06-16 at 04:19 -0700, Eric Dumazet wrote:
> On Thu, Jun 16, 2016 at 3:39 AM, Paolo Abeni <pabeni@...hat.com> wrote:
> > We used a different setup to explicitly avoid the (guest) userspace
> > starvation issue. Using a guest with 2 vCPUs (or more) and a single queue
> > avoids the starvation issue, because the scheduler moves the user space
> > processes to a different vCPU with respect to the ksoftirqd thread.
> >
> > In the hypervisor, with a vanilla kernel, the qemu process receives a
> > fair share of the cpu time, but considerably less than 100%, and its
> > performance is bound to a throughput considerably lower than the
> > theoretical one.
> >
> 
> Completely different setup than last time. I am kind of lost.
> 
> Are you trying to find the optimal way to demonstrate your patch can be useful?
> 
> In a case with 2 vcpus, the _standard_ kernel will migrate the
> user thread to the cpu not used by the IRQ,
> once the process scheduler sees two threads competing on one cpu
> (ksoftirqd and the user thread) while the other cpu is idle.
> 
> Trying to shift the IRQ 'thread' is not nice, since the hardware IRQ
> will be delivered on the wrong cpu.
> 
> Unless user space forces cpu pinning? Then tell the user it should not.
> 
> The natural choice is to put both producer and consumer on the same
> cpu for cache locality reasons (wake affine),
> but in stress mode allow the consumer to run on another cpu if available.
> 
> If the process scheduler fails to migrate the producer, then there is
> a bug needing to be fixed.
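
For reference, the explicit pinning mentioned above ("user space
forces cpu pinning") is just a sched_setaffinity() call from the
application; a minimal sketch, with the target cpu picked arbitrarily
for illustration:

#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling thread to a single cpu (cpu 0 here, chosen
 * arbitrarily). Once pinned, the scheduler can no longer migrate the
 * consumer away from the cpu where ksoftirqd is running. */
static int pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(0, sizeof(set), &set);
}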

I guess you mean 'consumer' here. The scheduler doesn't fail to migrate
it: the consumer is actually migrated many times, but on each cpu it
finds a competing, running ksoftirqd thread.
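
A minimal sketch of how those migrations can be observed from the
consumer side, assuming a plain recv() loop (rx_loop() and its
arguments are hypothetical):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/socket.h>

/* Illustrative receive loop: count how often the scheduler moves this
 * thread to a different cpu while it competes with ksoftirqd. */
static void rx_loop(int fd, char *buf, size_t len)
{
	int cpu, last = -1;
	unsigned long migrations = 0;

	while (recv(fd, buf, len, 0) > 0) {
		cpu = sched_getcpu();
		if (last >= 0 && cpu != last)
			migrations++;
		last = cpu;
	}
	printf("migrations: %lu\n", migrations);
}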

The general problem is that under significant network load (not
necessarily a udp flood; similar behavior is observed even with TCP_RR
tests), with enough rx queues available and enough flows running, no
single thread/process can use 100% of any cpu, even if the overall
capacity would allow it.
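
The sub-100% share can be checked directly by bracketing the test run
with getrusage(RUSAGE_THREAD) and comparing thread cpu time against
wall-clock time; a sketch (cpu_share() is a hypothetical helper):

#define _GNU_SOURCE
#include <sys/resource.h>

/* Fraction of wall-clock time this thread actually spent on a cpu;
 * under the load described above it stays well below 1.0 even though
 * a full cpu is nominally available. */
static double cpu_share(const struct rusage *start,
			const struct rusage *end, double wall_secs)
{
	double used;

	used  = (end->ru_utime.tv_sec - start->ru_utime.tv_sec) +
		(end->ru_utime.tv_usec - start->ru_utime.tv_usec) / 1e6;
	used += (end->ru_stime.tv_sec - start->ru_stime.tv_sec) +
		(end->ru_stime.tv_usec - start->ru_stime.tv_usec) / 1e6;
	return used / wall_secs;
}

/* usage: getrusage(RUSAGE_THREAD, &start); run the test;
 * getrusage(RUSAGE_THREAD, &end); cpu_share(&start, &end, secs); */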

Paolo
