Message-ID: <1466008955.24431.35.camel@redhat.com>
Date: Wed, 15 Jun 2016 18:42:35 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"David S. Miller" <davem@...emloft.net>,
Steven Rostedt <rostedt@...dmis.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH 4/5] netdev: implement infrastructure for threadable
napi irq
On Wed, 2016-06-15 at 07:17 -0700, Eric Dumazet wrote:
> On Wed, Jun 15, 2016 at 6:42 AM, Paolo Abeni <pabeni@...hat.com> wrote:
> > This commit adds the infrastructure needed for threadable
> > rx interrupt. A reference to the irq thread is used to
> > mark the threaded irq mode.
> > In threaded mode the poll loop is invoked directly from
> > __napi_schedule().
> > napi drivers which want to support threadable irqs
> > must provide an irq mode change handler which actually sets
> > napi->thread and registers it after requesting the irq.
> >
> > Signed-off-by: Paolo Abeni <pabeni@...hat.com>
> > Signed-off-by: Hannes Frederic Sowa <hannes@...essinduktion.org>
> > ---
> > include/linux/netdevice.h | 4 ++++
> > net/core/dev.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 63 insertions(+)
> >
>
> I really appreciate the effort, but as I already said this is not going to work.
>
> Many NICs have 2 NAPI contexts per queue: one for TX, one for RX.
>
> Relying on CFS to switch between the two 'threads' you need in the
> one-vCPU case will add latencies that your 'pure throughput UDP flood'
> is not able to detect.
We have run TCP_RR tests with similar results: when the throughput is
(guest) CPU bound and multiple flows are used, there is a measurable
gain.
> I was waiting for a fix from Andy Lutomirski to be merged before
> sending my ksoftirqd fix, which will work and won't bring kernel bloat.
We experimented with that patch in this scenario, but it doesn't give a
measurable gain, since the ksoftirqd threads still prevent the qemu
process from using 100% of any of the hypervisor's cores.
Paolo