Message-ID: <CANn89iKid-JWYs6esRYo25NqVdLkLvn6uwiB7wLz_PXuREQQKA@mail.gmail.com>
Date: Sat, 2 May 2020 08:40:58 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Julian Wiedmann <jwi@...ux.ibm.com>
Cc: "David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>, Luigi Rizzo <lrizzo@...gle.com>,
Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH net-next 1/3] net: napi: add hard irqs deferral feature
On Sat, May 2, 2020 at 7:56 AM Julian Wiedmann <jwi@...ux.ibm.com> wrote:
>
> On 22.04.20 18:13, Eric Dumazet wrote:
> > Back in commit 3b47d30396ba ("net: gro: add a per device gro flush timer")
> > we added the ability to arm one high resolution timer, which we used
> > to keep incomplete packets in the GRO engine a bit longer, hoping that
> > further frames might be merged into them.
> >
> > Since then, we added the napi_complete_done() interface, and commit
> > 364b6055738b ("net: busy-poll: return busypolling status to drivers")
> > allowed drivers to avoid re-arming NIC interrupts if we made a promise
> > that their NAPI poll() handler would be called in the near future.
> >
> > This infrastructure can now be leveraged, thanks to a new device
> > parameter which allows arming the napi hrtimer instead of re-arming
> > the device hard IRQ.
> >
> > We have noticed that on some servers with 32 RX queues or more, the chit-chat
> > between the NIC and the host caused by IRQ delivery and re-arming could hurt
> > throughput by ~20% on a 100Gbit NIC.
> >
> > In contrast, hrtimers use local (percpu) resources and might have
> > lower cost.
> >
> > The new tunable, named napi_defer_hard_irqs, is placed in the same
> > hierarchy as gro_flush_timeout (/sys/class/net/ethX/).
> >
>
> Hi Eric,
> could you please add some Documentation for this new sysfs tunable? Thanks!
> Looks like gro_flush_timeout is missing the same :).
Yes. I was planning to add this in
Documentation/networking/scaling.rst, once our fires are extinguished.
>
>
> > By default, both gro_flush_timeout and napi_defer_hard_irqs are zero.
> >
> > This patch does not change the prior behavior of gro_flush_timeout
> > if used alone: NIC hard irqs are re-armed as before.
> >
> > One concrete usage can be:
> >
> > echo 20000 >/sys/class/net/eth1/gro_flush_timeout
> > echo 10 >/sys/class/net/eth1/napi_defer_hard_irqs
> >
> > If at least one packet is retired, then we reset the napi counter
> > to 10 (napi_defer_hard_irqs), ensuring at least 10 more periodic scans
> > of the queue.
> >
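(Worked example: gro_flush_timeout is expressed in nanoseconds, so the
values above arm a 20 usec hrtimer. After the last packet is retired,
the queue keeps being scanned every 20 usec for up to 10 more rounds,
i.e. the hard IRQ stays masked for at least 10 * 20 usec = 200 usec of
idle polling before it is re-armed.)
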
> > On busy queues this should avoid NIC hard IRQs, whereas before this patch
> > IRQ avoidance was only possible if napi->poll() exhausted its budget
> > and did not call napi_complete_done().
> >
>
> I was confused here for a second, so let me just clarify how this is intended
> to look for pure TX completion IRQs:
>
> napi->poll() calls napi_complete_done() with an accurate work_done value, but
> then still returns 0 because TX completion work doesn't consume NAPI budget.
If the napi budget was consumed, the driver does _not_ call
napi_complete() or napi_complete_done() anyway.

If the budget is not consumed, then napi_complete_done(napi, X>0) is
allowed to return false when napi_defer_hard_irqs is not 0.
This means that the NIC hard irq will stay disabled for at least one
more round.
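
To illustrate, here is a minimal userspace sketch of that deferral
decision. It is simplified from the patch, and the struct and function
names are made up for illustration; only the two tunables mirror the
real sysfs names:

#include <stdbool.h>

/* Illustrative model only -- not the actual kernel code. */
struct napi_model {
        unsigned int defer_hard_irqs_count;     /* per-napi countdown */
        unsigned int napi_defer_hard_irqs;      /* sysfs tunable */
        unsigned long gro_flush_timeout;        /* sysfs tunable, nsec */
};

/* Returns true if the driver should re-arm the NIC hard IRQ,
 * false if the napi hrtimer is armed instead.
 */
bool napi_complete_done_model(struct napi_model *n, int work_done)
{
        unsigned long timeout = 0;
        bool ret = true;

        if (work_done)  /* at least one packet retired: reload countdown */
                n->defer_hard_irqs_count = n->napi_defer_hard_irqs;

        if (n->defer_hard_irqs_count > 0) {
                n->defer_hard_irqs_count--;
                timeout = n->gro_flush_timeout;
                if (timeout)    /* the hrtimer will trigger the next poll */
                        ret = false;
        }
        /* The kernel then arms the timer when timeout != 0, roughly:
         * hrtimer_start(&n->timer, ns_to_ktime(timeout), ...);
         */
        return ret;
}

So napi_complete_done(napi, X>0) reloads the countdown, and as long as
both the countdown and gro_flush_timeout are non-zero the function
returns false, keeping the hard IRQ masked.
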
>
>
> > This feature can also be used to work around some non-optimal NIC irq
> > coalescing strategies.
> >
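(For instance, where one would otherwise tune NIC-side coalescing with
something like "ethtool -C eth1 rx-usecs 20", the gro_flush_timeout /
napi_defer_hard_irqs pair can provide similar host-side batching
without depending on the NIC's coalescing implementation. The eth1
name and the 20 usec value are just illustrative.)
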
> > Having the ability to insert XX usec delays between each napi->poll()
> > can increase cache efficiency, since we increase batch sizes.
> >
> > It also keeps serving cpus from staying idle too long, reducing tail
> > latencies.
> >
> > Co-developed-by: Luigi Rizzo <lrizzo@...gle.com>
> > Signed-off-by: Eric Dumazet <edumazet@...gle.com>