Message-ID: <20240819190755.0ed0a959@kernel.org>
Date: Mon, 19 Aug 2024 19:07:55 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Martin Karsten <mkarsten@...terloo.ca>
Cc: Stanislav Fomichev <sdf@...ichev.me>, netdev@...r.kernel.org, Joe Damato
<jdamato@...tly.com>, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com, Alexander Lobakin
<aleksander.lobakin@...el.com>, Alexander Viro <viro@...iv.linux.org.uk>,
Breno Leitao <leitao@...ian.org>, Christian Brauner <brauner@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jan Kara
<jack@...e.cz>, Jiri Pirko <jiri@...nulli.us>, Johannes Berg
<johannes.berg@...el.com>, Jonathan Corbet <corbet@....net>, "open
list:DOCUMENTATION" <linux-doc@...r.kernel.org>, "open list:FILESYSTEMS
(VFS and infrastructure)" <linux-fsdevel@...r.kernel.org>, open list
<linux-kernel@...r.kernel.org>, Lorenzo Bianconi <lorenzo@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Sebastian Andrzej Siewior
<bigeasy@...utronix.de>
Subject: Re: [RFC net-next 0/5] Suspend IRQs during preferred busy poll
On Tue, 13 Aug 2024 21:14:40 -0400 Martin Karsten wrote:
> > What about NIC interrupt coalescing? defer_hard_irqs_count was supposed
> > to be used with NICs which either don't have IRQ coalescing or have a
> > broken implementation. The timeout of 200 usec should be perfectly within
> > range of what NICs can support.
> >
> > If the NIC IRQ coalescing works, instead of adding a new timeout value
> > we could add a new deferral control (replacing defer_hard_irqs_count)
> > which would always kick in after seeing prefer_busy_poll() but also
> > not kick in if the busy poll harvested 0 packets.
> Maybe I am missing something, but I believe this would have the same
> problem that we describe for gro-timeout + defer-irq. When busy poll
> does not harvest packets and the application thread is idle and goes to
> sleep, it would then take up to 200 us to get the next interrupt. This
> considerably increases tail latencies under low load.
>
> In order to get low latencies under low load, the NIC timeout would have to
> be something like 20 us, but under high load the application thread will
> be busy for longer than 20 us and the interrupt (and softirq) will come
> too early and cause interference.
An FSM-like diagram would go a long way in clarifying things :)
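Something like this, if I'm reading the proposed mechanism right
(state labels are mine):

  [idle, HW IRQs enabled]
     | IRQ fires, application wakes
     v
  [app busy polling, IRQs suspended]
     | busy poll harvests 0 packets, app goes to sleep
     v
  [idle, short (~20 us) timer armed]
     | timer fires, IRQs re-enabled
     v
  back to [idle, HW IRQs enabled]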
> It is tempting to think of the second timeout as 0 and in fact re-enable
> interrupts right away. We have tried it, but it leads to a lot of
> interrupts and corresponding inefficiencies, since a system below
> capacity frequently switches between busy and idle. Using a small
> timeout (20 us) for modest deferral and batching when idle is a lot more
> efficient.
I see. I think we are on the same page. What I was suggesting is to use
the HW timer instead of the short timer. But I suspect the NIC you're
using isn't really good at clearing IRQs before unmasking, meaning that
when you try to reactivate HW control there's already an IRQ pending
and it fires pointlessly. That matches my experience with mlx5.
If the NIC driver were to clear the IRQ state before running the NAPI
loop, we would have no pending IRQ by the time we unmask and activate
HW IRQs.
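Roughly along these lines -- the driver, register names and helpers
below are all made up, just to illustrate the ordering:

#include <linux/io.h>
#include <linux/netdevice.h>

/* made-up driver state, for illustration only */
struct mynic_ring {
	struct napi_struct napi;
	void __iomem *io_base;
};

#define MYNIC_IRQ_CAUSE	0x00	/* write-1-to-clear cause register */
#define MYNIC_IRQ_MASK	0x04	/* IRQ enable register */
#define MYNIC_IRQ_RX	BIT(0)

static int mynic_clean_rx(struct mynic_ring *ring, int budget);

static int mynic_napi_poll(struct napi_struct *napi, int budget)
{
	struct mynic_ring *ring = container_of(napi, struct mynic_ring, napi);
	int work_done;

	/* Clear the cause _before_ harvesting descriptors. Anything that
	 * lands after this write raises a fresh, wanted interrupt; anything
	 * we are about to harvest no longer has an IRQ pending behind it.
	 */
	writel(MYNIC_IRQ_RX, ring->io_base + MYNIC_IRQ_CAUSE);

	work_done = mynic_clean_rx(ring, budget);

	/* By the time we unmask here there is no stale pending IRQ left
	 * to fire pointlessly when HW control is reactivated.
	 */
	if (work_done < budget && napi_complete_done(napi, work_done))
		writel(MYNIC_IRQ_RX, ring->io_base + MYNIC_IRQ_MASK);

	return work_done;
}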
Sorry for the delay.