Message-ID: <20240514104739.2d06fb10@kernel.org>
Date: Tue, 14 May 2024 10:47:39 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Heiner Kallweit <hkallweit1@...il.com>
Cc: Alexander Lobakin <aleksander.lobakin@...el.com>, Eric Dumazet
 <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, David Miller
 <davem@...emloft.net>, Realtek linux nic maintainers
 <nic_swsd@...ltek.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
 Ken Milmore <ken.milmore@...il.com>
Subject: Re: [PATCH net 2/2] r8169: disable interrupts also for
 GRO-scheduled NAPI

On Tue, 14 May 2024 19:09:21 +0200 Heiner Kallweit wrote:
> Thanks for the explanation. What is the benefit of acking interrupts
> at the beginning of NAPI poll, compared to acking them after
> napi_complete_done()?
> If budget is exceeded and we know we'll be polled again, why ack
> the interrupts in between?

That's a fair point. The main concern with acking after processing
is that we may miss an event. If we ack before processing we can
occasionally take an unnecessary IRQ, but we'll never let a packet
rot on the ring because it arrived between processing the packets
and acking the IRQ.
But you know the driver better; maybe there's a clean way of avoiding
the missed IRQs (not sure it would be worth the complexity, tho TBH).
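
Roughly, the difference between the two orderings (an illustrative
sketch only -- the example_* struct and helpers are placeholders,
not the actual r8169 code):

#include <linux/netdevice.h>

/* Placeholder private data and helpers, not the real r8169 ones. */
struct example_priv {
	struct napi_struct napi;
};

static void example_ack_events(struct example_priv *priv) { /* ack chip events */ }
static void example_irq_enable(struct example_priv *priv) { /* unmask chip IRQs */ }
static int  example_rx(struct example_priv *priv, int budget) { return 0; }
static void example_tx(struct example_priv *priv) { }

static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_priv *priv = container_of(napi, struct example_priv, napi);
	int work_done;

	/* Option A: ack first.  An event raised while we walk the ring sets
	 * the status bit again, so we may take one extra interrupt once the
	 * IRQ is re-enabled, but nothing is ever acked without being seen.
	 */
	example_ack_events(priv);

	work_done = example_rx(priv, budget);
	example_tx(priv);

	if (work_done < budget && napi_complete_done(napi, work_done)) {
		/* Option B would ack here instead, right before re-enabling
		 * the IRQ: fewer spurious interrupts, but an event raised
		 * between the ring walk and the ack gets cleared without
		 * having been processed.
		 */
		example_irq_enable(priv);
	}

	return work_done;
}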

> I just tested with the defaults of gro_flush_timeout=20000 and
> napi_defer_hard_irqs=1, and iperf3 --bidir.
> The difference is massive. When acking after napi_complete_done()
> I see only a few hundred interrupts. Acking at the beginning of
> NAPI poll, it's a few hundred thousand interrupts.

That's quite odd. Maybe because rtl_tx() doesn't contribute to work
done? Maybe it'd be better to set work done to min(budget, !!tx, rx)?
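
One reading of that, as a sketch (rx_done/tx_done are hypothetical
counters here, not existing r8169 variables, and whether TX should
count at all is debatable):

/* Sketch: clamp the work reported to NAPI so that TX completions
 * count as at least one unit of work, and a TX-only poll doesn't
 * report 0 and complete NAPI right away.
 */
static int example_work_done(int rx_done, int tx_done, int budget)
{
	int work = rx_done;

	if (tx_done)
		work = max(work, 1);

	return min(work, budget);
}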

Or maybe the disabling is not working somehow?

napi_defer_hard_irqs=1 should make us reschedule NAPI if there was _any_
work done, meaning we'd re-enable IRQs only after a completely empty NAPI
run. On an empty NAPI run it should not matter whether we acked before
or after checking for packets, or so I'd naively think.
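
Simplified, the deferral decision in napi_complete_done() goes roughly
like this (paraphrased from memory, not the exact mainline code):

	/* Paraphrase: if the poll did any work, arm the hard-IRQ deferral;
	 * while it's armed and gro_flush_timeout is set, don't let the
	 * driver re-enable the IRQ -- re-poll from the hrtimer instead.
	 */
	if (work_done)
		n->defer_hard_irqs_count = READ_ONCE(dev->napi_defer_hard_irqs);
	if (n->defer_hard_irqs_count > 0) {
		n->defer_hard_irqs_count--;
		timeout = READ_ONCE(dev->gro_flush_timeout);
		if (timeout) {
			hrtimer_start(&n->timer, ns_to_ktime(timeout),
				      HRTIMER_MODE_REL_PINNED);
			return false;
		}
	}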
