lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <2ff79a56-bf32-731b-a6ab-94654b8a3b31@redhat.com>
Date:   Thu, 12 Jan 2023 14:06:50 +0000
From:   Jeremy Harris <jeharris@...hat.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next 0/7] NIC driver Rx ring ECN

On 11/01/2023 18:46, Jakub Kicinski wrote:
> Do you have any reason to believe that it actually helps anything?

I've not measured actual drop-rates, no.

> NAPI with typical budget of 64 is easily exhausted (you just need
> two TSO frames arriving at once with 1500 MTU).

I see typical systems with 300, not 64 - but it's a valid point.
It's not the right measurement to try to control.
Perhaps I should work harder to locate the ring size within
the bnx2 and bnx2x drivers.

If I managed that (it being already the case for the xgene example)
would your opinions change?

> Host level congestion is better detected using time / latency signals.
> Timestamp the packet at the NIC and compare the Rx time to current time
> when processing by the driver.
> 
> Google search "Google Swift congestion control".

Nice, but
- requires we wait for timestamping-NICs
- does not address Rx drops due to Rx ring-buffer overflow

-- 
Cheers,
   Jeremy

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ