Date:   Thu, 12 Jan 2023 16:09:00 -0800
From:   Jakub Kicinski <kuba@...nel.org>
To:     Jeremy Harris <jeharris@...hat.com>
Cc:     netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next 0/7] NIC driver Rx ring ECN

On Thu, 12 Jan 2023 14:06:50 +0000 Jeremy Harris wrote:
> On 11/01/2023 18:46, Jakub Kicinski wrote:
> > Do you have any reason to believe that it actually helps anything?  
> 
> I've not measured actual drop-rates, no.
> 
> > NAPI with typical budget of 64 is easily exhausted (you just need
> > two TSO frames arriving at once with 1500 MTU).  
> 
> I see typical systems with 300, not 64

Say more? I thought you were going by NAPI budget which should be 64
in bnx2x.
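
(To spell out the arithmetic behind the budget-exhaustion point:
assuming a ~1448-byte MSS on a 1500 MTU link, a full 64 KB TSO
frame resegments on the wire into roughly

	65536 / 1448 ~= 45 packets

so two such frames arriving at once is ~90 packets, well past a
64-packet budget.)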

> - but it's a valid point.
> It's not the right measurement to try to control.
> Perhaps I should work harder to locate the ring size within
> the bnx2 and bnx2x drivers.

Perhaps the older devices give you some extra information here.
Normally on the Rx path you don't know how long the queue is;
you just check whether the next descriptor has been filled or not.
"Looking ahead" may be costly because you're accessing the same 
memory as the device.
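
Roughly, the usual pattern looks like this (a minimal sketch with
an invented descriptor layout, not any particular driver's):

struct rx_desc {			/* hypothetical layout */
	__le32 status;			/* RX_DESC_DONE set by HW when filled */
	__le32 len;
	__le64 addr;
};

#define RX_DESC_DONE	BIT(0)

while (work_done < budget) {
	struct rx_desc *desc = &ring->desc[ring->next_to_clean];

	if (!(le32_to_cpu(desc->status) & RX_DESC_DONE))
		break;		/* next slot not filled yet */

	dma_rmb();		/* read the rest only after the done bit */

	/* ... unmap desc->addr, build an skb of desc->len bytes ... */

	if (++ring->next_to_clean == ring->count)
		ring->next_to_clean = 0;
	work_done++;
}

Every status read hits memory the device is also writing, which is
why peeking several descriptors ahead to estimate queue depth isn't
free.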

> If I managed that (it's already the case for the xgene example),
> would your opinion change?

It might be cool if we could retrofit some second-order signal into 
the time-based machinery. The problem is that we don't actually 
have any time-based machinery upstream yet :(
And designing interfaces around decade-old HW seems shortsighted.

> > Host level congestion is better detected using time / latency signals.
> > Timestamp the packet at the NIC and compare the Rx time to current time
> > when processing by the driver.
> > 
> > Google search "Google Swift congestion control".  
> 
> Nice, but
> - requires waiting for NICs that do timestamping

Grep for HWTSTAMP_FILTER_ALL, there's HW out there.
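
From userspace, enabling Rx stamps on all packets is just the
standard SIOCSHWTSTAMP dance ("eth0" is a placeholder; error
handling trimmed):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

int main(void)
{
	struct hwtstamp_config cfg = {
		.tx_type   = HWTSTAMP_TX_OFF,
		.rx_filter = HWTSTAMP_FILTER_ALL,	/* stamp every Rx packet */
	};
	struct ifreq ifr = {0};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&cfg;

	if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0)
		perror("SIOCSHWTSTAMP");	/* driver doesn't support it */
	return 0;
}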

> - does not address Rx drops due to Rx ring-buffer overflow

It's a stronger signal than "continuous run of packets".
You can have a standing queue of 2 packets and keep processing
forever. There's no congestion or overload, and you'd see that
the timestamps are recent.

I experimented last year with implementing CoDel on the input queues;
it worked pretty well (scroll down about half way):

https://developers.facebook.com/blog/post/2022/04/25/investigating-tcp-self-throttling-triggered-overload/
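
The core of it is just comparing the HW Rx stamp against "now" while
the driver processes the packet; something like this sketch (not the
code from the post; assumes the NIC clock is synced to CLOCK_REALTIME):

static bool rx_sojourn_over_target(struct sk_buff *skb, u64 target_ns)
{
	ktime_t hwts = skb_hwtstamps(skb)->hwtstamp;

	if (!hwts)		/* no HW stamp on this packet */
		return false;

	/* How long the packet sat in the ring before we got to it. */
	return ktime_to_ns(ktime_sub(ktime_get_real(), hwts)) > target_ns;
}

CoDel then acts on that: if the sojourn time stays above target for
longer than an interval, start marking/dropping.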
