Message-ID: <ZDYo6gwe0ukT3ozm@P9FQF9L96D.corp.robot.car>
Date: Tue, 11 Apr 2023 20:43:38 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Ingo Rohloff <ingo.rohloff@...terbach.com>,
	Lars-Peter Clausen <lars@...afoo.de>,
	robert.hancock@...ian.com,
	Nicolas.Ferre@...rochip.com,
	claudiu.beznea@...rochip.com,
	davem@...emloft.net,
	netdev@...r.kernel.org,
	tomas.melin@...sala.com
Subject: Re: [PATCH 0/1] Alternative, restart tx after tx used bit read

On Tue, Apr 11, 2023 at 07:07:15PM -0700, Jakub Kicinski wrote:
> On Fri, 7 Apr 2023 23:33:48 +0200 Ingo Rohloff wrote:
> > Analysis:
> > Commit 404cd086f29e867f ("net: macb: Allocate valid memory for TX and RX BD
> > prefetch") mentions:
> >
> >     GEM version in ZynqMP and most versions greater than r1p07 supports
> >     TX and RX BD prefetch. The number of BDs that can be prefetched is a
> >     HW configurable parameter. For ZynqMP, this parameter is 4.
> >
> > I think what happens is this:
> > Example Scenario (SW == linux kernel, HW == cadence ethernet IP).
> > 1) SW has written TX descriptors 0..7
> > 2) HW is currently transmitting TX descriptor 6.
> >    HW has already prefetched TX descriptors 6,7,8,9.
> > 3) SW writes TX descriptor 8 (clearing TX_USED)
> > 4) SW writes the TSTART bit.
> >    HW ignores this, because it is still transmitting.
> > 5) HW transmits TX descriptor 7.
> > 6) HW reaches descriptor 8; because this descriptor
> >    has already been prefetched, HW sees a non-active
> >    descriptor (TX_USED set) and stops transmitting.
>
> This sounds broken, any idea if this is how the IP is supposed to work
> or it may be an integration issue in Zynq? The other side of this
> question is how expensive the workaround is - a spin lock and two extra
> register reads on completion seems like a lot.
>
> Roman, Lars, have you seen Tx stalls on your macb setups?

Not yet, but also we have a custom patch that reduces the number of tx
queues to 1, which "fixed" some lockup we've seen in the past.
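
For context, the workaround cost Jakub describes ("a spin lock and two
extra register reads on completion") would look roughly like the sketch
below: a check run from the TX completion path that re-issues TSTART if
the transmitter has gone idle while software still has descriptors
queued. The helper and bit names (macb_readl/macb_writel, MACB_BIT(TGO),
MACB_BIT(TSTART), struct macb_queue with tx_head/tx_tail, bp->lock) are
assumed from the upstream macb driver; the function itself is only an
illustration of the idea, not the posted patch.

/*
 * Illustrative sketch, not the posted patch: restart TX from the
 * completion path if the controller stopped on a stale prefetched
 * descriptor while software still has packets queued.
 */
#include "macb.h"	/* drivers/net/ethernet/cadence/macb.h */

static void macb_restart_tx_if_stalled(struct macb_queue *queue)
{
	struct macb *bp = queue->bp;
	unsigned long flags;

	spin_lock_irqsave(&bp->lock, flags);

	/* Extra register read #1: transmitter still running (TGO set)? */
	if (macb_readl(bp, TSR) & MACB_BIT(TGO))
		goto out;

	/* Nothing queued beyond what hardware already completed? */
	if (queue->tx_head == queue->tx_tail)
		goto out;

	/*
	 * Hardware may have given up on a prefetched copy of a descriptor
	 * whose TX_USED bit software has since cleared; kick TSTART so it
	 * re-fetches the ring from memory.  Extra register read #2 is the
	 * NCR read in the read-modify-write below.
	 */
	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));

out:
	spin_unlock_irqrestore(&bp->lock, flags);
}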