Date:   Thu, 5 May 2022 17:33:39 +0000
From:   Robert Hancock <robert.hancock@...ian.com>
To:     "radheys@...inx.com" <radheys@...inx.com>,
        "kuba@...nel.org" <kuba@...nel.org>
CC:     "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "michals@...inx.com" <michals@...inx.com>,
        "pabeni@...hat.com" <pabeni@...hat.com>,
        "edumazet@...gle.com" <edumazet@...gle.com>,
        "harinik@...inx.com" <harinik@...inx.com>
Subject: Re: [PATCH net-next] net: axienet: Use NAPI for TX completion path

On Wed, 2022-05-04 at 19:20 -0700, Jakub Kicinski wrote:
> On Mon, 2 May 2022 19:30:51 +0000 Radhey Shyam Pandey wrote:
> > > This driver was using the TX IRQ handler to perform all TX completion
> > > tasks. Under heavy TX network load, this can cause significant irqs-off
> > > latencies (found to be in the hundreds of microseconds using ftrace).
> > > This can cause other issues, such as overrunning serial UART FIFOs when
> > > using high baud rates with limited UART FIFO sizes.
> > > 
> > > Switch to using the NAPI poll handler to perform the TX completion work
> > > to get this out of hard IRQ context and avoid the IRQ latency impact.  
> > 
> > Thanks for the patch. I assume for simulating heavy network load we
> > are using netperf/iperf. Do we have some details on the benchmark
> > before and after adding TX NAPI? I want to see the impact on
> > throughput.
> 
> Seems like a reasonable ask, let's get the patch reposted 
> with the numbers in the commit message.

I didn't mean to ignore that request; it looks like I never received Radhey's
email directly, oddly enough.

I did a test with iperf3 from the board (Xilinx MPSoC ZU9EG platform) connected
to a Linux PC through a switch at 1G link speed. With TX NAPI in place I saw
about 942 Mbps TX; with the previous code, 941 Mbps. RX throughput was likewise
unchanged at 941 Mbps. So there is no significant change either way. I can spin
another version of the patch that includes these numbers.
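
For anyone following the thread without the driver in front of them, the change
is the standard hard-IRQ-to-NAPI deferral: the TX interrupt handler just masks
the interrupt and schedules NAPI, and the poll callback reaps completed
descriptors in softirq context with interrupts enabled. A minimal sketch of
that pattern (all my_* names are hypothetical stand-ins, not the actual axienet
code):

/*
 * Sketch of TX completion moved from hard IRQ to NAPI poll.
 * The napi_tx context would be registered with netif_napi_add()
 * at probe time.  my_* helpers are hypothetical.
 */
#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct my_priv {
	struct napi_struct napi_tx;	/* TX completion NAPI context */
	/* ... DMA ring state, netdev pointer, etc. ... */
};

static void my_mask_tx_irq(struct my_priv *priv);
static void my_unmask_tx_irq(struct my_priv *priv);
static bool my_tx_desc_complete(struct my_priv *priv);
static void my_unmap_and_free_skb(struct my_priv *priv);

/* Hard IRQ handler: do the bare minimum and defer to softirq. */
static irqreturn_t my_tx_irq(int irq, void *dev_id)
{
	struct my_priv *priv = dev_id;

	my_mask_tx_irq(priv);		/* ack/mask the TX interrupt */
	napi_schedule(&priv->napi_tx);	/* run my_tx_poll() in softirq */
	return IRQ_HANDLED;
}

/* NAPI poll: reap completed TX descriptors, bounded by budget. */
static int my_tx_poll(struct napi_struct *napi, int budget)
{
	struct my_priv *priv = container_of(napi, struct my_priv, napi_tx);
	int done = 0;

	while (done < budget && my_tx_desc_complete(priv)) {
		my_unmap_and_free_skb(priv);	/* unmap DMA, free skb */
		done++;
	}

	/* Re-arm the interrupt only once all pending work is drained. */
	if (done < budget && napi_complete_done(napi, done))
		my_unmask_tx_irq(priv);

	return done;
}

With this shape the irqs-off window shrinks to the mask-and-schedule step,
which is what removes the latency spikes described in the commit message.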

-- 
Robert Hancock
Senior Hardware Designer, Calian Advanced Technologies
www.calian.com
