Message-ID: <CAK8P3a0Rouw8jHHqGhKtMu-ks--bqpVYj_+u4-Pt9VoFOK7nMw@mail.gmail.com>
Date: Fri, 6 May 2022 10:45:29 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Rafał Miłecki <zajec5@...il.com>
Cc: Andrew Lunn <andrew@...n.ch>, Arnd Bergmann <arnd@...db.de>,
Alexander Lobakin <alexandr.lobakin@...el.com>,
Network Development <netdev@...r.kernel.org>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
Russell King <linux@...linux.org.uk>,
Felix Fietkau <nbd@....name>,
"openwrt-devel@...ts.openwrt.org" <openwrt-devel@...ts.openwrt.org>,
Florian Fainelli <f.fainelli@...il.com>
Subject: Re: Optimizing kernel compilation / alignments for network performance

On Fri, May 6, 2022 at 9:44 AM Rafał Miłecki <zajec5@...il.com> wrote:
>
> On 5.05.2022 18:04, Andrew Lunn wrote:
> >> you'll see that most used functions are:
> >> v7_dma_inv_range
> >> __irqentry_text_end
> >> l2c210_inv_range
> >> v7_dma_clean_range
> >> bcma_host_soc_read32
> >> __netif_receive_skb_core
> >> arch_cpu_idle
> >> l2c210_clean_range
> >> fib_table_lookup
> >
> > There is a lot of cache management functions here.

Indeed, so optimizing the coherency management (see Felix' reply)
is likely to help most in making the driver faster, but that does not
explain why the alignment of the object code has such a big impact
on performance.

To investigate the alignment further, what I was actually looking for
is a comparison of the profile of the slow and fast case. Here I would
expect that the slow case spends more time in one of the functions
that don't deal with cache management (maybe fib_table_lookup or
__netif_receive_skb_core).

A few other thoughts:
- bcma_host_soc_read32() is a fundamentally slow operation; maybe
some of the calls can be turned into a relaxed read (see the first
sketch below), like the readback in bgmac_chip_intrs_off() or the
'poll again' at the end of bgmac_poll(), though obviously not the
one in bgmac_dma_rx_read().
It may even be possible to avoid some of the reads entirely:
checking for more data in bgmac_poll() may actually be
counterproductive depending on the workload.
- The higher-end networking SoCs are usually cache-coherent and
can avoid the cache management entirely. There is a slim chance
that this chip is designed that way and it just needs to be enabled
properly. Most low-end chips don't implement the coherent
interconnect though, and I suppose you have checked this already.
- bgmac_dma_rx_update_index() and bgmac_dma_tx_add() appear
to have an extraneous dma_wmb(), which should already be implied
by the non-relaxed writel() in bgmac_write() (second sketch below).
- accesses to the DMA descriptor don't show up in the profile here,
but it looks like they can get misoptimized by the compiler. I would
generally use READ_ONCE() and WRITE_ONCE() for these (third sketch
below) to ensure that you don't end up with extra or out-of-order
accesses. This also makes it clearer to the reader that something
special happens here.
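
To illustrate the relaxed-read idea, here is a minimal sketch of what
this could look like for the SoC host. The _relaxed name is made up
for illustration, and it assumes the register window is mapped at
core->io_addr the way the existing bcma_host_soc_read32() uses it:

#include <linux/io.h>		/* readl_relaxed() */
#include <linux/bcma/bcma.h>	/* struct bcma_device */

static u32 bcma_host_soc_read32_relaxed(struct bcma_device *core,
					u16 offset)
{
	/*
	 * readl_relaxed() skips the ordering barriers that a plain
	 * readl() adds, which is enough for a pure readback such as
	 * the one at the end of bgmac_chip_intrs_off(), where no
	 * ordering against DMA is required.
	 */
	return readl_relaxed(core->io_addr + offset);
}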
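
For the dma_wmb() point, the change would be along these lines,
sketched from memory of bgmac_dma_rx_update_index(), so the field
names may not match the driver exactly:

static void bgmac_dma_rx_update_index(struct bgmac *bgmac,
				      struct bgmac_dma_ring *ring)
{
	/*
	 * No explicit dma_wmb() needed here: when bgmac_write() ends
	 * up in a non-relaxed writel(), that writel() already orders
	 * the earlier descriptor stores in normal memory before the
	 * MMIO store that tells the hardware about the new index.
	 */
	bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_INDEX,
		    ring->index_base +
		    ring->end * sizeof(struct bgmac_dma_desc));
}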
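
And for the descriptor accesses, something like the helper below,
again only a sketch: bgmac_dma_desc_fill() does not exist in the
driver, and the field names follow the bgmac descriptor layout as far
as I remember it (ctl0/ctl1/addr_low/addr_high from bgmac.h):

static void bgmac_dma_desc_fill(struct bgmac_dma_desc *desc,
				dma_addr_t addr, u32 ctl0, u32 ctl1)
{
	/*
	 * WRITE_ONCE() keeps the compiler from tearing, duplicating
	 * or reordering the stores to memory that the hardware reads,
	 * and documents that this is shared data.
	 */
	WRITE_ONCE(desc->addr_low, cpu_to_le32(lower_32_bits(addr)));
	WRITE_ONCE(desc->addr_high, cpu_to_le32(upper_32_bits(addr)));
	WRITE_ONCE(desc->ctl0, cpu_to_le32(ctl0));
	WRITE_ONCE(desc->ctl1, cpu_to_le32(ctl1));
}

On the read side, le32_to_cpu(READ_ONCE(desc->ctl0)) would be the
matching pattern.
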
> > Might sound odd,
> > but have you tried disabling SMP? These cache functions need to
> > operate across all CPUs, and the communication between CPUs can slow
> > them down. If there is only one CPU, these cache functions get simpler
> > and faster.
> >
> > It just depends on your workload. If you have 1 CPU loaded to 100% and
> > the other 3 idle, you might see an improvement. If you actually need
> > more than one CPU, it will probably be worse.
>
> It seems to lower my NAT speed from ~362 Mb/s to 320 Mb/s but it feels
> more stable now (lower variations). Let me spend some time on more
> testing.
>
>
> FWIW during all my tests I was using:
> echo 2 > /sys/class/net/eth0/queues/rx-0/rps_cpus
> that is what I need to get similar speeds across iperf sessions
>
> With
> echo 0 > /sys/class/net/eth0/queues/rx-0/rps_cpus
> my NAT speeds were jumping between 4 speeds:
> 273 Mbps / 315 Mbps / 353 Mbps / 425 Mbps
> (every time I started iperf kernel jumped into one state and kept the
> same iperf speed until stopping it and starting another session)
>
> With
> echo 1 > /sys/class/net/eth0/queues/rx-0/rps_cpus
> my NAT speeds were jumping between 2 speeds:
> 284 Mbps / 408 Mbps

Can you try using 'numactl -C' to pin the iperf processes to
a particular CPU core? This may be related to the locality of
the user process relative to where the interrupts end up.
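Something like this, assuming iperf3 and picking CPU 1 as an example:

    numactl -C 1 iperf3 -c <server address>
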
Arnd