Message-ID: <CAGngYiV=bzc72dpA6TJ7Bo2wcTihmB83HCU63pK4Z_jZ2frKww@mail.gmail.com>
Date: Wed, 16 Dec 2020 19:57:28 -0500
From: Sven Van Asbroeck <thesven73@...il.com>
To: Andrew Lunn <andrew@...n.ch>
Cc: Florian Fainelli <f.fainelli@...il.com>,
Jakub Kicinski <kuba@...nel.org>,
Bryan Whitehead <bryan.whitehead@...rochip.com>,
Microchip Linux Driver Support <UNGLinuxDriver@...rochip.com>,
David S Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net v1 2/2] lan743x: boost performance: limit PCIe
bandwidth requirement

Hi Andrew,

On Wed, Dec 9, 2020 at 9:10 AM Andrew Lunn <andrew@...n.ch> wrote:
>
> 9K is not a nice number, since for each allocation it probably has to
> find 4 contiguous pages. See what the performance difference is with
> 2K, 4K and 8K. If there is a big difference, you might want to special
> case when the MTU is set for jumbo packets, or check if the hardware
> can do scatter/gather.
>
> You also need to be careful with caches and speculation. As you have
> seen, bad things can happen. And it can be a lot more subtle. If some
> code is accessing the page before the buffer and gets towards the end
> of the page, the CPU might speculatively bring in the next page, i.e
> the start of the buffer. If that happens before the DMA operation, and
> you don't invalidate the cache correctly, you get hard to find
> corruption.

Thank you for the guidance. When I keep the 9K buffers and sync
only the buffer space that is actually used (MTU length when mapping,
received packet length when unmapping), the corruption disappears and
performance improves. But setting the buffer size to the MTU still
provides much better performance. I do not understand why (yet).
It seems that cache and DMA behaviour/performance on arm32
(ARMv7) are very different compared to x86.
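
For reference, the partial-sync change is roughly the pattern below.
This is only a sketch, not the actual lan743x patch; the helper names
(rx_buf_give_to_hw, rx_buf_take_from_hw) and the length parameters
are illustrative. The buffer stays mapped at its full 9K size, and
only the portion the hardware can touch / did touch is synced:

    #include <linux/dma-mapping.h>

    /* Before handing the descriptor to the NIC: the device will write
     * at most an MTU-sized frame, so only that much of the 9K buffer
     * needs to be made device-visible. */
    static void rx_buf_give_to_hw(struct device *dev,
                                  dma_addr_t rx_buf_dma,
                                  unsigned int mtu_len)
    {
            dma_sync_single_for_device(dev, rx_buf_dma, mtu_len,
                                       DMA_FROM_DEVICE);
    }

    /* On receive completion: only the bytes the NIC actually wrote
     * need to be made CPU-visible before the skb goes up the stack. */
    static void rx_buf_take_from_hw(struct device *dev,
                                    dma_addr_t rx_buf_dma,
                                    unsigned int pkt_len)
    {
            dma_sync_single_for_cpu(dev, rx_buf_dma, pkt_len,
                                    DMA_FROM_DEVICE);
    }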