Date: Mon, 15 Apr 2024 15:01:36 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Yunsheng Lin <linyunsheng@...wei.com>, netdev@...r.kernel.org, Alexander
 Duyck <alexanderduyck@...com>, davem@...emloft.net, pabeni@...hat.com
Subject: Re: [net-next PATCH 13/15] eth: fbnic: add basic Rx handling

On Mon, 15 Apr 2024 11:55:37 -0700 Alexander Duyck wrote:
> It would take a few more changes to make it all work. Basically we
> would need to map the page into every descriptor entry, since in the
> worst case things could get so tight that the page is only partially
> mapped: we would be working through it as a subset of 4K slices, with
> some at the beginning already unmapped from the descriptor ring while
> others are still waiting to be assigned to a descriptor and used. What
> I would probably have to look at doing is adding some sort of cache on
> the ring to hold onto the page while we dole it out 4K at a time to
> the descriptors. Either that or enforce a hard 16-descriptor limit
> where we have to assign a full page with every allocation, meaning we
> are at a higher risk of starving the device for memory.

Hm, that would be more work, indeed, but potentially beneficial. I was
thinking of separating the page allocation and draining logic a bit
from the fragment handling logic.

#define RXPAGE_IDX(idx)		((idx) >> (PAGE_SHIFT - 12))

in fbnic_clean_bdq():

	while (RXPAGE_IDX(head) != RXPAGE_IDX(hw_head))

refer to rx_buf as:

	struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx >> LOSE_BITS];

Refill always works in batches that are a multiple of PAGE_SIZE / 4k.

> The bigger issue would be how could we test it? This is an OCP NIC and
> as far as I am aware we don't have any systems available that would
> support a 64K page. I suppose I could rebuild the QEMU for an
> architecture that supports 64K pages and test it. It would just be
> painful to have to set up a virtual system to test code that would
> literally never be used again. I am not sure QEMU can generate enough
> stress to really test the page allocator and make sure all corner
> cases are covered.

The testing may be tricky. We could possibly test with hacking up the
driver to use compound pages (say always allocate 16k) and making sure
we don't refer to PAGE_SIZE directly in the test.

BTW, I have a spreadsheet of "promises"; I'd be fine with setting a
deadline for FBNIC to gain PAGE_SIZE != 4k support and restricting its
Kconfig to x86-only for now.
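The x86-only restriction would be a one-line Kconfig dependency,
something like the sketch below (the symbol name and prompt text are
assumptions; COMPILE_TEST is the usual escape hatch for build coverage
on other architectures):

```
config FBNIC
	tristate "Meta Platforms Host Network Interface"
	depends on X86_64 || COMPILE_TEST
```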
