Message-ID: <CAKgT0UcNPBE17T7g4y0XSkEZN89C69TfjWurAap5Yx_8XWLk1w@mail.gmail.com>
Date: Mon, 15 Apr 2024 16:57:54 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Yunsheng Lin <linyunsheng@...wei.com>, netdev@...r.kernel.org, 
	Alexander Duyck <alexanderduyck@...com>, davem@...emloft.net, pabeni@...hat.com
Subject: Re: [net-next PATCH 13/15] eth: fbnic: add basic Rx handling

On Mon, Apr 15, 2024 at 3:01 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Mon, 15 Apr 2024 11:55:37 -0700 Alexander Duyck wrote:
> > It would take a few more changes to make it all work. Basically we
> > would need to map the page into every descriptor entry since the worst
> > case scenario would be that somehow we end up with things getting so
> > tight that the page is only partially mapped and we are working
> > through it as a subset of 4K slices with some at the beginning being
> > unmapped from the descriptor ring while some are still waiting to be
> > assigned to a descriptor and used. What I would probably have to look
> > at doing is adding some sort of cache on the ring to hold onto it
> > while we dole it out 4K at a time to the descriptors. Either that or
> > enforce a hard 16 descriptor limit where we have to assign a full page
> > with every allocation meaning we are at a higher risk for starving the
> > device for memory.
>
> Hm, that would be more work, indeed, but potentially beneficial. I was
> thinking of separating the page allocation and draining logic a bit
> from the fragment handling logic.
>
> #define RXPAGE_IDX(idx)         ((idx) >> (PAGE_SHIFT - 12))
>
> in fbnic_clean_bdq():
>
>         while (RXPAGE_IDX(head) != RXPAGE_IDX(hw_head))
>
> refer to rx_buf as:
>
>         struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx >> LOSE_BITS];
>
> Refill always works in batches that are a multiple of PAGE_SIZE / 4K.
>
> > The bigger issue would be how could we test it? This is an OCP NIC and
> > as far as I am aware we don't have any systems available that would
> > support a 64K page. I suppose I could rebuild the QEMU for an
> > architecture that supports 64K pages and test it. It would just be
> > painful to have to set up a virtual system to test code that would
> > literally never be used again. I am not sure QEMU can generate enough
> > stress to really test the page allocator and make sure all corner
> > cases are covered.
>
> The testing may be tricky. We could possibly test with hacking up the
> driver to use compound pages (say always allocate 16k) and making sure
> we don't refer to PAGE_SIZE directly in the test.
>
> BTW I have a spreadsheet of "promises", I'd be fine if we set a
> deadline for FBNIC to gain support for PAGE_SIZE != 4k and limit the
> Kconfig to x86-only for now..

Why set a deadline? It doesn't make sense to add this as a feature for now.

I would be fine with limiting it to x86-only and stating that if we
ever need to support an architecture with a !4K page size, we can
cross that bridge when we get there. At that point we would be much
more likely to have access to a platform to test on, rather than
adding overhead to the code now to support a setup that this device
may never see in its lifetime.
