Message-ID: <CAKgT0Ufgn6O7GXZd8+YR53ciRdyBbWmw-qShy8vo1Es_Xn5KBA@mail.gmail.com>
Date: Tue, 16 Apr 2024 07:35:02 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org, 
	Alexander Duyck <alexanderduyck@...com>, davem@...emloft.net, pabeni@...hat.com
Subject: Re: [net-next PATCH 13/15] eth: fbnic: add basic Rx handling

On Tue, Apr 16, 2024 at 6:25 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>
> On 2024/4/16 6:01, Jakub Kicinski wrote:
> > On Mon, 15 Apr 2024 11:55:37 -0700 Alexander Duyck wrote:
> >> It would take a few more changes to make it all work. Basically we
> >> would need to map the page into every descriptor entry, since in the
> >> worst case things could get so tight that the page is only partially
> >> mapped: we would be working through it as a set of 4K slices, with
> >> some slices at the beginning already unmapped from the descriptor
> >> ring while others are still waiting to be assigned to a descriptor
> >> and used. What I would probably have to look at doing is adding some
> >> sort of cache on the ring to hold onto the page while we dole it out
> >> 4K at a time to the descriptors. Either that or enforce a hard 16
> >> descriptor limit where we have to assign a full page with every
> >> allocation, meaning we are at a higher risk of starving the device
> >> for memory.
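
A minimal sketch of the "cache on the ring" idea described above, assuming
4K slices carved out of the system page; the names (fbnic_pg_cache,
fbnic_pg_cache_get) are hypothetical, and DMA mapping/refcounting are
omitted:

#include <linux/mm.h>
#include <linux/sizes.h>
#include <linux/skbuff.h>	/* dev_alloc_pages() */

#define FBNIC_SLICE_SIZE	SZ_4K
#define FBNIC_SLICES_PER_PAGE	(PAGE_SIZE / FBNIC_SLICE_SIZE)

struct fbnic_pg_cache {			/* hypothetical */
	struct page	*page;		/* page currently being carved up */
	unsigned int	next_slice;	/* next unassigned 4K slice */
};

/* Hand out the next 4K slice, allocating a fresh page only once the
 * previous one has been fully doled out.
 */
static int fbnic_pg_cache_get(struct fbnic_pg_cache *pc,
			      struct page **page, unsigned int *offset)
{
	if (!pc->page || pc->next_slice >= FBNIC_SLICES_PER_PAGE) {
		pc->page = dev_alloc_pages(0);
		if (!pc->page)
			return -ENOMEM;
		pc->next_slice = 0;
	}

	*page = pc->page;
	*offset = pc->next_slice++ * FBNIC_SLICE_SIZE;
	return 0;
}

On a 64K-page system this hands out 16 slices per allocation; with 4K
pages it degenerates to one page per descriptor.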
> >
> > Hm, that would be more work, indeed, but potentially beneficial. I was
> > thinking of separating the page allocation and draining logic a bit
> > from the fragment handling logic.
> >
> > #define RXPAGE_IDX(idx)               ((idx) >> (PAGE_SHIFT - 12))
> >
> > in fbnic_clean_bdq():
> >
> >       while (RXPAGE_IDX(head) != RXPAGE_IDX(hw_head))
> >
> > refer to rx_buf as:
> >
> >       struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx >> LOSE_BITS];
> >
> > Refill always works in batches that are a multiple of PAGE_SIZE / 4K.
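
Spelled out a little further, that indexing might look like the following;
the macros, the fbnic_page_drain() helper, and the ring layout are
assumptions for illustration, and index wraparound/masking is ignored:

#include <linux/mm.h>
#include <linux/types.h>

#define FBNIC_SLICE_SHIFT	12			/* 4K slices */
#define FBNIC_LOSE_BITS		(PAGE_SHIFT - FBNIC_SLICE_SHIFT)
#define FBNIC_RXPAGE_IDX(idx)	((idx) >> FBNIC_LOSE_BITS)

struct fbnic_rx_buf {		/* one entry per page, not per slice */
	struct page	*page;
	dma_addr_t	dma;
};

struct fbnic_ring {
	unsigned int		head;
	struct fbnic_rx_buf	*rx_buf;
};

static void fbnic_page_drain(struct fbnic_ring *ring,
			     struct fbnic_rx_buf *rx_buf);	/* hypothetical */

/* Only unmap/free a page once the hardware head has moved past every
 * descriptor (4K slice) that shares it.
 */
static void fbnic_clean_bdq(struct fbnic_ring *ring, unsigned int hw_head)
{
	unsigned int head = ring->head;

	while (FBNIC_RXPAGE_IDX(head) != FBNIC_RXPAGE_IDX(hw_head)) {
		struct fbnic_rx_buf *rx_buf =
				&ring->rx_buf[FBNIC_RXPAGE_IDX(head)];

		fbnic_page_drain(ring, rx_buf);
		head += 1 << FBNIC_LOSE_BITS;	/* advance one whole page */
	}

	ring->head = head;
}

With 64K pages FBNIC_LOSE_BITS is 4, so rx_buf holds one entry per 16
descriptors; with 4K pages it falls back to one entry per descriptor.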
>
> Are we expecting drivers that want the best possible performance to
> duplicate the above trick?
>
> "grep -rn '_reuse_' drivers/net/ethernet/" seems to suggest that we
> already have a similar page-splitting trick in a lot of drivers; I
> would rather we not duplicate it yet again.
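
For context, the reuse/splitting pattern that grep turns up looks roughly
like this; a simplified, generic sketch of the half-page-split case used
by several in-tree Ethernet drivers, with made-up names not taken from any
one driver:

#include <linux/mm.h>
#include <linux/topology.h>
#include <linux/types.h>

struct rx_page_info {			/* generic names, illustrative only */
	struct page	*page;
	unsigned int	page_offset;
	u16		pagecnt_bias;	/* references handed up the stack */
};

static bool rx_page_can_reuse(struct rx_page_info *pi, unsigned int truesize)
{
	/* Don't recycle pages that came from a remote NUMA node. */
	if (page_to_nid(pi->page) != numa_mem_id())
		return false;

	/* Someone outside the driver still holds a reference to the page. */
	if (page_ref_count(pi->page) - pi->pagecnt_bias > 1)
		return false;

	/* Flip to the other half of the page and keep it on the ring. */
	pi->page_offset ^= truesize;
	return true;
}

The pagecnt_bias bookkeeping lets the driver keep recycling the page as
long as the stack has returned every reference it was handed.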

Then why not focus on those drivers? You may have missed the whole
point, but it isn't currently possible to test this device on a system
with 64K pages. There aren't any platforms we can drop the device into
that support them.
