Message-ID: <20d9b588-f813-4bad-a4da-e058508322df@intel.com>
Date: Wed, 28 May 2025 16:54:39 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Tony Nguyen <anthony.l.nguyen@...el.com>, <davem@...emloft.net>,
<pabeni@...hat.com>, <edumazet@...gle.com>, <andrew+netdev@...n.ch>,
<netdev@...r.kernel.org>, <maciej.fijalkowski@...el.com>,
<magnus.karlsson@...el.com>, <michal.kubiak@...el.com>,
<przemyslaw.kitszel@...el.com>, <ast@...nel.org>, <daniel@...earbox.net>,
<hawk@...nel.org>, <john.fastabend@...il.com>, <horms@...nel.org>,
<bpf@...r.kernel.org>, Mina Almasry <almasrymina@...gle.com>
Subject: Re: [PATCH net-next 01/16] libeth: convert to netmem
From: Jakub Kicinski <kuba@...nel.org>
Date: Tue, 27 May 2025 18:57:49 -0700
> On Tue, 20 May 2025 13:59:02 -0700 Tony Nguyen wrote:
>> @@ -3277,16 +3277,20 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
>> struct libeth_fqe *buf, u32 data_len)
>> {
>> u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;
>> + struct page *hdr_page, *buf_page;
>> const void *src;
>> void *dst;
>>
>> - if (!libeth_rx_sync_for_cpu(buf, copy))
>> + if (unlikely(netmem_is_net_iov(buf->netmem)) ||
>> + !libeth_rx_sync_for_cpu(buf, copy))
>> return 0;
>
> So what happens to the packet that landed in a netmem buffer in case
> when HDS failed? I don't see the handling.
This should not happen in general: to use TCP devmem, you have to
isolate that traffic to dedicated queues, and idpf parses all TCP
frames correctly, so header split does not fail for them.
If this condition is ever true, napi_build_skb() will be called with
the devmem buffer passed as the head. Should I drop such packets
instead, so that this can never happen?
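If dropping is preferred, the check could signal it explicitly, e.g.
(rough sketch only: the int return and the caller-side discard are
made up, nothing in the current patch implements them):

	/* Return < 0 so the caller drops the frame instead of
	 * building an skb with a devmem head (hypothetical
	 * convention, would need matching caller changes).
	 */
	static int idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
				     struct libeth_fqe *buf, u32 data_len)
	{
		u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;

		if (unlikely(netmem_is_net_iov(buf->netmem)))
			/* can't memcpy headers out of unreadable devmem */
			return -EFAULT;

		if (!libeth_rx_sync_for_cpu(buf, copy))
			return 0;

		/* ...unchanged copy path from the patch above... */
	}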
>
>> - dst = page_address(hdr->page) + hdr->offset + hdr->page->pp->p.offset;
>> - src = page_address(buf->page) + buf->offset + buf->page->pp->p.offset;
>> - memcpy(dst, src, LARGEST_ALIGN(copy));
>> + hdr_page = __netmem_to_page(hdr->netmem);
>> + buf_page = __netmem_to_page(buf->netmem);
>> + dst = page_address(hdr_page) + hdr->offset + hdr_page->pp->p.offset;
>> + src = page_address(buf_page) + buf->offset + buf_page->pp->p.offset;
>>
>> + memcpy(dst, src, LARGEST_ALIGN(copy));
>> buf->offset += copy;
>>
>> return copy;
>> @@ -3302,11 +3306,12 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
>> */
>> struct sk_buff *idpf_rx_build_skb(const struct libeth_fqe *buf, u32 size)
>> {
>> - u32 hr = buf->page->pp->p.offset;
>> + struct page *buf_page = __netmem_to_page(buf->netmem);
>> + u32 hr = buf_page->pp->p.offset;
>> struct sk_buff *skb;
>> void *va;
>>
>> - va = page_address(buf->page) + buf->offset;
>> + va = page_address(buf_page) + buf->offset;
>> prefetch(va + hr);
>
> If you don't want to have to validate the low bit during netmem -> page
> conversions - you need to clearly maintain the separation between
> the two in the driver. These __netmem_to_page() calls are too much of
> a liability.
This is only for the header buffers, which are always in host memory.
The separation is maintained in the idpf_buf_queue structure: there
are 2 PPs, hdr_pp and pp, and the first one is always backed by host
memory.
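To illustrate the separation, the relevant part of the structure looks
roughly like this (trimmed to the two PPs; the comments are mine):

	struct idpf_buf_queue {
		...
		struct page_pool	*hdr_pp; /* header buffers: always
						  * host memory, never
						  * net_iov */
		...
		struct page_pool	*pp;	 /* payload buffers: may be
						  * devmem-backed */
		...
	};

hdr->netmem is always allocated from hdr_pp, so __netmem_to_page() on
it can never see the net_iov bit set.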
We never allow building an skb with a devmem head, right? So why check
for it here...
Thanks,
Olek