Message-ID: <6b44fcd0-9210-4b2b-780a-09e24bba508a@redhat.com>
Date:   Mon, 24 Apr 2023 15:04:31 +0200
From:   Jesper Dangaard Brouer <jbrouer@...hat.com>
To:     Yunsheng Lin <linyunsheng@...wei.com>,
        Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Cc:     brouer@...hat.com, Lorenzo Bianconi <lorenzo@...nel.org>,
        netdev@...r.kernel.org, bpf@...r.kernel.org, davem@...emloft.net,
        edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
        hawk@...nel.org, john.fastabend@...il.com, ast@...nel.org,
        daniel@...earbox.net
Subject: Re: [PATCH v2 net-next 1/2] net: veth: add page_pool for page
 recycling


On 24/04/2023 13.58, Yunsheng Lin wrote:
> On 2023/4/24 17:17, Lorenzo Bianconi wrote:
>>> On 2023/4/23 22:20, Lorenzo Bianconi wrote:
>>>>> On 2023/4/23 2:54, Lorenzo Bianconi wrote:
>>>>>>   struct veth_priv {
>>>>>> @@ -727,17 +729,20 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
>>>>>>   			goto drop;
>>>>>>   
>>>>>>   		/* Allocate skb head */
>>>>>> -		page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
>>>>>> +		page = page_pool_dev_alloc_pages(rq->page_pool);
>>>>>>   		if (!page)
>>>>>>   			goto drop;
>>>>>>   
>>>>>>   		nskb = build_skb(page_address(page), PAGE_SIZE);
>>>>>
>>>>> If the page pool is used with PP_FLAG_PAGE_FRAG, maybe there is some additional
>>>>> improvement for the 1500B MTU case; it seems a 4K page is able to hold two skbs.
>>>>> And we can reduce the memory usage too, which is a significant saving if the
>>>>> page size is 64K.
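
For concreteness, an untested sketch of what that suggestion could look like
inside veth_convert_skb_to_xdp_buff(), using the existing page_pool_alloc_frag()
API (the pool would need PP_FLAG_PAGE_FRAG set at creation; using VETH_BUF_SIZE
as the requested frag size is my assumption). Whether 1500B plus headroom
actually fits in half a page is discussed just below:

    unsigned int offset;

    /* Carve a VETH_BUF_SIZE frag out of a (possibly shared) pool page */
    page = page_pool_alloc_frag(rq->page_pool, &offset,
                                VETH_BUF_SIZE,
                                GFP_ATOMIC | __GFP_NOWARN);
    if (!page)
        goto drop;

    nskb = build_skb(page_address(page) + offset, VETH_BUF_SIZE);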
>>>>
>>>> please correct me if I am wrong, but I think the 1500B MTU case does not fit in
>>>> the half-page buffer size, since we need to take VETH_XDP_HEADROOM into account.
>>>> In particular:
>>>>
>>>> - VETH_BUF_SIZE = 2048
>>>> - VETH_XDP_HEADROOM = 256 + 2 = 258
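
(Spelling that arithmetic out: build_skb() also reserves
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) at the tail, 320B on a typical
64-bit build, so the usable payload would be roughly 2048 - 258 - 320 = 1470B,
short of the 1514B needed for a 1500B MTU frame plus the Ethernet header.)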
>>>
>>> On some arches, NET_IP_ALIGN is zero.
>>>
>>> I suppose XDP_PACKET_HEADROOM is for the xdp_frame and data_meta; it seems
>>> xdp_frame is only 40 bytes on a 64-bit arch and the max size of metalen is 32,
>>> as xdp_metalen_invalid() suggests. Is there any other reason why we need
>>> 256 bytes here?
>>
>> XDP_PACKET_HEADROOM must be greater than (40 + 32)B because you may want to push
>> new data at the beginning of the xdp_buff/xdp_frame by running the
>> bpf_xdp_adjust_head() helper.
>> I think 256B was selected for XDP_PACKET_HEADROOM since it is 4 cachelines
>> (but I may be wrong).
>> There was a discussion in the past about reducing XDP_PACKET_HEADROOM to 192B,
>> but that has not been merged yet and it is not related to this series. We can
>> address your comments in a follow-up patch when the XDP_PACKET_HEADROOM series
>> is merged.
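
To make the headroom consumption concrete, a minimal (untested) XDP program
sketch; my_encap_hdr is a made-up encapsulation header, not anything from this
series:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct my_encap_hdr {           /* hypothetical 16B encap header */
            __u32 id;
            __u32 pad[3];
    };

    SEC("xdp")
    int xdp_push_encap(struct xdp_md *ctx)
    {
            /* A negative delta grows the packet at the front and
             * fails once the headroom budget is exhausted.
             */
            if (bpf_xdp_adjust_head(ctx, -(int)sizeof(struct my_encap_hdr)))
                    return XDP_DROP;

            /* ... bounds-check and fill in the new header ... */
            return XDP_PASS;
    }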
> 
> It is worth mentioning that the performance gain in this patch comes at the cost
> of more memory usage: at most VETH_RING_SIZE (256) + PP_ALLOC_CACHE_SIZE (128)
> pages are used.
> 

The general scheme with XDP is trading memory for speedups.
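
To put a number on that upper bound: (256 + 128) pages is 1.5MB with 4K pages,
and 24MB with 64K pages, per page_pool instance (one per receive queue, if I
read the patch right).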

> IMHO, it seems better to limit the memory usage as much as possible, or to
> provide a way for the user to enable/disable the page pool.
> 

Well, that sort of exists already, right? If you disable XDP, or actually
NAPI (looking at the patches), it will also disable the page pool.
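
Roughly, from my reading, the lifetime ends up tied together like this (an
untested sketch with approximate names, not the literal patch code):

    /* on NAPI/XDP enable */
    struct page_pool_params pp_params = {
            .order          = 0,
            .pool_size      = VETH_RING_SIZE,
            .nid            = NUMA_NO_NODE,
            .dev            = &dev->dev,
    };

    rq->page_pool = page_pool_create(&pp_params);
    if (IS_ERR(rq->page_pool))
            return PTR_ERR(rq->page_pool);

    /* on NAPI/XDP disable */
    page_pool_destroy(rq->page_pool);
    rq->page_pool = NULL;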

I want to highlight that Lorenzo is "just" replacing the allocation of a full
page via alloc_page() with a faster API that happens to cache some of
these pages.
In that sense, I think this patch makes sense ... seen in isolation.

My concern beyond this patch is that netif_receive_generic_xdp() and
veth_convert_skb_to_xdp_buff() are both dealing with SKB-to-XDP
conversion, but they are diverging in how they do it.
(Is the challenge that veth will also see "TX" SKBs?)

Kind of changing direction, but I'm wondering why the beep we are
allocating+copying the entire contents of the SKB.
There must be a better way (especially after XDP got frags support).

--Jesper
