Date:   Fri, 14 Jun 2019 13:25:28 +0000
From:   Maxim Mikityanskiy <maximmi@...lanox.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
CC:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Björn Töpel <bjorn.topel@...el.com>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Saeed Mahameed <saeedm@...lanox.com>,
        Jonathan Lemon <bsd@...com>,
        Tariq Toukan <tariqt@...lanox.com>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Maciej Fijalkowski <maciejromanfijalkowski@...il.com>
Subject: Re: [PATCH bpf-next v4 05/17] xsk: Change the default frame size to
 4096 and allow controlling it

On 2019-06-13 20:29, Jakub Kicinski wrote:
> On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
>> On 2019-06-12 23:10, Jakub Kicinski wrote:
>>> On Wed, 12 Jun 2019 15:56:43 +0000, Maxim Mikityanskiy wrote:
>>>> The typical XDP memory scheme is one packet per page. Change the AF_XDP
>>>> frame size in libbpf to 4096, which is the page size on x86, to allow
>>>> libbpf to be used with drivers that use the packet-per-page scheme.
>>>
>>> This is slightly surprising.  Why does the driver care about the bufsz?
>>
>> The classic XDP implementation supports only the packet-per-page scheme.
>> mlx5e implements this scheme because it fits perfectly with the
>> xdp_return and page pool APIs. AF_XDP relies on XDP, and even though
>> AF_XDP doesn't really allocate or release pages itself, it works on top
>> of XDP, and the XDP implementation in mlx5e does allocate and release
>> pages (in the general case) and works with the packet-per-page scheme.
> 
> Yes, okay, I get that.  But I still don't know what exact use you have
> for AF_XDP buffers being 4k...  Could you point us to the place in the
> code that relies on all buffers being 4k in any XDP scenario?

1. An XDP program is set on all queues, so to support non-4k AF_XDP 
frames, we would also need to support multiple-packet-per-page XDP for 
regular queues.

2. Page allocation in mlx5e perfectly fits page-sized XDP frames. Some 
examples in the code are:

2.1. mlx5e_free_rx_mpwqe calls the generic mlx5e_page_release to release 
the pages of an MPWQE (multi-packet work queue element); in the XSK case 
this is implemented with xsk_umem_fq_reuse. We avoid extra overhead by 
using the fact that packet == page (see the sketch after these examples).

2.2. mlx5e_free_xdpsq_desc performs cleanup after XDP transmits. In the 
case of XDP_TX, we can free/recycle the pages without refcount overhead, 
again by using the fact that packet == page.
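
To make the packet == page point concrete, here is a simplified, 
hypothetical sketch of the two release paths above. This is not the 
actual mlx5e code: my_rq, my_dma_info and the helper names are made up 
for illustration, and the real driver recycles non-XSK pages through its 
page cache / page pool rather than a plain put_page(). Only 
xsk_umem_fq_reuse() (include/net/xdp_sock.h) and put_page() are real 
kernel APIs here.

/* Illustrative stand-in for the mlx5e RX/XDP-TX release paths, relying
 * on the packet == page invariant: releasing a frame is releasing a page.
 */
#include <linux/mm.h>		/* put_page() */
#include <linux/types.h>
#include <net/xdp_sock.h>	/* xsk_umem_fq_reuse() */

/* Hypothetical per-frame state: one page backs exactly one packet. */
struct my_dma_info {
	struct page *page;	/* the page backing this frame */
	u64 xsk_addr;		/* UMEM address when zero-copy XSK is used */
};

/* Hypothetical RQ: zero-copy XSK RQ (umem set) or a regular XDP RQ. */
struct my_rq {
	struct xdp_umem *umem;
};

/* 2.1: generic release used by the MPWQE free path. Because each frame
 * owns a whole page, an XSK frame can be pushed straight back onto the
 * fill-queue reuse ring; a regular frame simply gives its page back.
 */
static void my_page_release(struct my_rq *rq, struct my_dma_info *di)
{
	if (rq->umem)
		xsk_umem_fq_reuse(rq->umem, di->xsk_addr);
	else
		put_page(di->page);	/* real driver: page cache / page pool */
}

/* 2.2: XDP_TX completion. The completed descriptor owns the whole page,
 * so it can be released/recycled directly, with no refcount juggling.
 */
static void my_xdpsq_complete(struct my_rq *rq, struct my_dma_info *di)
{
	my_page_release(rq, di);
}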

>>> You're not supposed to do page operations on UMEM pages, anyway.
>>> And the RX size filter should be configured according to MTU regardless
>>> of XDP state.
>>
>> Yes, of course, MTU is taken into account.
>>
>>> Can you explain?
>>>    
>>>> Add a command line option -f to xdpsock to allow specifying a custom
>>>> frame size.
>>>>
>>>> Signed-off-by: Maxim Mikityanskiy <maximmi@...lanox.com>
>>>> Reviewed-by: Tariq Toukan <tariqt@...lanox.com>
>>>> Acked-by: Saeed Mahameed <saeedm@...lanox.com>
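
For completeness, here is a minimal user-space sketch of how an 
application requests a non-default frame size at UMEM creation time, 
which is what the new -f option in xdpsock ultimately feeds in. It 
assumes libbpf's AF_XDP helpers from tools/lib/bpf/xsk.h 
(struct xsk_umem_config, xsk_umem__create); create_umem, NUM_FRAMES and 
FRAME_SIZE are made-up names, and error handling is trimmed.

/* Minimal sketch: create a UMEM with an explicit frame_size instead of
 * relying on XSK_UMEM__DEFAULT_FRAME_SIZE.
 */
#include <stdlib.h>
#include <unistd.h>
#include <bpf/xsk.h>

#define NUM_FRAMES	4096
#define FRAME_SIZE	4096	/* e.g. what "xdpsock -f 4096" would request */

static int create_umem(struct xsk_umem **umem,
		       struct xsk_ring_prod *fq, struct xsk_ring_cons *cq)
{
	struct xsk_umem_config cfg = {
		.fill_size	= XSK_RING_PROD__DEFAULT_NUM_DESCS,
		.comp_size	= XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.frame_size	= FRAME_SIZE,
		.frame_headroom	= XSK_UMEM__DEFAULT_FRAME_HEADROOM,
	};
	size_t size = (size_t)NUM_FRAMES * FRAME_SIZE;
	void *bufs;

	if (posix_memalign(&bufs, getpagesize(), size))
		return -1;

	/* Returns 0 on success, negative errno on failure. */
	return xsk_umem__create(umem, bufs, size, fq, cq, &cfg);
}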
