Date:   Sat, 1 Jun 2019 02:27:28 +0300
From:   Ivan Khoronzhuk <ivan.khoronzhuk@...aro.org>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     grygorii.strashko@...com, hawk@...nel.org, davem@...emloft.net,
        ast@...nel.org, linux-kernel@...r.kernel.org,
        linux-omap@...r.kernel.org, xdp-newbies@...r.kernel.org,
        ilias.apalodimas@...aro.org, netdev@...r.kernel.org,
        daniel@...earbox.net, jakub.kicinski@...ronome.com,
        john.fastabend@...il.com
Subject: Re: [PATCH v2 net-next 7/7] net: ethernet: ti: cpsw: add XDP support

On Sat, Jun 01, 2019 at 12:37:36AM +0200, Jesper Dangaard Brouer wrote:
>On Fri, 31 May 2019 20:03:33 +0300
>Ivan Khoronzhuk <ivan.khoronzhuk@...aro.org> wrote:
>
>> Probably it's not a good example for others of how it should be used; it's
>> not a big problem to move it to separate pools... I don't even remember why
>> I decided to use a shared pool, there were some more reasons... I'd need to
>> search the history.
>
>Using a shared pool makes it a lot harder to solve the issue I'm
>currently working on.  That is handling/waiting for in-flight frames to
>complete, before removing the mem ID from the (r)hashtable lookup.  I
>have working code that basically removes page_pool_destroy() from the
>public API, and instead lets xdp_rxq_info_unreg() call it when the
>in-flight count reaches zero (and delays fully removing the mem ID).

Frankly, I don't see why it would block anything; a shared pool can be treated
the same as a non-shared one. But OK, a pool per channel looks more logical
anyway and can be reused by other SoCs. I will add a pool per channel, not a
problem; at least for now there are no blockers. A pool per channel will create
more page_pool_destroy() calls, one per pool, which I can drop once you decide
to remove it from the API.
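
For illustration, roughly what I mean by a pool per channel (a sketch only;
cpsw_create_rx_pool() and the field names here are hypothetical, not the
actual driver code):

#include <net/page_pool.h>

/* Sketch: one page_pool instance per RX channel instead of one shared pool.
 * The cpsw->dev field name is illustrative.
 */
static struct page_pool *cpsw_create_rx_pool(struct cpsw_common *cpsw,
					     int pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,	/* pool does the DMA mapping */
		.pool_size	= pool_size,
		.nid		= NUMA_NO_NODE,
		.dma_dir	= DMA_BIDIRECTIONAL,	/* XDP_TX writes back to the page */
		.dev		= cpsw->dev,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}

On teardown that means one page_pool_destroy() per channel, until that call
goes away from the public API as you describe.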

This API is called along with xdp_rxq_info_unreg(), and it seems like no
problem to just remove page_pool_destroy(), except for one case that worries
me... cpsw has one interesting feature: it shares the same h/w between 2
network devices (dual mac). Basically it's a 3-port switch, but it is used as
2 separate interfaces, so both of them share the same queues/channels/rings.
XDP requires the network device to be set in the rxq info, which is used in
the code as a pointer and is shared between xdp buffers, so it can't be
changed in flight. That's why each network interface has its own instances of
rxq, but the page pools are common to both network devices, so when I call
xdp_rxq_info_unreg() per net device it doesn't mean I want to delete the
page pool... But it seems I can avoid that by calling xdp_rxq_info_unreg()
for both interfaces only when deleting the page pools...
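
For the dual-mac case the registration would look roughly like this (again a
sketch; the helper name and parameters are mine, not driver code). Each ndev
registers its own xdp_rxq_info per channel, but both point at the same
page_pool, so the pool may only go away after both rxq infos have been
unregistered:

#include <net/xdp.h>
#include <net/page_pool.h>

/* Sketch: register one rxq info for this ndev/channel and attach the shared
 * per-channel page_pool as its memory model.
 */
static int cpsw_ndev_reg_rxq(struct net_device *ndev, int ch,
			     struct xdp_rxq_info *rxq,
			     struct page_pool *pool)
{
	int ret;

	ret = xdp_rxq_info_reg(rxq, ndev, ch);
	if (ret)
		return ret;

	ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool);
	if (ret)
		xdp_rxq_info_unreg(rxq);

	return ret;
}

On channel removal it would then be: unreg the rxq of ndev0, unreg the rxq of
ndev1, and only then destroy the pool (or let the last unreg destroy it once
your API change lands).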

>
>-- 
>Best regards,
>  Jesper Dangaard Brouer
>  MSc.CS, Principal Kernel Engineer at Red Hat
>  LinkedIn: http://www.linkedin.com/in/brouer

-- 
Regards,
Ivan Khoronzhuk
