Message-ID: <8066DA9D-7913-4BB9-9B44-0E2D1D07B8E1@gmail.com>
Date:   Mon, 11 Nov 2019 22:00:26 -0800
From:   "Jonathan Lemon" <jonathan.lemon@...il.com>
To:     "Jesper Dangaard Brouer" <brouer@...hat.com>
Cc:     netdev@...r.kernel.org, ilias.apalodimas@...aro.org,
        kernel-team@...com
Subject: Re: [RFC PATCH 1/1] page_pool: do not release pool until inflight == 0.


On 11 Nov 2019, at 3:47, Jesper Dangaard Brouer wrote:

> On Sun, 10 Nov 2019 22:20:38 -0800
> Jonathan Lemon <jonathan.lemon@...il.com> wrote:
>
>> The page pool keeps track of the number of pages in flight, and
>> it isn't safe to remove the pool until all pages are returned.
>>
>> Disallow removing the pool until all pages are back, so the pool
>> is always available for page producers.
>>
>> Make the page pool responsible for its own delayed destruction
>
> I like this part, making page_pool responsible for its own delayed
> destruction.  I originally also wanted to do this, but got stuck on
> mem.id getting removed prematurely from the rhashtable.  You actually
> solved this by introducing a disconnect callback from page_pool into
> mem_allocator_disconnect().  I like it.
>
>> instead of relying on XDP, so the page pool can be used without
>> xdp.
>
> This is a misconception: the xdp_rxq_info_reg_mem_model API does not
> imply the driver is using XDP.  Yes, I know the naming is misleading,
> since it contains "xdp", as does the xdp_mem_info name.  Ilias and I
> have discussed renaming these several times.
>
> The longer-term plan is/was to use this (xdp_)mem_info as a generic
> return path for SKBs, creating a more flexible memory model for
> networking.  This patch is fine and in itself does not disrupt/change
> that, but your offlist changes do.  As your offlist changes do imply
> a performance gain, I will likely accept this (and then find another
> plan for a more flexible memory model for networking).

Are you referring to the patch which encodes the page pool pointer
in the page, and then sends it directly to the pool on skb free
instead of performing a mem id lookup and indirection through the
memory model?

It could be done either way.  I'm not seeing any advantages of
the additional indirection, as the pool lifetime is guaranteed.
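
Very roughly, the two return paths compare like this (pseudo-C just to
illustrate; the helper marked below is made up, and the real entry
points may differ):

/* Today: the free path indirects through the mem id registry. */
static void recycle_via_mem_id(struct page *page, struct xdp_mem_info *mem)
{
	struct xdp_mem_allocator *xa;

	rcu_read_lock();
	xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
	if (xa)
		page_pool_put_page(xa->page_pool, page, false);
	rcu_read_unlock();
}

/* Offlist proposal: the pool pointer is stashed in struct page at
 * allocation time (exact encoding TBD), so the free path goes straight
 * to the pool with no lookup.
 */
static void recycle_direct(struct page *page)
{
	struct page_pool *pool = page_pool_ptr(page);	/* made-up helper */

	page_pool_put_page(pool, page, false);
}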

All that is needed is:
1) A way to identify a page as coming from the page pool.

   The current plan of setting a bit on the skb to indicate that its
   pages should be returned via the page pool is workable, but some of
   the returned pages will have come from the system page allocator,
   and those need to be filtered out.

   There must be some type of signature on the page that permits
   filtering, so that non-matching pages can be returned to the system
   page allocator.


2) Identifying exactly which page pool the page belongs to.

   This could be done by just placing the pool pointer on the page, or
   by putting the mem info there and indirecting through the lookup
   (rough sketch below).
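
Something along the following lines would cover both points.  This is
only a sketch: pp_info, PP_PAGE_SIG, the skb recycle bit and the helper
names are placeholders, not existing fields or APIs.

/* Assumes the pool pointer is stored in an otherwise-unused struct page
 * field at allocation time, with a low tag bit doubling as the
 * signature (the pointer is aligned, so the low bits are free).
 */
#define PP_PAGE_SIG	0x1UL

static struct page_pool *page_pool_from_page(struct page *page)
{
	unsigned long pp_info = page->pp_info;	/* placeholder field */

	/* 1) filter out pages from the system page allocator */
	if ((pp_info & 0x3UL) != PP_PAGE_SIG)
		return NULL;

	/* 2) the page itself identifies its pool */
	return (struct page_pool *)(pp_info & ~0x3UL);
}

static void skb_return_page(struct sk_buff *skb, struct page *page)
{
	struct page_pool *pool = NULL;

	if (skb->pp_recycle)			/* placeholder skb bit */
		pool = page_pool_from_page(page);

	if (pool)
		page_pool_put_page(pool, page, false);
	else
		put_page(page);	/* not ours, back to the page allocator */
}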

-- 
Jonathan
