Message-ID: <3682d8ea-eecd-44b1-a446-473223e689ba@gmail.com>
Date: Wed, 6 Nov 2024 00:50:04 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: Mina Almasry <almasrymina@...gle.com>
Cc: David Wei <dw@...idwei.uk>, io-uring@...r.kernel.org,
netdev@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jesper Dangaard Brouer <hawk@...nel.org>, David Ahern <dsahern@...nel.org>,
Stanislav Fomichev <stfomichev@...il.com>, Joe Damato <jdamato@...tly.com>,
Pedro Tammela <pctammela@...atatu.com>
Subject: Re: [PATCH v7 06/15] net: page pool: add helper creating area from
pages
On 11/5/24 23:34, Mina Almasry wrote:
> On Fri, Nov 1, 2024 at 11:16 AM Pavel Begunkov <asml.silence@...il.com> wrote:
...
>> Even though Jakub didn't comment on this patch, he definitely
>> wasn't fond of giving all those headers to non net/ users. I guess
>> I can't please everyone. One middle option is to make the page
>> pool helper more granular, i.e. taking care of one netmem at
>> a time, and moving the loop to io_uring, but I don't think it
>> changes anything.
>>
>
> My issue is that these:
>
> +int page_pool_mp_init_paged_area(struct page_pool *pool,
> + struct net_iov_area *area,
> + struct page **pages);
> +void page_pool_mp_release_area(struct page_pool *pool,
>
> Are masquerading as generic functions to be used by many mp but
> they're really io_uring specific. dmabuf and the hugepage provider
> would not use them AFAICT. Would have liked not to have code specific
> to one mp in page_pool.c, and I was asked to move the dmabuf specific
> functions to another file too IIRC.
>
> These helpers depend on:
>
> page_pool_set_pp_info: in netmem_priv.h
> net_iov_to_netmem(niov): in netmem.h
> page_pool_dma_map_page: can be put in page_pool/helpers.h?
> page_pool_release_page_dma(pool, netmem): can be put in page_pool/helpers.h?
>
> I would prefer moving page_pool_set_pp_info (and the stuff it calls
> into) to netmem.h and removing io_uring mp specific code from
> page_pool.c.
I already described this above, though in less detail. FWIW,
page_pool_dma_map_page() + page_pool_set_pp_info() have to be combined
into a single function, as separately they don't make a good public
API. I can change it this way, but ultimately there is not much
difference: the page pool still needs to export functions that io_uring
can use. Does it make a better API? I doubt it, and it can be changed
later anyway. Let's not block the patchset over minor bikeshedding.
...
>>> Instead it uses net_iov-backed-netmem and there is an out of band page
>>> to be managed.
>>>
>>> I think it would make sense to use paged-backed-netmem for your use
>>> case, or at least I don't see why it wouldn't work. Memory providers
>>
>> It's a user page, we can't make assumptions about it, we can't
>> reuse space in struct page like for pp refcounting (unlike when
>> it's allocated by the kernel), we can't use the normal page
>> refcounting.
>>
>
> You actually can't reuse space in struct net_iov either for your own
> refcounting, because niov->pp_ref_count is a mirror to
> page->pp_ref_count and strictly follows the patterns of that. But
> that's the issue to be discussed on the other patch...
Right, which is why it's used for the usual buffer refcounting that
the page pool / net stack recognises. It's also not a problem to
extend it for performance reasons; those are internals, they can be
changed, and there are ways to change them.
>> If that's the direction people prefer, we can roll back to v1 from
>> a couple years ago, fill skbs with user pages, attach ubuf_info to
>> every skb, and whack-a-mole all places where the page could be
>> put down or such, pretty similarly to what net_iov does. Honestly, I
>> thought that reusing common infra so that the net stack doesn't
>> need a different path per interface was a good idea.
>>
>
> The common infra does support page-backed-netmem actually.
Please read again why user pages can't be passed directly: if you
take a look at struct page and where pp_ref_count and the other bits
live, you'll find a huge union.
>>> were designed to handle the hugepage usecase where the memory
>>> allocated by the provider is pages. Is there a reason that doesn't
>>> work for you as well?
>>>
>>> If you really need to use net_iov-backed-netmem, can we put this
>>> weirdness in the provider? I don't know that we want a generic-looking
>>> dma_map function which is a bit confusing to take a netmem and a page.
>> ...
>>>> +
>>>> +static void page_pool_release_page_dma(struct page_pool *pool,
>>>> + netmem_ref netmem)
>>>> +{
>>>> + __page_pool_release_page_dma(pool, netmem);
>>>> +}
>>>> +
>>>
>>> Is this wrapper necessary? Do you wanna rename the original function
>>> to remove the __ instead of a adding a wrapper?
>>
>> I only added it here to cast away __always_inline since it's used in
>> a slow / setup path. It shouldn't change the binary, but I'm not a huge
>> fan of leaving the hint for the code where it's not needed.
>>
>
> Right, it probably makes sense to make the modifications you want on
> the original function rather than create a no-op wrapper to remove the
> __always_inline.
Removing __always_inline from a function deemed hot enough to need
the attribute is not a good idea, certainly not in an unrelated
feature patchset. FWIW, the wrapper is not a new practice either. If
the maintainers insist on removing it, I'll do so.
--
Pavel Begunkov