Message-ID: <CAHS8izOySGEcXmMg3Gbb5DS-D9-B165gNpwf5a+ObJ7WigLmHg@mail.gmail.com>
Date: Thu, 29 Jun 2023 19:27:46 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Jesper Dangaard Brouer <jbrouer@...hat.com>, brouer@...hat.com,
Alexander Duyck <alexander.duyck@...il.com>, Yunsheng Lin <linyunsheng@...wei.com>, davem@...emloft.net,
pabeni@...hat.com, netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Lorenzo Bianconi <lorenzo@...nel.org>, Yisen Zhuang <yisen.zhuang@...wei.com>,
Salil Mehta <salil.mehta@...wei.com>, Eric Dumazet <edumazet@...gle.com>,
Sunil Goutham <sgoutham@...vell.com>, Geetha sowjanya <gakula@...vell.com>,
Subbaraya Sundeep <sbhatta@...vell.com>, hariprasad <hkelam@...vell.com>,
Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>, Felix Fietkau <nbd@....name>,
Ryder Lee <ryder.lee@...iatek.com>, Shayne Chen <shayne.chen@...iatek.com>,
Sean Wang <sean.wang@...iatek.com>, Kalle Valo <kvalo@...nel.org>,
Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>,
Jesper Dangaard Brouer <hawk@...nel.org>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
linux-rdma@...r.kernel.org, linux-wireless@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-mediatek@...ts.infradead.org,
Jonathan Lemon <jonathan.lemon@...il.com>
Subject: Re: Memory providers multiplexing (Was: [PATCH net-next v4 4/5]
page_pool: remove PP_FLAG_PAGE_FRAG flag)
On Mon, Jun 19, 2023 at 11:07 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Fri, 16 Jun 2023 22:42:35 +0200 Jesper Dangaard Brouer wrote:
> > > The former is better for huge pages, the latter is better for IO mem
> > > (peer-to-peer DMA). I wonder if you have a different use case which
> > > requires a different model :(
> >
> > I want the network stack SKBs (and XDP) to support different memory
> > types for the "head" frame and the "data-frags". Eric has described this
> > idea before: hardware will do header-split, so the TCP data part lands
> > in another page/frag, making it faster for TCP streams, but this can be
> > used for much more.
> >
> > My proposed use-cases involve more than TCP. We can easily imagine
> > NVMe protocol header-split, where the data-frag could be a mem_type that
> > actually belongs to the hard disk (maybe the CPU cannot even read it). The
> > same scenario goes for GPU memory, which is the AI use-case. IIRC
> > Jonathan has previously sent patches for the GPU use-case.
> >
> > I really hope we can work in this direction together,
>
> Perfect, that's also the use case I had in mind. The huge page thing
> was just a quick thing to implement as a PoC (although useful in its
> own right, one day I'll find the time to finish it, sigh).
>
> That said I couldn't convince myself that for a peer-to-peer setup we
> have enough space in struct page to store all the information we need.
> Or that we'd get a struct page at all, and not just a region of memory
> with no struct page * allocated :S
>
> That'd require serious surgery on the page pool's fast paths to work
> around.
>
> I haven't dug into the details, tho. If you think we can use page pool
> as a frontend for iouring and/or p2p memory that'd be awesome!
>
Hello Jakub, I'm actually looking into device memory (peer-to-peer)
networking, and I plan to pursue using the page pool as a front end.

Quick description of what I have so far: the current implementation
uses device memory backed by struct pages; I am putting all those
pages in a gen_pool, and we have written an allocator that hands out
pages from the gen_pool. In the driver, we use this allocator instead
of alloc_page() (the driver in question is gve, which currently
doesn't use the page pool). When the driver is done with the p2p
page, it simply decrements the refcount on it and the page is freed
back to the gen_pool.
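
To make that concrete, a minimal sketch of that kind of gen_pool-backed
allocator is below. All names here (devmem_pool, devmem_pool_init(),
devmem_alloc_page(), ...) are placeholders for illustration, not the
actual gve code, and it assumes the device memory has already been
mapped and is backed by struct pages:

#include <linux/genalloc.h>
#include <linux/mm.h>

/* One pool per bound region of device memory (placeholder name). */
static struct gen_pool *devmem_pool;

/* Register a chunk of device memory that is already backed by struct pages. */
static int devmem_pool_init(unsigned long base_pfn, unsigned long nr_pages,
                            int nid)
{
        devmem_pool = gen_pool_create(PAGE_SHIFT, nid);
        if (!devmem_pool)
                return -ENOMEM;

        /* Track PFN-derived addresses so each allocation is one page. */
        return gen_pool_add(devmem_pool, base_pfn << PAGE_SHIFT,
                            nr_pages << PAGE_SHIFT, nid);
}

/* Drop-in replacement for alloc_page() in the driver's RX refill path. */
static struct page *devmem_alloc_page(void)
{
        unsigned long addr = gen_pool_alloc(devmem_pool, PAGE_SIZE);

        if (!addr)
                return NULL;
        return pfn_to_page(addr >> PAGE_SHIFT);
}

/* Called once the last reference on the page has been dropped. */
static void devmem_free_page(struct page *page)
{
        gen_pool_free(devmem_pool, page_to_pfn(page) << PAGE_SHIFT, PAGE_SIZE);
}

The real code obviously has to handle DMA mapping and the refcount
tracking around devmem_free_page(); the sketch only shows the alloc/free
shape.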
Test results are good; in our best runs we're able to achieve ~96%
line rate with incoming packets going straight to device memory and
without bouncing the memory to a host buffer (albeit these results are
on our slightly older, production LTS kernel; I need to work on getting
results from linus/master).

I've discussed your page pool frontend idea with our gve owners and
the idea is attractive. In particular it would be good not to insert
much custom code into the driver to support device memory pages or
other page types. I plan on trying to change my approach to match the
page pool provider you have in progress here:
https://github.com/kuba-moo/linux/tree/pp-providers

The main challenge right now seems to be that my device memory pages
are ZONE_DEVICE pages, which can't be inserted into the page pool
as-is due to the union in struct page between the page pool fields and
the ZONE_DEVICE fields. I have some ideas for working around that
which I'm looking into.
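
For reference, the overlap is roughly this (trimmed from
include/linux/mm_types.h; layout approximate, comments mine):

struct page {
        unsigned long flags;
        union {
                ...
                struct {        /* page_pool used by netstack */
                        unsigned long pp_magic;
                        struct page_pool *pp;
                        unsigned long _pp_mapping_pad;
                        unsigned long dma_addr;
                        union {
                                unsigned long dma_addr_upper;
                                atomic_long_t pp_frag_count;
                        };
                };
                struct {        /* ZONE_DEVICE pages */
                        struct dev_pagemap *pgmap;
                        void *zone_device_data;
                        ...
                };
                ...
        };
        ...
};

So, roughly, pp_magic and the pgmap pointer share the same word, which
is why a ZONE_DEVICE page can't carry the page pool state as-is.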
It sounds like you don't have the time at the moment to work on the
page pool provider idea; I plan to try and get my code working with
that model and propose it if it's successful. Let me know if you have
concerns here.

> The workaround solution I had in mind would be to create a narrower API
> for just data pages. Since we'd need to sprinkle ifs anyway, pull them
> up close to the call site. Allowing us to swap the page pool for a
> completely different implementation, like the one Jonathan coded up for
> iouring. Basically
>
> $name_alloc_page(queue)
> {
>         if (queue->pp)
>                 return page_pool_dev_alloc_pages(queue->pp);
>         else if (queue->iouring..)
>                 ...
> }
>
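For what it's worth, a slightly fuller, self-contained version of that
per-queue dispatch could look like the sketch below. Everything here
except page_pool_dev_alloc_pages() and alloc_page() is a placeholder
name for illustration (struct rx_queue, io_zc_region, io_zc_alloc_page(),
netmem_alloc_page()), not an existing API:

#include <linux/mm.h>
#include <net/page_pool.h>

struct io_zc_region;                            /* hypothetical iouring-provided region */
struct page *io_zc_alloc_page(struct io_zc_region *r);  /* hypothetical */

struct rx_queue {
        struct page_pool *pp;                   /* set when a page pool backs this queue */
        struct io_zc_region *iouring;           /* set when iouring provides the memory */
};

static struct page *netmem_alloc_page(struct rx_queue *queue)
{
        if (queue->pp)
                return page_pool_dev_alloc_pages(queue->pp);
        if (queue->iouring)
                return io_zc_alloc_page(queue->iouring);
        return alloc_page(GFP_ATOMIC);          /* plain kernel pages otherwise */
}
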
--
Thanks,
Mina