Message-ID: <20190618133012.GA2055@apalos>
Date: Tue, 18 Jun 2019 16:30:12 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Ivan Khoronzhuk <ivan.khoronzhuk@...aro.org>
Cc: Tariq Toukan <tariqt@...lanox.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Toke Høiland-Jørgensen <toke@...e.dk>,
"toshiaki.makita1@...il.com" <toshiaki.makita1@...il.com>,
"grygorii.strashko@...com" <grygorii.strashko@...com>,
"mcroce@...hat.com" <mcroce@...hat.com>
Subject: Re: [PATCH net-next v1 08/11] xdp: tracking page_pool resources and
safe removal
Hi Ivan, Tariq,
> >>>+
[...]
> >>
> >>What would you recommend for the following situation:
> >>
> >>The same receive queue is shared between 2 network devices. The receive
> >>ring is filled with pages from a page_pool, but you don't know the actual
> >>port (ndev) filling this ring, because the device is recognized only
> >>after a packet is received.
> >>
> >>The API is such that an xdp rxq is bound to a network device and each
> >>frame holds a reference on it, so the rxq ndev must be static. That means
> >>each netdev has its own rxq instance even when it doesn't need one. Thus,
> >>after your changes, a page must be returned to the pool it was taken
> >>from, or released from the old pool and recycled in the new one somehow.
> >>
> >>And that is an inconvenience at least. It's hard to move pages between
> >>pools w/o a performance penalty. There is no way to use a common pool
> >>either, as unreg_rxq now drops the pool and 2 rxqs can't reference the
> >>same pool.
> >>
> >
> >Within a single netdev, separate page_pool instances are created anyway
> >for different RX rings, working under different NAPIs.
>
> The circumstances are such that the same RX ring is shared between 2
> netdevs... and the netdev can be known only after the descriptor/packet
> is received. Thus, while filling the RX ring, there is no actual device,
> but when a packet is received it has to be recycled to the appropriate
> net device pool. Before this change it made no difference which pool the
> page was allocated from to fill the RX ring, as there was no owner.
> After this change there is an owner - the netdev page pool.
>
> For cpsw the DMA unmap is common for both netdevs and it makes no
> difference who frees the page, but it does matter which pool it is
> freed to.
Since 2 netdevs are sharing one queue you'll need locking, right?
(Assuming that the rx-irq per device can end up on a different core.)
We discussed that ideally page pools should be allocated per hardware queue.
If you indeed need locking (and pay the performance penalty anyway), I wonder
if there's anything preventing you from keeping the same principle, i.e.
allocate a pool per queue and handle the recycling to the proper ndev
internally. That way only the first device will be responsible for
allocating/recycling/maintaining the pool state.
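
Something along these lines is what I have in mind - a minimal sketch only,
not cpsw code; the struct and function names are made up and only the
page_pool calls are the existing API:

#include <net/page_pool.h>

struct shared_rxq {
	struct page_pool *pool;	/* one pool per HW queue, owned by the first ndev */
};

/* Called once by the first netdev that brings the queue up. */
static int shared_rxq_init(struct shared_rxq *q, struct device *dma_dev,
			   unsigned int ring_size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,
		.order		= 0,
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dma_dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	q->pool = page_pool_create(&pp_params);
	return IS_ERR(q->pool) ? PTR_ERR(q->pool) : 0;
}

/* Ring refill: pages always come from the shared per-queue pool. */
static struct page *shared_rxq_refill(struct shared_rxq *q)
{
	return page_pool_dev_alloc_pages(q->pool);
}

/*
 * RX completion: the final ndev is only known here, but that does not
 * matter for recycling -- the page goes back to the one pool it came
 * from, regardless of which netdev the packet is delivered to.
 */
static void shared_rxq_recycle(struct shared_rxq *q, struct page *page)
{
	page_pool_recycle_direct(q->pool, page);
}
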
>
> So, while filling the RX ring the page is taken from the page pool of
> ndev1, but if the packet is received for ndev2 it has to be later
> returned/recycled to the page pool of ndev1, while the xdp_rxq_info
> handed to the xdp prog with the xdp buffer has a reference on ndev2 ...
>
> And there is no way to predict the final ndev before the packet is
> received, so no way to choose the appropriate page pool now that it
> becomes the page owner.
>
> So, while filling the RX ring, page/dma recycling is still needed, but
> there should be some way to identify the page owner only after the
> packet has been received.
>
> Roughly speaking, something like:
>
> pool->pages_state_hold_cnt++;
>
> outside of the page allocation API, after the packet is received,
> and keep the allocation free of the counter (w/o owning the page).
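
Just to make sure I read the rough idea above correctly, it would look
something like the sketch below? Illustration only: cpsw_rx_refill() and
cpsw_rx_claim() are made-up names, and there is no page_pool helper today
for adjusting the hold counter outside of the allocation path.

#include <net/page_pool.h>

/* Refill: the page comes from, and is accounted to, 'fill_pool'. */
static struct page *cpsw_rx_refill(struct page_pool *fill_pool)
{
	return page_pool_dev_alloc_pages(fill_pool);
}

/*
 * RX completion: only here is the receiving netdev known.  The quoted
 * proposal is to do the hold accounting at this point instead, roughly
 * owner_pool->pages_state_hold_cnt++ -- left empty here, since doing
 * that cleanly would need a new page_pool helper.
 */
static void cpsw_rx_claim(struct page_pool *owner_pool, struct page *page)
{
}
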
If handling it internally is not an option, maybe we can sort something out
for special devices.
Thanks
/Ilias