Message-Id: <20190619.112449.511488634807501138.davem@davemloft.net>
Date: Wed, 19 Jun 2019 11:24:49 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: brouer@...hat.com
Cc: netdev@...r.kernel.org, ilias.apalodimas@...aro.org, toke@...e.dk,
tariqt@...lanox.com, toshiaki.makita1@...il.com,
grygorii.strashko@...com, ivan.khoronzhuk@...aro.org,
mcroce@...hat.com
Subject: Re: [PATCH net-next v2 00/12] xdp: page_pool fixes and in-flight
accounting
From: Jesper Dangaard Brouer <brouer@...hat.com>
Date: Tue, 18 Jun 2019 15:05:07 +0200
> This patchset fixes the page_pool API and its users, such that drivers can use
> it for DMA mapping. There were a number of places where the DMA mapping would
> not get released/unmapped; all of these are fixed. This happens e.g. when an
> xdp_frame gets converted to an SKB, as the network stack doesn't have any
> callback for XDP memory models.
>
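A minimal sketch of the driver-side setup being discussed, assuming the page_pool
API of roughly this era (header locations and some helper names vary between
kernel versions; the my_rxq structure and function names are purely illustrative):
the pool is created with PP_FLAG_DMA_MAP so page_pool owns the DMA mapping, and
the RX queue's XDP memory model is registered as MEM_TYPE_PAGE_POOL so returned
frames reach the pool, where the mapping can be released.

#include <net/page_pool.h>   /* <net/page_pool/helpers.h> in newer kernels */
#include <net/xdp.h>

/* Illustrative driver RX-queue context (not from any real driver). */
struct my_rxq {
	struct device *dma_dev;        /* e.g. &pdev->dev */
	struct page_pool *pool;
	struct xdp_rxq_info xdp_rxq;   /* assumed already set up via xdp_rxq_info_reg() */
};

static int my_rxq_setup_page_pool(struct my_rxq *rxq, u32 pool_size)
{
	struct page_pool_params pp_params = {
		.order     = 0,               /* one page per packet */
		.flags     = PP_FLAG_DMA_MAP, /* page_pool owns the DMA mapping */
		.pool_size = pool_size,
		.nid       = NUMA_NO_NODE,
		.dev       = rxq->dma_dev,
		.dma_dir   = DMA_FROM_DEVICE,
	};
	int err;

	rxq->pool = page_pool_create(&pp_params);
	if (IS_ERR(rxq->pool))
		return PTR_ERR(rxq->pool);

	/* Route xdp_return_frame() and friends for this RX queue back to the
	 * pool, so the DMA mapping is released when pages come back.
	 */
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
					 rxq->pool);
	if (err)
		return err;  /* pool teardown omitted for brevity */

	return 0;
}

The missing-unmap cases mentioned above are exactly the paths where a page leaves
this loop, e.g. when an xdp_frame is turned into an SKB and freed by the normal
network stack instead of being returned through the registered memory model.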
> The patchset also addresses a shutdown race condition. Today, removing an XDP
> memory model based on page_pool is only delayed by one RCU grace period. This
> isn't enough, as redirected xdp_frames can still be in flight on different
> queues (remote driver TX, cpumap or veth).
>
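A conceptual sketch of what in-flight accounting means here; this is not the
page_pool implementation, and the structure and function names below are made up.
The idea is to count pages handed out and pages returned, treat the difference as
in-flight, and only allow shutdown to complete (and DMA mappings to be torn down)
once that difference reaches zero.

#include <linux/atomic.h>
#include <linux/types.h>

/* Conceptual sketch only -- names are illustrative, not page_pool internals. */
struct inflight_acct {
	atomic_long_t held;      /* pages handed to the RX/XDP path */
	atomic_long_t released;  /* pages returned to the pool */
};

static inline void acct_page_out(struct inflight_acct *a)
{
	atomic_long_inc(&a->held);
}

static inline void acct_page_back(struct inflight_acct *a)
{
	atomic_long_inc(&a->released);
}

static inline long acct_inflight(struct inflight_acct *a)
{
	/* Read 'released' before 'held': a page is counted as held before it
	 * can be counted as released, so this ordering avoids spuriously
	 * negative results.
	 */
	long released = atomic_long_read(&a->released);
	long held     = atomic_long_read(&a->held);

	return held - released;
}

/* Shutdown is only safe to complete once nothing is in flight. */
static inline bool acct_shutdown_ok(struct inflight_acct *a)
{
	return acct_inflight(a) == 0;
}

This is why one RCU grace period is not sufficient: a redirected xdp_frame sitting
on a remote TX queue, in cpumap or in veth still holds a pool page, so the pool
must outlive it until the page comes back.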
> We stress that when drivers use page_pool for DMA mapping, they MUST use one
> packet per page. This might change in the future, but more work lies ahead
> before we can lift this restriction.
>
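To make the one-packet-per-page rule concrete, a small illustrative helper (the
function name is made up; the macros are the usual kernel ones): with order-0
pages, a single page has to fit the XDP headroom, the frame data and the
skb_shared_info tail, which bounds the usable frame length.

#include <linux/bpf.h>      /* XDP_PACKET_HEADROOM */
#include <linux/skbuff.h>   /* SKB_DATA_ALIGN(), struct skb_shared_info */

/* Illustrative only: the largest frame that fits when one page carries
 * exactly one packet (order-0 page, XDP headroom in front, shared_info at
 * the end for a later SKB conversion).
 */
static inline unsigned int one_page_max_frame_len(void)
{
	return PAGE_SIZE - XDP_PACKET_HEADROOM -
	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
}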
> This patchset changes the page_pool API to be stricter, as in-flight page
> accounting is added.
Series applied, thanks Jesper.