Message-ID: <fd3be067-3423-a2ed-99c4-c77196dda480@redhat.com>
Date: Wed, 19 Apr 2023 17:12:01 +0200
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Jakub Kicinski <kuba@...nel.org>,
Christoph Hellwig <hch@...radead.org>
Cc: brouer@...hat.com, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, michael.chan@...adcom.com, hawk@...nel.org,
ilias.apalodimas@...aro.org, davem@...emloft.net
Subject: Re: [PATCH net-next] page_pool: add DMA_ATTR_WEAK_ORDERING on all mappings
On 18/04/2023 13.23, Jesper Dangaard Brouer wrote:
> To Hellwig,
>
> On 17/04/2023 17.28, Jakub Kicinski wrote:
>> Commit c519fe9a4f0d ("bnxt: add dma mapping attributes") added
>> DMA_ATTR_WEAK_ORDERING to DMA attrs on bnxt. It has since spread
>> to a few more drivers (possibly as a copy'n'paste).
>>
>> DMA_ATTR_WEAK_ORDERING only seems to matter on Sparc and PowerPC/cell;
>> the rarity of these platforms is likely why we never bothered adding
>> the attribute in the page pool, even though it should be safe to add.
>>
>> To make the page pool migration in drivers which set this flag less
>> of a risk (of regressing the precious Sparc database workloads or
>> whatever needed this), let's add DMA_ATTR_WEAK_ORDERING on all
>> page pool DMA mappings.
>>
>
> This sounds reasonable to me, but I don't know the DMA APIs well enough.
> Thus, I would like to hear whether Hellwig thinks this is okay.
Acking this patch, as we can always revert later if Hellwig has objections.
Acked-by: Jesper Dangaard Brouer <brouer@...hat.com>
>> We could make this a driver opt-in, but frankly I don't think it's
>> worth complicating the API. I can't think of a reason why device
>> accesses to packet memory would have to be ordered.
>>
>> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
>> ---
>> CC: hawk@...nel.org
>> CC: ilias.apalodimas@...aro.org
>> ---
>> net/core/page_pool.c | 5 +++--
>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>> index 2f6bf422ed30..97f20f7ff4fc 100644
>> --- a/net/core/page_pool.c
>> +++ b/net/core/page_pool.c
>> @@ -316,7 +316,8 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>>   	 */
>>   	dma = dma_map_page_attrs(pool->p.dev, page, 0,
>>   				 (PAGE_SIZE << pool->p.order),
>> -				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
>> +				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC |
>> +				 DMA_ATTR_WEAK_ORDERING);
>>   	if (dma_mapping_error(pool->p.dev, dma))
>>   		return false;
>>
>> @@ -484,7 +485,7 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
>>   	/* When page is unmapped, it cannot be returned to our pool */
>>   	dma_unmap_page_attrs(pool->p.dev, dma,
>>   			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
>> -			     DMA_ATTR_SKIP_CPU_SYNC);
>> +			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
>>   	page_pool_set_dma_addr(page, 0);
>>
>> skip_dma_unmap:
>>   	page_pool_clear_pp_info(page);
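
For context, the attrs above only take effect when the driver lets page_pool
do the mapping for it (PP_FLAG_DMA_MAP). A minimal sketch of that usage,
assuming the in-tree page_pool API of this era (include/net/page_pool.h);
the device pointer and pool size are hypothetical, and error handling is
trimmed:

#include <net/page_pool.h>

/* Pool that DMA-maps pages on the driver's behalf; the
 * dma_map_page_attrs() call patched above runs (now with
 * DMA_ATTR_WEAK_ORDERING) each time a fresh page enters the pool.
 */
struct page_pool_params pp = {
	.flags		= PP_FLAG_DMA_MAP,
	.order		= 0,		/* one page per entry */
	.pool_size	= 1024,		/* hypothetical rx ring size */
	.nid		= NUMA_NO_NODE,
	.dev		= &pdev->dev,	/* hypothetical PCI device */
	.dma_dir	= DMA_FROM_DEVICE,
};
struct page_pool *pool = page_pool_create(&pp);	/* ERR_PTR on failure */

struct page *page = page_pool_dev_alloc_pages(pool);
dma_addr_t dma = page_pool_get_dma_addr(page);
/* ... post `dma` to the rx ring; the unmap in
 * page_pool_release_page() uses the same attrs ...
 */

page_pool_put_full_page(pool, page, false);	/* recycle into the pool */
page_pool_destroy(pool);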