Message-ID: <20230731110008.26e8ce03@kernel.org>
Date: Mon, 31 Jul 2023 11:00:08 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: Michael Chan <michael.chan@...adcom.com>, davem@...emloft.net,
netdev@...r.kernel.org, edumazet@...gle.com, pabeni@...hat.com,
gospo@...adcom.com, bpf@...r.kernel.org, somnath.kotur@...adcom.com, Ilias
Apalodimas <ilias.apalodimas@...aro.org>
Subject: Re: [PATCH net-next 3/3] bnxt_en: Let the page pool manage the DMA
mapping
On Mon, 31 Jul 2023 19:47:08 +0200 Jesper Dangaard Brouer wrote:
> > This should be smaller than PAGE_SIZE only if you're wasting the rest
> > of the buffer, e.g. MTU is 3k so you know last 1k will never get used.
> > PAGE_SIZE is always a multiple of BNXT_RX_PAGE so you waste nothing.
>
> Remember pp.max_len is used for dma_sync_for_device.
> If the driver is smart, it can set pp.max_len according to the MTU, as
> the DMA-sync-for-device then covers only the range the hardware can
> actually write.
> On Intel, "dma_sync_for_device" is a no-op, so most drivers haven't
> optimized for this. I remember it had HUGE effects on the ARM
> EspressoBin board.
Note that (AFAIU) there is no MTU here, these are pages for LRO/GRO,
they will be filled with TCP payload start to end. page_pool_put_page()
does nothing for non-last frag, so we'll only sync for the last
(BNXT_RX_PAGE-sized) frag released, and we need to sync the entire
host page.