Message-ID: <CACKFLimJO7Wt90O_F3Nk375rABpAQvKBZhNmBkNzzehYHbk_jA@mail.gmail.com>
Date: Mon, 31 Jul 2023 13:20:04 -0700
From: Michael Chan <michael.chan@...adcom.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, davem@...emloft.net, netdev@...r.kernel.org,
edumazet@...gle.com, pabeni@...hat.com, gospo@...adcom.com,
bpf@...r.kernel.org, somnath.kotur@...adcom.com,
Ilias Apalodimas <ilias.apalodimas@...aro.org>
Subject: Re: [PATCH net-next 3/3] bnxt_en: Let the page pool manage the DMA mapping
On Mon, Jul 31, 2023 at 11:44 AM Jakub Kicinski <kuba@...nel.org> wrote:
> Maybe I'm misunderstanding. Let me tell you how I think this works and
> perhaps we should update the docs based on this discussion.
>
> Note that the max_len is applied to the full host page when the full
> host page is returned. Not to fragments, and not at allocation.
>
I think I am beginning to understand where the confusion is. These 32K
fragments within the host page may not belong to the same (GRO)
packet, so we cannot dma_sync the whole page at once. Without setting
PP_FLAG_DMA_SYNC_DEV, the driver code should be something like this:
	mapping = page_pool_get_dma_addr(page) + offset;
	dma_sync_single_for_device(dev, mapping, BNXT_RX_PAGE_SIZE, bp->rx_dir);

offset may be 0, 32K, etc.
Since the PP_FLAG_DMA_SYNC_DEV logic is not aware of this per-fragment
offset, we must do our own dma_sync and cannot use
PP_FLAG_DMA_SYNC_DEV in this case. Does that sound right?
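
For concreteness, a rough sketch of what I mean below. The helper name
bnxt_alloc_rx_page_frag() is made up for illustration, and it assumes
the page pool fragment API; the actual fragment bookkeeping in the
driver looks different:

static struct page *bnxt_alloc_rx_page_frag(struct bnxt *bp,
					    struct page_pool *pool,
					    dma_addr_t *mapping)
{
	unsigned int offset;
	struct page *page;

	/* Carve one BNXT_RX_PAGE_SIZE (32K) fragment out of a larger
	 * host page.  Other fragments of the same host page may belong
	 * to different (GRO) packets.
	 */
	page = page_pool_dev_alloc_frag(pool, &offset, BNXT_RX_PAGE_SIZE);
	if (!page)
		return NULL;

	/* The pool mapped the whole host page once; apply our offset. */
	*mapping = page_pool_get_dma_addr(page) + offset;

	/* Sync only this fragment for device use.  PP_FLAG_DMA_SYNC_DEV
	 * cannot do this for us because it syncs from offset 0 of the
	 * host page, up to max_len, when the full page is returned.
	 */
	dma_sync_single_for_device(&bp->pdev->dev, *mapping,
				   BNXT_RX_PAGE_SIZE, bp->rx_dir);
	return page;
}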