Message-ID: <CAC_iWjJQepZWVrY8BHgGgRVS1V_fTtGe-i=r8X5z465td3TvbA@mail.gmail.com>
Date: Thu, 17 Aug 2023 19:59:37 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Mina Almasry <almasrymina@...gle.com>, Yunsheng Lin <linyunsheng@...wei.com>, davem@...emloft.net,
pabeni@...hat.com, netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Lorenzo Bianconi <lorenzo@...nel.org>, Alexander Duyck <alexander.duyck@...il.com>,
Liang Chen <liangchen.linux@...il.com>,
Alexander Lobakin <aleksander.lobakin@...el.com>, Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>, Eric Dumazet <edumazet@...gle.com>,
Jesper Dangaard Brouer <hawk@...nel.org>
Subject: Re: [PATCH net-next v7 1/6] page_pool: frag API support for 32-bit
arch with 64-bit DMA

Hi Jakub,
On Thu, 17 Aug 2023 at 19:15, Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Thu, 17 Aug 2023 16:57:16 +0300 Ilias Apalodimas wrote:
> > Why should we care about this? Even an architecture that's 32-bit and
> > has 64-bit DMA should be allowed to split the pages internally if it
> > decides to do so. The trick that drivers usually do is elevate the
> > page refcnt and deal with that internally.
>
> Can we assume the DMA mapping of page_pool is page aligned? We should
> be able to, right?
Yes
> That means we're storing 12 bits of 0 at the lower end.
> So even with 32 bits of space we can easily store addresses covering
> 32 + 12 = 44 bits => 16TB of memory. "Ought to be enough", to
> paraphrase Bill G, so is the problem only in our heads?
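Fair enough, the arithmetic works: with page-aligned mappings the low
PAGE_SHIFT bits are always zero, so a 32-bit field can hold
dma_addr >> PAGE_SHIFT, and with 4K pages that still addresses
2^44 bytes = 16TB. Something along these lines (helper names made up,
not in the patchset):

static inline u32 pp_pack_dma(dma_addr_t addr)
{
	/* the mapping is page aligned, so the low PAGE_SHIFT bits
	 * are zero and nothing is lost in the shift
	 */
	return (u32)(addr >> PAGE_SHIFT);
}

static inline dma_addr_t pp_unpack_dma(u32 packed)
{
	return (dma_addr_t)packed << PAGE_SHIFT;
}
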
Do you mean moving the pp_frag_count there? I was questioning the
need to have PP_FLAG_PAGE_SPLIT_IN_DRIVER at all. With Yunsheng's
patches such a platform would still allocate a full page, so why should
we prevent the driver from splitting it internally?
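The trick I mentioned above looks roughly like this (a simplified,
hypothetical snippet, not from any driver in tree; a real driver would
also handle DMA syncing and the recycle path):

struct rx_buf {
	struct page *page;
	unsigned int offset;
};

static int rx_alloc_split(struct page_pool *pool, struct rx_buf buf[2])
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return -ENOMEM;

	/* elevate the refcount so the page stays around until both
	 * halves have been consumed; the driver tracks this itself
	 */
	page_ref_add(page, 1);

	buf[0].page = page;
	buf[0].offset = 0;
	buf[1].page = page;
	buf[1].offset = PAGE_SIZE / 2;

	return 0;
}
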
Thanks
/Ilias
>
> Before we go that way - Mina, are the dma-buf "chunks" you're working
> with going to be fragmentable? Or rather, can the driver and/or the
> core take multiple references on a single buffer?