Message-ID: <CAK8P3a2dekzohOrHpLq6yyuaoyC4UOxxucu6kX2oddeq5Jdqfg@mail.gmail.com>
Date: Sat, 17 Apr 2021 12:31:37 +0200
From: Arnd Bergmann <arnd@...nel.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>,
Grygorii Strashko <grygorii.strashko@...com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mips@...r.kernel.org" <linux-mips@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
David Laight <David.Laight@...lab.com>,
Matteo Croce <mcroce@...ux.microsoft.com>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
Christoph Hellwig <hch@....de>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH 1/1] mm: Fix struct page layout on 32-bit systems
On Fri, Apr 16, 2021 at 5:27 PM Matthew Wilcox <willy@...radead.org> wrote:
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index b5b195305346..db7c7020746a 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -198,7 +198,17 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
>
> static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
> {
> - return page->dma_addr;
> + dma_addr_t ret = page->dma_addr[0];
> + if (sizeof(dma_addr_t) > sizeof(unsigned long))
> + ret |= (dma_addr_t)page->dma_addr[1] << 32;
> + return ret;
> +}
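(For completeness, the store side of this split would presumably look
something like the sketch below; this is an illustration of the scheme in
the quoted hunk, not code quoted from the patch itself:

	static inline void page_pool_set_dma_addr(struct page *page,
						  dma_addr_t addr)
	{
		/* low 32 bits always fit in the first unsigned long */
		page->dma_addr[0] = addr;
		/* only store the high half when dma_addr_t is wider */
		if (sizeof(dma_addr_t) > sizeof(unsigned long))
			page->dma_addr[1] = upper_32_bits(addr);
	}

upper_32_bits() avoids the shift-count warning when dma_addr_t is only
32 bits wide.)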
Have you considered using a PFN-style address here? I suspect you
can prove that shifting the DMA address right by PAGE_SHIFT would
make it fit into an 'unsigned long' on all 32-bit architectures with a
64-bit dma_addr_t. This requires page->dma_addr to be page aligned
as well as to fit into 44 bits. I recently went through the
maximum address space per architecture to define
MAX_POSSIBLE_PHYSMEM_BITS, and none of them have more than
40 bits here; presumably the same is true for the DMA address space.
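Purely as an illustrative sketch of what I mean (assuming page->dma_addr
is a single unsigned long holding the page-shifted address, and that the
address is page aligned; not a tested implementation):

	static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
	{
		/* widen first, then undo the PAGE_SHIFT compression */
		return (dma_addr_t)page->dma_addr << PAGE_SHIFT;
	}

	static inline void page_pool_set_dma_addr(struct page *page,
						  dma_addr_t addr)
	{
		/* assumes addr is page aligned and fits in 32 + PAGE_SHIFT bits */
		page->dma_addr = addr >> PAGE_SHIFT;
	}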
Arnd