Message-ID: <DB9PR04MB9284735F735FAFC3EAB810D587F9A@DB9PR04MB9284.eurprd04.prod.outlook.com>
Date: Wed, 20 Sep 2023 10:02:35 +0000
From: Hui Fang <hui.fang@....com>
To: Tomasz Figa <tfiga@...omium.org>, Christoph Hellwig <hch@....de>,
Robin Murphy <robin.murphy@....com>
CC: "m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"mchehab@...nel.org" <mchehab@...nel.org>,
"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Anle Pan <anle.pan@....com>, Xuegang Liu <xuegang.liu@....com>
Subject: RE: [EXT] Re: [PATCH] MA-21654 Use dma_alloc_pages in
vb2_dma_sg_alloc_compacted
On Wed, Sep 20, 2023 at 3:41 PM Tomasz Figa <tfiga@...omium.org> wrote:
> Is CONFIG_ZONE_DMA32 really the factor that triggers the problem? My
> understanding was that the problem was that the hardware has 32-bit DMA,
> but the system has physical memory at addresses beyond the first 4G.
Yes, you are right. But CONFIG_ZONE_DMA32 still matters here, because it affects whether swiotlb_init_remap() sets up a bounce buffer at all.
In arch/arm64/mm/init.c:

static void __init zone_sizes_init(void)
{
	......
#ifdef CONFIG_ZONE_DMA32
	max_zone_pfns[ZONE_DMA32] = disable_dma32 ? 0 : PFN_DOWN(dma32_phys_limit);
	if (!arm64_dma_phys_limit)
		arm64_dma_phys_limit = dma32_phys_limit;
#endif
	......
}

void __init mem_init(void)
{
	swiotlb_init(max_pfn > PFN_DOWN(arm64_dma_phys_limit), SWIOTLB_VERBOSE);
	......
}
In kernel/dma/swiotlb.c:

void __init swiotlb_init(bool addressing_limit, unsigned int flags)
{
	swiotlb_init_remap(addressing_limit, flags, NULL);
}

void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
		int (*remap)(void *tlb, unsigned long nslabs))
{
	struct io_tlb_mem *mem = &io_tlb_default_mem;
	unsigned long nslabs;
	size_t alloc_size;
	size_t bytes;
	void *tlb;

	if (!addressing_limit && !swiotlb_force_bounce)
		return;
	......
}
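
So, as far as I can tell, whether a bounce buffer exists at all depends on that config: with CONFIG_ZONE_DMA32=y, arm64_dma_phys_limit stays at the 4G boundary, addressing_limit is true on our board (RAM above 4G) and SWIOTLB is set up; with CONFIG_ZONE_DMA32=n (and no ZONE_DMA), arm64_dma_phys_limit falls back to the full PHYS_MASK range, addressing_limit is false, and swiotlb_init_remap() returns before allocating anything. Reduced to one page, the situation on the driver side looks roughly like this (hypothetical sketch only; example_map_one_page() and dev are made up, and dev is assumed to have a 32-bit DMA mask):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * Hypothetical illustration only: what vb2_dma_sg_alloc_compacted() plus
 * dma_map_sg() effectively run into today, reduced to a single page.
 */
static int example_map_one_page(struct device *dev)
{
	/* plain alloc_page() can return a page above 4G on this system */
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	dma_addr_t addr;

	if (!page)
		return -ENOMEM;

	/*
	 * For a 32-bit master such a page is not dma_capable(), so:
	 *  - CONFIG_ZONE_DMA32=y: SWIOTLB was initialized and the mapping
	 *    is bounced through the (small) bounce buffer;
	 *  - CONFIG_ZONE_DMA32=n: swiotlb_init_remap() returned early, there
	 *    is no bounce buffer, and the mapping fails.
	 */
	addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, addr)) {
		__free_page(page);
		return -EIO;
	}

	dma_unmap_page(dev, addr, PAGE_SIZE, DMA_FROM_DEVICE);
	__free_page(page);
	return 0;
}

Either way, relying on bouncing (or on the mapping failing) for every buffer is what the patch tries to avoid.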
Also, thanks for your suggestion; I will refine the patch accordingly.
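
For reference, the rough idea of the $subject patch is to let the DMA API pick the pages instead of calling alloc_pages() directly. A minimal sketch of the allocation side (example_alloc_chunk() is a made-up helper, not the final patch, and the free path has to pair it with dma_free_pages()):

#include <linux/dma-mapping.h>

/*
 * Made-up helper sketching the allocation side: dma_alloc_pages() respects
 * dev->dma_mask, so a 32-bit master gets pages below 4G and no SWIOTLB
 * bouncing is needed at dma_map_sg() time.
 */
static struct page *example_alloc_chunk(struct device *dev, size_t size,
					enum dma_data_direction dir,
					dma_addr_t *dma_addr)
{
	/* must later be released with dma_free_pages(), not __free_pages() */
	return dma_alloc_pages(dev, size, dma_addr, dir,
			       GFP_KERNEL | __GFP_NOWARN);
}

(The per-chunk dma_addr returned here has to be kept around, since dma_free_pages() needs it.)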
BRs,
Fang Hui