Message-ID: <20170117121459.GG18923@e104818-lin.cambridge.arm.com>
Date: Tue, 17 Jan 2017 12:14:59 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Alexander Graf <agraf@...e.de>
Cc: linux-arm-kernel@...ts.infradead.org,
Jisheng Zhang <jszhang@...vell.com>,
Geert Uytterhoeven <geert+renesas@...der.be>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Will Deacon <will.deacon@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: Fix swiotlb fallback allocation

On Mon, Jan 16, 2017 at 12:46:33PM +0100, Alexander Graf wrote:
> Commit b67a8b29df introduced logic to skip swiotlb allocation when all memory
> is DMA accessible anyway.
>
> While this is a great idea, __dma_alloc still calls swiotlb code unconditionally
> to allocate memory when there is no CMA memory available. The swiotlb code is
> called to ensure that we at least try get_free_pages().
>
> Without initialization, the swiotlb allocation code tries to access
> io_tlb_list, which is NULL. That results in a stack trace like this:
>
> Unable to handle kernel NULL pointer dereference at virtual address 00000000
> [...]
> [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
> [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
> [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
> [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
> [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
> [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
> [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
> [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
> [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
> [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
> [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
> [...]
>
> Thankfully, the swiotlb code just learned how to not do allocations when the
> SWIOTLB_NO_FORCE option is set. This patch configures the swiotlb code to use
> that option if we decide not to initialize the swiotlb framework.
>
> Fixes: b67a8b29df ("arm64: mm: only initialize swiotlb when necessary")
> Signed-off-by: Alexander Graf <agraf@...e.de>
> CC: Catalin Marinas <catalin.marinas@....com>
> CC: Jisheng Zhang <jszhang@...vell.com>
> CC: Geert Uytterhoeven <geert+renesas@...der.be>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>

Thanks for the fix.

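(For reference, the SWIOTLB_NO_FORCE handling described in the commit message
boils down to something like the sketch below. This is only an illustration
derived from the description above, not the actual patch; it assumes the arm64
swiotlb init decision still lives in mem_init() in arch/arm64/mm/init.c.)

	/* Illustrative sketch only; not the submitted patch. */
	void __init mem_init(void)
	{
		if (swiotlb_force == SWIOTLB_FORCE ||
		    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
			swiotlb_init(1);	/* allocate the bounce buffer */
		else
			/* tell the swiotlb core it must never try to use the
			 * (never allocated) bounce buffer */
			swiotlb_force = SWIOTLB_NO_FORCE;

		/* ... rest of mem_init() unchanged ... */
	}
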
BTW, I wonder whether we also need to improve the original commit
slightly, in case we get a device mask smaller than what max_pfn covers:
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index e04082700bb1..23090db2f5ba 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -349,7 +349,7 @@ static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
 	if (swiotlb)
 		return swiotlb_dma_supported(hwdev, mask);
-	return 1;
+	return phys_to_dma(hwdev, PFN_PHYS(max_pfn) - 1) <= mask;
 }
 
 static struct dma_map_ops swiotlb_dma_ops = {
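
To make the intent concrete with made-up numbers (assuming phys_to_dma() adds
no offset, i.e. DMA addresses equal physical addresses): if RAM ends at 32GB,
the highest DMA address is 0x7ffffffff, so a 32-bit mask would now be rejected
while a 64-bit mask still passes; the old unconditional "return 1" claimed
support for both. A quick user-space illustration of just that arithmetic:

	/* Stand-alone illustration of the proposed check, hypothetical numbers. */
	#include <stdio.h>
	#include <stdint.h>

	#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

	int main(void)
	{
		/* stand-in for phys_to_dma(hwdev, PFN_PHYS(max_pfn) - 1) */
		uint64_t highest_dma_addr = (32ULL << 30) - 1;	/* 0x7ffffffff */

		printf("32-bit mask: %s\n", highest_dma_addr <= DMA_BIT_MASK(32) ?
		       "supported" : "not supported");	/* not supported */
		printf("64-bit mask: %s\n", highest_dma_addr <= DMA_BIT_MASK(64) ?
		       "supported" : "not supported");	/* supported */
		return 0;
	}
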
--
Catalin