Message-ID: <20200224215327.GB11565@iweiny-DESK2.sc.intel.com>
Date: Mon, 24 Feb 2020 13:53:28 -0800
From: Ira Weiny <ira.weiny@...el.com>
To: Christoph Hellwig <hch@....de>
Cc: Jonas Bonn <jonas@...thpole.se>,
Stefan Kristiansson <stefan.kristiansson@...nalahti.fi>,
Stafford Horne <shorne@...il.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
openrisc@...ts.librecores.org, iommu@...ts.linux-foundation.org,
linux-arm-kernel@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/5] dma-direct: provide a arch_dma_clear_uncached hook

On Mon, Feb 24, 2020 at 11:44:44AM -0800, Christoph Hellwig wrote:
> This allows the arch code to reset the page tables to cached access when
> freeing a dma coherent allocation that was set to uncached using
> arch_dma_set_uncached.
>
> Signed-off-by: Christoph Hellwig <hch@....de>
> ---
> arch/Kconfig | 7 +++++++
> include/linux/dma-noncoherent.h | 1 +
> kernel/dma/direct.c | 2 ++
> 3 files changed, 10 insertions(+)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 090cfe0c82a7..c26302f90c96 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -255,6 +255,13 @@ config ARCH_HAS_SET_DIRECT_MAP
> config ARCH_HAS_DMA_SET_UNCACHED
> bool
>
> +#
> +# Select if the architectures provides the arch_dma_clear_uncached symbol
> +# to undo an in-place page table remap for uncached access.
> +#
> +config ARCH_HAS_DMA_CLEAR_UNCACHED
> + bool
> +
> # Select if arch init_task must go in the __init_task_data section
> config ARCH_TASK_STRUCT_ON_STACK
> bool
> diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h
> index 1a4039506673..b59f1b6be3e9 100644
> --- a/include/linux/dma-noncoherent.h
> +++ b/include/linux/dma-noncoherent.h
> @@ -109,5 +109,6 @@ static inline void arch_dma_prep_coherent(struct page *page, size_t size)
> #endif /* CONFIG_ARCH_HAS_DMA_PREP_COHERENT */
>
> void *arch_dma_set_uncached(void *addr, size_t size);
> +void arch_dma_clear_uncached(void *addr, size_t size);
>
> #endif /* _LINUX_DMA_NONCOHERENT_H */
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index f01a8191fd59..a8560052a915 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -219,6 +219,8 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
>
> if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
> vunmap(cpu_addr);
> + else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
> + arch_dma_clear_uncached(cpu_addr, size);

Isn't using arch_dma_clear_uncached() before patch 5 going to break
bisectability?
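
(For reference, my mental model of the arch-side hook that patch 5
presumably supplies is roughly the sketch below.  This is only an
illustration of the contract, not the actual patch; set_pages_cached()
is a made-up stand-in for whatever arch-specific page-table update
applies.)

/*
 * Hypothetical sketch only: undo the in-place remap done by
 * arch_dma_set_uncached(), so the kernel mapping of this range is
 * cacheable again before the pages are handed back to the allocator.
 */
void arch_dma_clear_uncached(void *cpu_addr, size_t size)
{
	unsigned long va = (unsigned long)cpu_addr;

	/* made-up helper standing in for the arch page-table update */
	set_pages_cached(va, PAGE_ALIGN(size) >> PAGE_SHIFT);
	flush_tlb_kernel_range(va, va + PAGE_ALIGN(size));
}
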
Ira
>
> dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
> }
> --
> 2.24.1
>