Message-ID: <76840b40fcf26a65467931a73f236982ad39989c.camel@mediatek.com>
Date: Mon, 1 Nov 2021 20:20:58 +0800
From: Walter Wu <walter-zh.wu@...iatek.com>
To: Ard Biesheuvel <ardb@...nel.org>
CC: Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
"Matthias Brugger" <matthias.bgg@...il.com>,
Linux IOMMU <iommu@...ts.linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
wsd_upstream <wsd_upstream@...iatek.com>,
<linux-mediatek@...ts.infradead.org>,
"Andrew Morton" <akpm@...ux-foundation.org>
Subject: Re: [PATCH] dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING
Hi Ard,
On Mon, 2021-11-01 at 09:34 +0100, Ard Biesheuvel wrote:
> On Mon, 1 Nov 2021 at 04:17, Walter Wu <walter-zh.wu@...iatek.com>
> wrote:
> >
> > DMA_ATTR_NO_KERNEL_MAPPING is meant to avoid creating a kernel
> > mapping for the allocated buffer, but with the current
> > implementation the buffer's PTEs in the kernel page table remain
> > valid. Mark those PTEs invalid so that the allocated buffer really
> > has no kernel mapping.
> >
> > In some cases we don't want the allocated buffer to be readable by
> > the CPU, including through speculative execution, so we use
> > DMA_ATTR_NO_KERNEL_MAPPING to ensure there is no kernel mapping.
> >
> > Signed-off-by: Walter Wu <walter-zh.wu@...iatek.com>
> > Cc: Christoph Hellwig <hch@....de>
> > Cc: Marek Szyprowski <m.szyprowski@...sung.com>
> > Cc: Robin Murphy <robin.murphy@....com>
> > Cc: Matthias Brugger <matthias.bgg@...il.com>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > ---
> > kernel/dma/direct.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> > index 4c6c5e0635e3..aa10b4c5d762 100644
> > --- a/kernel/dma/direct.c
> > +++ b/kernel/dma/direct.c
> > @@ -13,6 +13,7 @@
> > #include <linux/vmalloc.h>
> > #include <linux/set_memory.h>
> > #include <linux/slab.h>
> > +#include <asm/cacheflush.h>
> > #include "direct.h"
> >
> > /*
> > @@ -169,6 +170,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> > if (!PageHighMem(page))
> > arch_dma_prep_coherent(page, size);
> > *dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
> > + /* remove kernel mapping for pages */
> > + set_memory_valid((unsigned long)phys_to_virt(dma_to_phys(dev, *dma_handle)),
> > + size >> PAGE_SHIFT, 0);
>
> This only works if the memory is mapped at page granularity in the
> linear region, and you cannot rely on that. Many architectures prefer
> block mappings for the linear region, and arm64 will only use page
> mappings if rodata=full is set (which is set by default but can be
> overridden on the kernel command line).
>
We mainly want to solve this for arm64. RODATA_FULL_DEFAULT_ENABLED is
an arm64-only config option, so if we use
CONFIG_RODATA_FULL_DEFAULT_ENABLED to decide whether to call
set_memory_valid(), the other architectures should be unaffected. Do
you think this method would work?
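Roughly something like the sketch below in dma_direct_alloc()
(untested; the IS_ENABLED() gating and its exact placement are only my
assumption of how it could look):

        if (IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED)) {
                /*
                 * With rodata=full the arm64 linear map uses page
                 * granularity, so clearing the PTEs should be safe.
                 */
                set_memory_valid((unsigned long)phys_to_virt(
                                        dma_to_phys(dev, *dma_handle)),
                                 size >> PAGE_SHIFT, 0);
        }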
Thanks for your explanation and suggestion.
Walter
>
> > /* return the page pointer as the opaque cookie */
> > return page;
> > }
> > @@ -278,6 +282,10 @@ void dma_direct_free(struct device *dev, size_t size,
> >
> > if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> > !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
> > + size = PAGE_ALIGN(size);
> > + /* create kernel mapping for pages */
> > + set_memory_valid((unsigned long)phys_to_virt(dma_to_phys(dev, dma_addr)),
> > + size >> PAGE_SHIFT, 1);
> > /* cpu_addr is a struct page cookie, not a kernel address */
> > dma_free_contiguous(dev, cpu_addr, size);
> > return;
> > --
> > 2.18.0
> >
> >
> > _______________________________________________
> > linux-arm-kernel mailing list
> > linux-arm-kernel@...ts.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> >