Message-ID: <20210614062530.GG28343@lst.de>
Date: Mon, 14 Jun 2021 08:25:30 +0200
From: Christoph Hellwig <hch@....de>
To: Claire Chang <tientzu@...omium.org>
Cc: Rob Herring <robh+dt@...nel.org>, mpe@...erman.id.au,
Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Frank Rowand <frowand.list@...il.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
boris.ostrovsky@...cle.com, jgross@...e.com,
Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
benh@...nel.crashing.org, paulus@...ba.org,
"list@....net:IOMMU DRIVERS" <iommu@...ts.linux-foundation.org>,
sstabellini@...nel.org, Robin Murphy <robin.murphy@....com>,
grant.likely@....com, xypron.glpk@....de,
Thierry Reding <treding@...dia.com>, mingo@...nel.org,
bauerman@...ux.ibm.com, peterz@...radead.org,
Greg KH <gregkh@...uxfoundation.org>,
Saravana Kannan <saravanak@...gle.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
heikki.krogerus@...ux.intel.com,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Randy Dunlap <rdunlap@...radead.org>,
Dan Williams <dan.j.williams@...el.com>,
Bartosz Golaszewski <bgolaszewski@...libre.com>,
linux-devicetree <devicetree@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>,
linuxppc-dev@...ts.ozlabs.org, xen-devel@...ts.xenproject.org,
Nicolas Boichat <drinkcat@...omium.org>,
Jim Quinlan <james.quinlan@...adcom.com>, tfiga@...omium.org,
bskeggs@...hat.com, bhelgaas@...gle.com, chris@...is-wilson.co.uk,
daniel@...ll.ch, airlied@...ux.ie, dri-devel@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org, jani.nikula@...ux.intel.com,
jxgao@...gle.com, joonas.lahtinen@...ux.intel.com,
linux-pci@...r.kernel.org, maarten.lankhorst@...ux.intel.com,
matthew.auld@...el.com, rodrigo.vivi@...el.com,
thomas.hellstrom@...ux.intel.com
Subject: Re: [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available

On Fri, Jun 11, 2021 at 11:26:52PM +0800, Claire Chang wrote:
> Regardless of swiotlb setting, the restricted DMA pool is preferred if
> available.
>
> The restricted DMA pools provide a basic level of protection against
> DMA overwriting buffer contents at unexpected times. However, to
> protect against general data leakage and system memory corruption, the
> system needs to provide a way to lock down the memory access, e.g., an
> MPU.
>
> Note that is_dev_swiotlb_force doesn't check whether
> swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation
> behavior with the default swiotlb would be changed by the following
> patch ("dma-direct: Allocate memory from restricted DMA pool if
> available").
>
> Signed-off-by: Claire Chang <tientzu@...omium.org>
> ---
>  include/linux/swiotlb.h | 10 +++++++++-
>  kernel/dma/direct.c     |  3 ++-
>  kernel/dma/direct.h     |  3 ++-
>  kernel/dma/swiotlb.c    |  1 +
>  4 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 06cf17a80f5c..8200c100fe10 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
>  *              unmap calls.
>  * @debugfs:    The dentry to debugfs.
>  * @late_alloc: %true if allocated using the page allocator
> + * @force_swiotlb: %true if swiotlb is forced
>  */
>  struct io_tlb_mem {
>          phys_addr_t start;
> @@ -95,6 +96,7 @@ struct io_tlb_mem {
>          spinlock_t lock;
>          struct dentry *debugfs;
>          bool late_alloc;
> +        bool force_swiotlb;
>          struct io_tlb_slot {
>                  phys_addr_t orig_addr;
>                  size_t alloc_size;
> @@ -115,6 +117,11 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
>          dev->dma_io_tlb_mem = io_tlb_default_mem;
>  }
> 
> +static inline bool is_dev_swiotlb_force(struct device *dev)
> +{
> +        return dev->dma_io_tlb_mem->force_swiotlb;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -126,8 +133,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>          return false;
>  }
> -static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +static inline bool is_dev_swiotlb_force(struct device *dev)
>  {
> +        return false;
>  }
>  static inline void swiotlb_exit(void)
>  {
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..078f7087e466 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>          /* If SWIOTLB is active, use its maximum mapping size */
>          if (is_swiotlb_active(dev) &&
> -            (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +            (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> +             is_dev_swiotlb_force(dev)))
I think we can remove the extra swiotlb_force check here if the
swiotlb_force setting is propagated into io_tlb_default_mem->force_swiotlb
when that pool is initialized.  This avoids an extra check in the fast
path.
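
Completely untested sketch of what I mean below.  The helper name
swiotlb_set_default_force() is made up; the assignment would go into
whatever code sets up io_tlb_default_mem in this series:

        /*
         * Latch the global command line setting into the per-pool flag
         * once at initialization, so the DMA fast paths only ever need
         * to look at the pool's flag.
         */
        static void swiotlb_set_default_force(struct io_tlb_mem *mem)
        {
                mem->force_swiotlb = (swiotlb_force == SWIOTLB_FORCE);
        }

dma_direct_max_mapping_size() then only needs the per-pool check:

        size_t dma_direct_max_mapping_size(struct device *dev)
        {
                /* If SWIOTLB is active, use its maximum mapping size */
                if (is_swiotlb_active(dev) &&
                    (dma_addressing_limited(dev) ||
                     is_dev_swiotlb_force(dev)))
                        return swiotlb_max_mapping_size(dev);
                return SIZE_MAX;
        }
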
> -        if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +        if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
> +            is_dev_swiotlb_force(dev))
Same here.
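
With the flag folded in at initialization, this check in
dma_direct_map_page() likewise collapses to a single branch (untested;
assuming this hunk guards the swiotlb_map() bounce call in
kernel/dma/direct.h):

        if (unlikely(is_dev_swiotlb_force(dev)))
                return swiotlb_map(dev, phys, size, dir, attrs);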