Message-ID: <c98d594b465d3d8228743bc54017b8c456695219.camel@vmware.com>
Date:   Fri, 6 Dec 2019 14:10:59 +0000
From:   Thomas Hellstrom <thellstrom@...are.com>
To:     "hch@....de" <hch@....de>,
        "christian.koenig@....com" <christian.koenig@....com>
CC:     "thomas.lendacky@....com" <thomas.lendacky@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
Subject: Re: [PATCH 2/2] dma-mapping: force unencrypted devices are always addressing limited

Hi, Christoph.


On Wed, 2019-12-04 at 14:03 +0100, Christoph Hellwig wrote:
> Devices that are forced to DMA through swiotlb need to be treated as
> if they are addressing limited.
> 
> Signed-off-by: Christoph Hellwig <hch@....de>
> ---
>  include/linux/dma-direct.h | 1 +
>  kernel/dma/direct.c        | 8 ++++++--
>  kernel/dma/mapping.c       | 3 +++
>  3 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
> index 24b8684aa21d..83aac21434c6 100644
> --- a/include/linux/dma-direct.h
> +++ b/include/linux/dma-direct.h
> @@ -85,4 +85,5 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  int dma_direct_supported(struct device *dev, u64 mask);
> +bool dma_direct_addressing_limited(struct device *dev);
>  #endif /* _LINUX_DMA_DIRECT_H */
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 6af7ae83c4ad..450f3abe5cb5 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -497,11 +497,15 @@ int dma_direct_supported(struct device *dev, u64 mask)
>  	return mask >= __phys_to_dma(dev, min_mask);
>  }
>  
> +bool dma_direct_addressing_limited(struct device *dev)
> +{
> +	return force_dma_unencrypted(dev) || swiotlb_force == SWIOTLB_FORCE;
> +}
> +
>  size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
> -	if (is_swiotlb_active() &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	if (is_swiotlb_active() && dma_addressing_limited(dev))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
>  }
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 1dbe6d725962..ebc60633d89a 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -416,6 +416,9 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
>   */
>  bool dma_addressing_limited(struct device *dev)
>  {
> +	if (dma_is_direct(get_dma_ops(dev)) &&
> +	    dma_direct_addressing_limited(dev))
> +		return true;

This works fine for vmwgfx, for which the expression below always
evaluates to 0. But it looks like the only current user of
dma_addressing_limited() outside of the dma code, radeon, actually
wants only that expression, to force GFP_DMA32 page allocations when
the device has a limited DMA address space. Perhaps Christian can
elaborate on that.
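
Roughly the kind of thing I think radeon wants, as an untested sketch
(purely illustrative, not the actual radeon code; my_drv_alloc_page()
is a made-up name):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Illustrative only: fall back to 32-bit addressable pages when the
 * device cannot reach all of system memory. */
static struct page *my_drv_alloc_page(struct device *dev)
{
	gfp_t gfp = GFP_KERNEL;

	if (dma_addressing_limited(dev))
		gfp |= GFP_DMA32;

	return alloc_page(gfp);
}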

So in the end it looks like we have two different use cases. One is
to force coherent memory (vmwgfx, possibly other graphics drivers) or
a reduced queue depth (vmw_pvscsi) when we have bounce buffering.
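
For that first case, something along these lines is roughly what I'd
expect a driver to do (again only a sketch; MY_DRV_MAX_SEGS and
my_drv_max_segs() are made-up names, not the vmw_pvscsi code):

#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/mm.h>

#define MY_DRV_MAX_SEGS	128	/* made-up hardware limit */

/* Illustrative only: cap the per-request scatterlist (and thereby the
 * usable queue depth) to what can be bounced in a single mapping. */
static unsigned int my_drv_max_segs(struct device *dev)
{
	size_t max_mapping = dma_max_mapping_size(dev);

	return min_t(unsigned int, MY_DRV_MAX_SEGS,
		     max_mapping >> PAGE_SHIFT);
}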

The other one is to force GFP_DMA32 page allocation when the device's
DMA addressing is limited. Perhaps this mode could be replaced by
using DMA-coherent memory, with that functionality stripped from TTM?

>  	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>  			    dma_get_required_mask(dev);
>  }


Thanks,
Thomas
