Message-ID: <20250117084449.6cfd68b3.alex.williamson@redhat.com>
Date: Fri, 17 Jan 2025 08:44:49 -0500
From: Alex Williamson <alex.williamson@...hat.com>
To: Wencheng Yang <east.moutain.yang@...il.com>
Cc: Joerg Roedel <joro@...tes.org>, Suravee Suthikulpanit
 <suravee.suthikulpanit@....com>, Will Deacon <will@...nel.org>, Robin
 Murphy <robin.murphy@....com>, iommu@...ts.linux.dev,
 linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v2] drivers/iommu/amd: support P2P access through IOMMU
 when SME is enabled

On Fri, 17 Jan 2025 15:14:18 +0800
Wencheng Yang <east.moutain.yang@...il.com> wrote:

> When SME is enabled, the memory encryption bit is set in each IOMMU
> page table PTE.  That works fine as long as the PFN in the PTE refers
> to regular memory.  However, if the PFN is an MMIO address, for
> example when another device's MMIO space is mapped into this device's
> IO page table, setting the memory encryption bit in the PTE breaks
> P2P access.
> 
> Clear the memory encryption bit in the IO page table when the mapping
> is MMIO rather than memory.
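
For context, the __sme_set()/__sme_clr() helpers used in the hunks
below simply apply or strip the global SME C-bit mask; their
definitions in include/linux/mem_encrypt.h boil down to (comments
mine):

	#define __sme_set(x)	((x) | sme_me_mask)	/* mark PTE encrypted */
	#define __sme_clr(x)	((x) & ~sme_me_mask)	/* clear C-bit, e.g. for MMIO */

so clearing the bit leaves the PTE pointing at the bare MMIO address.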
> 
> Signed-off-by: Wencheng Yang <east.moutain.yang@...il.com>
> ---
>  drivers/iommu/amd/amd_iommu_types.h | 7 ++++---
>  drivers/iommu/amd/io_pgtable.c      | 2 ++
>  drivers/iommu/amd/io_pgtable_v2.c   | 5 ++++-
>  drivers/iommu/amd/iommu.c           | 2 ++
>  drivers/vfio/vfio_iommu_type1.c     | 4 +++-
>  include/uapi/linux/vfio.h           | 1 +
>  6 files changed, 16 insertions(+), 5 deletions(-)

This needs to:

 - Be split into separate IOMMU vs VFIO patches
 - Consider and consolidate with other IOMMU implementations that need
   the same handling
 - Provide introspection to userspace relative to the availability of
   the resulting mapping option (one hypothetical shape is sketched
   below)
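
For instance, such introspection could reuse the existing vfio info
capability chain reported through VFIO_IOMMU_GET_INFO.  The capability
ID and structure below are purely illustrative, not existing uAPI:

	/* Hypothetical: advertise that VFIO_DMA_MAP_FLAG_MMIO is honored
	 * by this container.  Chained off VFIO_IOMMU_GET_INFO through
	 * struct vfio_info_cap_header, like the existing type1 info caps. */
	#define VFIO_IOMMU_TYPE1_INFO_CAP_MMIO_MAP	4	/* illustrative ID */

	struct vfio_iommu_type1_info_cap_mmio_map {
		struct vfio_info_cap_header header;
		__u32	flags;		/* reserved, must be 0 */
	};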

It's also not clear to me that the user should be responsible for
setting this flag versus something in the VFIO or IOMMU layer.  For
example, what are the implications of the user setting this flag
incorrectly (not just failing to set it for MMIO, but also setting it
for RAM)?
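
For reference, here's roughly how a user would exercise the proposed
flag through the existing VFIO_IOMMU_MAP_DMA ioctl; container_fd,
bar_vaddr, iova, and size are placeholders, and VFIO_DMA_MAP_FLAG_MMIO
only exists with this patch applied:

	#include <stdint.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/* Map an mmap()ed peer BAR into the container's IOVA space with
	 * the proposed MMIO flag, so the IOMMU driver clears the C-bit. */
	static int map_peer_bar(int container_fd, void *bar_vaddr,
				uint64_t iova, uint64_t size)
	{
		struct vfio_iommu_type1_dma_map map = {
			.argsz = sizeof(map),
			.flags = VFIO_DMA_MAP_FLAG_READ |
				 VFIO_DMA_MAP_FLAG_WRITE |
				 VFIO_DMA_MAP_FLAG_MMIO,	/* proposed here */
			.vaddr = (uintptr_t)bar_vaddr,	/* mmap() of peer BAR */
			.iova  = iova,
			.size  = size,
		};

		if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map)) {
			perror("VFIO_IOMMU_MAP_DMA");
			return -1;
		}
		return 0;
	}
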
Thanks,

Alex

> 
> diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
> index fdb0357e0bb9..b0f055200cf3 100644
> --- a/drivers/iommu/amd/amd_iommu_types.h
> +++ b/drivers/iommu/amd/amd_iommu_types.h
> @@ -434,9 +434,10 @@
>  #define IOMMU_PTE_PAGE(pte) (iommu_phys_to_virt((pte) & IOMMU_PAGE_MASK))
>  #define IOMMU_PTE_MODE(pte) (((pte) >> 9) & 0x07)
>  
> -#define IOMMU_PROT_MASK 0x03
> -#define IOMMU_PROT_IR 0x01
> -#define IOMMU_PROT_IW 0x02
> +#define IOMMU_PROT_MASK 0x07
> +#define IOMMU_PROT_IR   0x01
> +#define IOMMU_PROT_IW   0x02
> +#define IOMMU_PROT_MMIO 0x04
>  
>  #define IOMMU_UNITY_MAP_FLAG_EXCL_RANGE	(1 << 2)
>  
> diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
> index f3399087859f..dff887958a56 100644
> --- a/drivers/iommu/amd/io_pgtable.c
> +++ b/drivers/iommu/amd/io_pgtable.c
> @@ -373,6 +373,8 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
>  			__pte |= IOMMU_PTE_IR;
>  		if (prot & IOMMU_PROT_IW)
>  			__pte |= IOMMU_PTE_IW;
> +		if (prot & IOMMU_PROT_MMIO)
> +			__pte = __sme_clr(__pte);
>  
>  		for (i = 0; i < count; ++i)
>  			pte[i] = __pte;
> diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
> index c616de2c5926..55f969727dea 100644
> --- a/drivers/iommu/amd/io_pgtable_v2.c
> +++ b/drivers/iommu/amd/io_pgtable_v2.c
> @@ -65,7 +65,10 @@ static u64 set_pte_attr(u64 paddr, u64 pg_size, int prot)
>  {
>  	u64 pte;
>  
> -	pte = __sme_set(paddr & PM_ADDR_MASK);
> +	pte = paddr & PM_ADDR_MASK;
> +	if (!(prot & IOMMU_PROT_MMIO))
> +		pte = __sme_set(pte);
> +
>  	pte |= IOMMU_PAGE_PRESENT | IOMMU_PAGE_USER;
>  	pte |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
>  
> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> index 16f40b8000d7..9194ad681504 100644
> --- a/drivers/iommu/amd/iommu.c
> +++ b/drivers/iommu/amd/iommu.c
> @@ -2578,6 +2578,8 @@ static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
>  		prot |= IOMMU_PROT_IR;
>  	if (iommu_prot & IOMMU_WRITE)
>  		prot |= IOMMU_PROT_IW;
> +	if (iommu_prot & IOMMU_MMIO)
> +		prot |= IOMMU_PROT_MMIO;
>  
>  	if (ops->map_pages) {
>  		ret = ops->map_pages(ops, iova, paddr, pgsize,
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 50ebc9593c9d..08be1ef8514b 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -1557,6 +1557,8 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>  		prot |= IOMMU_WRITE;
>  	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
>  		prot |= IOMMU_READ;
> +	if (map->flags & VFIO_DMA_MAP_FLAG_MMIO)
> +		prot |= IOMMU_MMIO;
>  
>  	if ((prot && set_vaddr) || (!prot && !set_vaddr))
>  		return -EINVAL;
> @@ -2801,7 +2803,7 @@ static int vfio_iommu_type1_map_dma(struct vfio_iommu *iommu,
>  	struct vfio_iommu_type1_dma_map map;
>  	unsigned long minsz;
>  	uint32_t mask = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE |
> -			VFIO_DMA_MAP_FLAG_VADDR;
> +			VFIO_DMA_MAP_FLAG_VADDR | VFIO_DMA_MAP_FLAG_MMIO;
>  
>  	minsz = offsetofend(struct vfio_iommu_type1_dma_map, size);
>  
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index c8dbf8219c4f..68002c8f1157 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1560,6 +1560,7 @@ struct vfio_iommu_type1_dma_map {
>  #define VFIO_DMA_MAP_FLAG_READ (1 << 0)		/* readable from device */
>  #define VFIO_DMA_MAP_FLAG_WRITE (1 << 1)	/* writable from device */
>  #define VFIO_DMA_MAP_FLAG_VADDR (1 << 2)
> +#define VFIO_DMA_MAP_FLAG_MMIO (1 << 3)	/* mapping is MMIO */
>  	__u64	vaddr;				/* Process virtual address */
>  	__u64	iova;				/* IO virtual address */
>  	__u64	size;				/* Size of mapping (bytes) */

