Date:   Mon, 22 May 2023 17:17:13 -0700
From:   jsnitsel@...hat.com
To:     Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Cc:     linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
        joro@...tes.org, joao.m.martins@...cle.com,
        alejandro.j.jimenez@...cle.com, boris.ostrovsky@...cle.com,
        jon.grimm@....com, santosh.shukla@....com, vasant.hegde@....com,
        kishon.vijayabraham@....com
Subject: Re: [PATCH v2 4/5] iommu/amd: Do not Invalidate IRT when disable
 IRTE caching

On Thu, May 18, 2023 at 08:55:28PM -0400, Suravee Suthikulpanit wrote:
> With the Interrupt Remapping Table (IRT) cache disabled, there is no
> need to issue IRT invalidation commands and wait for their completion.
> Therefore, add logic to bypass the operation.
> 
> Suggested-by: Joao Martins <joao.m.martins@...cle.com>
> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@....com>

Would it be clearer for the summary to be "iommu/amd: Do not
Invalidate IRT when IRTE caching is disabled"?

Reviewed-by: Jerry Snitselaar <jsnitsel@...hat.com>

> ---
>  drivers/iommu/amd/iommu.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> index 0c4a2796bb0a..51c2b018433d 100644
> --- a/drivers/iommu/amd/iommu.c
> +++ b/drivers/iommu/amd/iommu.c
> @@ -1273,12 +1273,24 @@ static void amd_iommu_flush_irt_all(struct amd_iommu *iommu)
>  	u32 devid;
>  	u16 last_bdf = iommu->pci_seg->last_bdf;
>  
> +	if (iommu->irtcachedis_enabled)
> +		return;
> +
>  	for (devid = 0; devid <= last_bdf; devid++)
>  		iommu_flush_irt(iommu, devid);
>  
>  	iommu_completion_wait(iommu);
>  }
>  
> +static void iommu_flush_irt_and_complete(struct amd_iommu *iommu, u16 devid)
> +{
> +	if (iommu->irtcachedis_enabled)
> +		return;
> +
> +	iommu_flush_irt(iommu, devid);
> +	iommu_completion_wait(iommu);
> +}
> +
>  void iommu_flush_all_caches(struct amd_iommu *iommu)
>  {
>  	if (iommu_feature(iommu, FEATURE_IA)) {
> @@ -3028,8 +3040,7 @@ static int modify_irte_ga(struct amd_iommu *iommu, u16 devid, int index,
>  
>  	raw_spin_unlock_irqrestore(&table->lock, flags);
>  
> -	iommu_flush_irt(iommu, devid);
> -	iommu_completion_wait(iommu);
> +	iommu_flush_irt_and_complete(iommu, devid);
>  
>  	return 0;
>  }
> @@ -3048,8 +3059,7 @@ static int modify_irte(struct amd_iommu *iommu,
>  	table->table[index] = irte->val;
>  	raw_spin_unlock_irqrestore(&table->lock, flags);
>  
> -	iommu_flush_irt(iommu, devid);
> -	iommu_completion_wait(iommu);
> +	iommu_flush_irt_and_complete(iommu, devid);
>  
>  	return 0;
>  }
> @@ -3067,8 +3077,7 @@ static void free_irte(struct amd_iommu *iommu, u16 devid, int index)
>  	iommu->irte_ops->clear_allocated(table, index);
>  	raw_spin_unlock_irqrestore(&table->lock, flags);
>  
> -	iommu_flush_irt(iommu, devid);
> -	iommu_completion_wait(iommu);
> +	iommu_flush_irt_and_complete(iommu, devid);
>  }
>  
>  static void irte_prepare(void *entry,
> -- 
> 2.31.1
> 
