Message-ID: <20240829192804.GJ3773488@nvidia.com>
Date: Thu, 29 Aug 2024 16:28:04 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux.dev, joro@...tes.org,
	robin.murphy@....com, vasant.hegde@....com, ubizjak@...il.com,
	jon.grimm@....com, santosh.shukla@....com, pandoh@...gle.com,
	kumaranand@...gle.com
Subject: Re: [PATCH v2 3/5] iommu/amd: Introduce helper functions to access
 and update 256-bit DTE

On Thu, Aug 29, 2024 at 06:07:24PM +0000, Suravee Suthikulpanit wrote:

> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> index 994ed02842b9..93bca5c68bca 100644
> --- a/drivers/iommu/amd/iommu.c
> +++ b/drivers/iommu/amd/iommu.c
> @@ -85,6 +85,47 @@ static void set_dte_entry(struct amd_iommu *iommu,
>   *
>   ****************************************************************************/
>  
> +static void update_dte256(struct amd_iommu *iommu, struct iommu_dev_data *dev_data,
> +			  struct dev_table_entry *new)
> +{
> +	struct dev_table_entry *dev_table = get_dev_table(iommu);
> +	struct dev_table_entry *ptr = &dev_table[dev_data->devid];
> +	struct dev_table_entry old;
> +	u128 tmp;
> +
> +	down_write(&dev_data->dte_sem);

This locking is too narrow. The critical region needs to span from
get_dte256() to update_dte256(), because the get retrieves the value
written by set_dte_irq_entry(), and that value must not change while
the new DTE is being built.

I suggest you copy the IRQ data from old to new here in this
function, under the lock, and then store it so it is always fresh.

Ideally you would remove get_dte256() entirely, because the driver
*really* shouldn't be building a new DTE in a way that assumes
something is already in the DTE (for instance see my remarks on the
nesting work).

Really the only reason to read the DTE is to get the IRQ data..
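
Roughly I have something like this in mind - just a sketch, assuming
the IRQ fields live entirely in the high 128-bit half, and with
update_dte256_locked() as a made up locked variant:

static void write_dte256(struct amd_iommu *iommu,
			 struct iommu_dev_data *dev_data,
			 struct dev_table_entry *new)
{
	struct dev_table_entry *ptr = &get_dev_table(iommu)[dev_data->devid];

	down_write(&dev_data->dte_sem);

	/* Refresh the IRQ words from the live DTE so they are never stale */
	new->data128[1] = ptr->data128[1];

	/* Store the whole entry while still holding the lock */
	update_dte256_locked(iommu, dev_data, new);

	up_write(&dev_data->dte_sem);
}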

> +	old.data128[0] = ptr->data128[0];
> +	old.data128[1] = ptr->data128[1];
> +
> +	tmp = cmpxchg128(&ptr->data128[1], old.data128[1], new->data128[1]);
> +	if (tmp == old.data128[1]) {
> +		if (!try_cmpxchg128(&ptr->data128[0], &old.data128[0], new->data128[0])) {
> +			/* Restore hi 128-bit */
> +			cmpxchg128(&ptr->data128[1], new->data128[1], tmp);

I don't think you should restore. Hitting this path reflects a
locking error, but we still need to move forward and put some kind of
correct data in.. The code can't go backwards, so it should try to
move forwards..
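
ie something like this instead of the restore - only a sketch, relying
on try_cmpxchg128() refreshing 'old' on failure:

	if (!try_cmpxchg128(&ptr->data128[0], &old.data128[0],
			    new->data128[0])) {
		/* A concurrent writer while we hold the lock is a bug */
		WARN_ON_ONCE(1);
		/* Don't go backwards, keep pushing the new value in */
		while (!try_cmpxchg128(&ptr->data128[0], &old.data128[0],
				       new->data128[0]))
			;
	}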

On ordering, I don't know, is this OK?

If you are entering/leaving nesting mode I think you have to write
the [2] value in the right sequence: you don't want the viommu enabled
unless the host page table is set up properly. So [2] is written last
when enabling, and first when disabling. Flushes are required after
each write to ensure the HW doesn't see a value torn across the
128-bit words.

GuestPagingMode also has to be sequenced correctly: the GCR3 table
pointer should be invalid when GuestPagingMode is changed, meaning you
have to write GuestPagingMode and flush before storing the GCR3 table
pointer, and the reverse to undo it.

The ordering, including when DTE flushes are needed, is pretty
hard. This is much simpler than, say, ARM, so I think you could open
code it, but it should be a pretty sizable bit of logic to figure out
what to do.
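
To sketch the enable direction (the helpers and the split are made up,
only the sequence matters):

static void dte_enable_nesting(struct amd_iommu *iommu,
			       struct iommu_dev_data *dev_data,
			       struct dev_table_entry *new)
{
	/* 1) Make the host page table / GCR3 words valid first */
	write_dte_lo128(iommu, dev_data, new);		/* made up helper */
	flush_dte_and_wait(iommu, dev_data->devid);	/* made up helper */

	/* 2) Only then set GuestPagingMode / viommu enable in [2] */
	write_dte_word2(iommu, dev_data, new);		/* made up helper */
	flush_dte_and_wait(iommu, dev_data->devid);
}

Disabling would be the mirror image: clear [2] and flush before
tearing down the GCR3 / host page table words.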

> +static void get_dte256(struct amd_iommu *iommu, struct iommu_dev_data *dev_data,
> +		      struct dev_table_entry *dte)
> +{
> +	struct dev_table_entry *ptr;
> +	struct dev_table_entry *dev_table = get_dev_table(iommu);
> +
> +	ptr = &dev_table[dev_data->devid];
> +
> +	down_read(&dev_data->dte_sem);
> +	dte->data128[0] = ptr->data128[0];
> +	dte->data128[1] = ptr->data128[1];
> +	up_read(&dev_data->dte_sem);

I don't think you need a rwsem either. As above, you shouldn't need
to read the DTE to build a new one, and you can't allow the interrupt
update to run concurrently regardless, so a simple spinlock would be
sufficient and faster, I think.
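
eg the read side could just be this (with dte_lock as a spinlock_t in
struct iommu_dev_data replacing the rwsem, name made up):

static void get_dte256(struct amd_iommu *iommu,
		       struct iommu_dev_data *dev_data,
		       struct dev_table_entry *dte)
{
	struct dev_table_entry *ptr = &get_dev_table(iommu)[dev_data->devid];

	spin_lock(&dev_data->dte_lock);
	dte->data128[0] = ptr->data128[0];
	dte->data128[1] = ptr->data128[1];
	spin_unlock(&dev_data->dte_lock);
}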

> @@ -2681,16 +2732,16 @@ static int amd_iommu_set_dirty_tracking(struct iommu_domain *domain,
>  	}
>  
>  	list_for_each_entry(dev_data, &pdomain->dev_list, list) {
> -		iommu = get_amd_iommu_from_dev_data(dev_data);
> +		struct dev_table_entry dte;
>  
> -		dev_table = get_dev_table(iommu);
> -		pte_root = dev_table[dev_data->devid].data[0];
> +		iommu = get_amd_iommu_from_dev_data(dev_data);
> +		get_dte256(iommu, dev_data, &dte);
>  
> -		pte_root = (enable ? pte_root | DTE_FLAG_HAD :
> -				     pte_root & ~DTE_FLAG_HAD);
> +		dte.data[0] = (enable ? dte.data[0] | DTE_FLAG_HAD :
> +				     dte.data[0] & ~DTE_FLAG_HAD);
>

And this doesn't need all that logic just to flip one bit in a single
64-bit quantity..
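
Something much simpler would do, eg - sketch only, using the made up
dte_lock from above:

static void set_dte_had(struct amd_iommu *iommu,
			struct iommu_dev_data *dev_data, bool enable)
{
	struct dev_table_entry *ptr = &get_dev_table(iommu)[dev_data->devid];
	u64 val;

	spin_lock(&dev_data->dte_lock);
	val = ptr->data[0];
	if (enable)
		val |= DTE_FLAG_HAD;
	else
		val &= ~DTE_FLAG_HAD;
	/* A single aligned 64-bit store, no 256-bit machinery needed */
	WRITE_ONCE(ptr->data[0], val);
	spin_unlock(&dev_data->dte_lock);
}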

Jason
