Message-ID: <4c72db01-448a-4bda-89e0-9c92a2f89154@amd.com>
Date: Fri, 6 Sep 2024 00:54:25 +0700
From: "Suthikulpanit, Suravee" <suravee.suthikulpanit@....com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux.dev, joro@...tes.org,
robin.murphy@....com, vasant.hegde@....com, ubizjak@...il.com,
jon.grimm@....com, santosh.shukla@....com, pandoh@...gle.com,
kumaranand@...gle.com
Subject: Re: [PATCH v2 3/5] iommu/amd: Introduce helper functions to access
and update 256-bit DTE
Hi,
On 8/30/2024 2:28 AM, Jason Gunthorpe wrote:
> On Thu, Aug 29, 2024 at 06:07:24PM +0000, Suravee Suthikulpanit wrote:
>
>> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
>> index 994ed02842b9..93bca5c68bca 100644
>> --- a/drivers/iommu/amd/iommu.c
>> +++ b/drivers/iommu/amd/iommu.c
>> @@ -85,6 +85,47 @@ static void set_dte_entry(struct amd_iommu *iommu,
>> *
>> ****************************************************************************/
>>
>> +static void update_dte256(struct amd_iommu *iommu, struct iommu_dev_data *dev_data,
>> + struct dev_table_entry *new)
>> +{
>> + struct dev_table_entry *dev_table = get_dev_table(iommu);
>> + struct dev_table_entry *ptr = &dev_table[dev_data->devid];
>> + struct dev_table_entry old;
>> + u128 tmp;
>> +
>> + down_write(&dev_data->dte_sem);
>
> This locking is too narrow, you need the critical region to span from
> the get_dte256() till the update_dte256() because the get is
> retrieving the value written by set_dte_irq_entry(), and it must not
> change while the new DTE is worked on.
Ok.
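Something like this, with the whole read-modify-write in one critical
section and the IRQ fields refreshed from the live entry as you suggest
below (DTE_IRQ_FIELDS_MASK is a placeholder for whichever bits
set_dte_irq_entry() owns):

static void update_dte256(struct amd_iommu *iommu,
                          struct iommu_dev_data *dev_data,
                          struct dev_table_entry *new)
{
        struct dev_table_entry *dev_table = get_dev_table(iommu);
        struct dev_table_entry *ptr = &dev_table[dev_data->devid];
        struct dev_table_entry old;

        down_write(&dev_data->dte_sem);

        /* Snapshot under the lock so set_dte_irq_entry() cannot race */
        old.data128[0] = ptr->data128[0];
        old.data128[1] = ptr->data128[1];

        /* Carry the live interrupt remapping fields over into the new DTE */
        new->data128[1] &= ~(u128)DTE_IRQ_FIELDS_MASK;
        new->data128[1] |= old.data128[1] & (u128)DTE_IRQ_FIELDS_MASK;

        /* ... cmpxchg128 stores of both halves, still under the lock ... */

        up_write(&dev_data->dte_sem);
}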
> I suggest you copy the IRQ data here in this function under the lock
> from old to new and then store it so it is always fresh.
>
> Ideally you would remove get_dte256() because the driver *really*
> shouldn't be changing the DTE in some way that already assumes
> something is in the DTE (for instance my remarks on the nesting work)
>
> Really the only reason to read the DTE is the get the IRQ data..
I plan to use the get_dte256() helper function to extract the DTE for
various purposes. Getting the IRQ data is only one use case. There are
other fields, which are programmed early in the driver init phase (e.g.
DTE[96:106]).
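A caller building a new DTE would then do roughly this
(DTE_DATA1_INIT_MASK is a made-up name covering those DTE[96:106] bits):

        struct dev_table_entry new = {};
        struct dev_table_entry cur;

        get_dte256(iommu, dev_data, &cur);

        /* Preserve the fields programmed during early driver init */
        new.data[1] |= cur.data[1] & DTE_DATA1_INIT_MASK;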
>> + old.data128[0] = ptr->data128[0];
>> + old.data128[1] = ptr->data128[1];
>> +
>> + tmp = cmpxchg128(&ptr->data128[1], old.data128[1], new->data128[1]);
>> + if (tmp == old.data128[1]) {
>> + if (!try_cmpxchg128(&ptr->data128[0], &old.data128[0], new->data128[0])) {
>> + /* Restore hi 128-bit */
>> + cmpxchg128(&ptr->data128[1], new->data128[1], tmp);
>
> I don't think you should restore, this should reflect a locking error
> but we still need to move forward and put some kind of correct
> data.. The code can't go backwards so it should try to move forwards..
In case of error, what if we pr_warn() and put the device into blocking
mode, since we need to prevent malicious DMA?
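Roughly like this, where set_dte_blocked() is a hypothetical helper
that writes a valid but translation-disabled entry:

        if (!try_cmpxchg128(&ptr->data128[0], &old.data128[0],
                            new->data128[0])) {
                /*
                 * A concurrent writer snuck in despite the lock - this
                 * is a locking bug. Do not go backwards; warn and block
                 * the device so no rogue DMA can get through.
                 */
                pr_warn("%s: DTE[%#x] changed under us, blocking device\n",
                        __func__, dev_data->devid);
                set_dte_blocked(ptr);
        }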
> On ordering, I don't know, is this OK?
>
> If you are leaving/entering nesting mode I think you have to write the
> [2] value in the right sequence, you don't want to have the viommu
> enabled unless the host page table is setup properly. So [2] is
> written last when enabling, and first when disabling. Flushes required
> after each write to ensure the HW doesn't see a cross-128 word bit
> tear.
>
> GuestPagingMode also has to be sequenced correctly, the GCR3 table
> pointer should be invalid when it is changed, meaning you have to
> write it and flush before storing the GCR3 table, and the reverse to
> undo it.
>
> The ordering, including when DTE flushes are needed, is pretty
> hard. This is much simpler than, say, ARM, so I think you could open
> code it, but it should be a pretty sizable bit of logic to figure out
> what to do.
The IOMMU hardware does not partially interpret the DTE, and software
ensures a DTE flush after updating the DTE. Therefore, ordering should
not be a concern here as long as the driver programs the entry correctly.
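i.e. every update already ends up doing roughly:

        update_dte256(iommu, dev_data, &new);

        /*
         * Invalidate the cached DTE and wait for completion. Until the
         * flush completes the IOMMU may keep using the old entry, but
         * since each 128-bit half is stored atomically it never sees a
         * torn one.
         */
        device_flush_dte(dev_data);
        iommu_completion_wait(iommu);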
>> +static void get_dte256(struct amd_iommu *iommu, struct iommu_dev_data *dev_data,
>> + struct dev_table_entry *dte)
>> +{
>> + struct dev_table_entry *ptr;
>> + struct dev_table_entry *dev_table = get_dev_table(iommu);
>> +
>> + ptr = &dev_table[dev_data->devid];
>> +
>> + down_read(&dev_data->dte_sem);
>> + dte->data128[0] = ptr->data128[0];
>> + dte->data128[1] = ptr->data128[1];
>> + up_read(&dev_data->dte_sem);
>
> I don't think you need a rwsem either. As above, you shouldn't be
> reading anyhow to build a DTE, and you can't allow the interrupt
> update to run regardless, so a simple spinlock would be sufficient and
> faster, I think.
Ok.
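With a per-device spinlock (renaming dte_sem to dte_lock), the read
side would look roughly like this:

static void get_dte256(struct amd_iommu *iommu,
                       struct iommu_dev_data *dev_data,
                       struct dev_table_entry *dte)
{
        struct dev_table_entry *dev_table = get_dev_table(iommu);
        struct dev_table_entry *ptr = &dev_table[dev_data->devid];
        unsigned long flags;

        spin_lock_irqsave(&dev_data->dte_lock, flags);
        dte->data128[0] = ptr->data128[0];
        dte->data128[1] = ptr->data128[1];
        spin_unlock_irqrestore(&dev_data->dte_lock, flags);
}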
>> @@ -2681,16 +2732,16 @@ static int amd_iommu_set_dirty_tracking(struct iommu_domain *domain,
>> }
>>
>> list_for_each_entry(dev_data, &pdomain->dev_list, list) {
>> - iommu = get_amd_iommu_from_dev_data(dev_data);
>> + struct dev_table_entry dte;
>>
>> - dev_table = get_dev_table(iommu);
>> - pte_root = dev_table[dev_data->devid].data[0];
>> + iommu = get_amd_iommu_from_dev_data(dev_data);
>> + get_dte256(iommu, dev_data, &dte);
>>
>> - pte_root = (enable ? pte_root | DTE_FLAG_HAD :
>> - pte_root & ~DTE_FLAG_HAD);
>> + dte.data[0] = (enable ? dte.data[0] | DTE_FLAG_HAD :
>> + dte.data[0] & ~DTE_FLAG_HAD);
>>
>
> And this doesn't need all the logic just to flip one bit in a single
> 64bit quantity..
Ok.
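For a single bit in data[0], something like this (still under the
per-device lock so the interrupt path cannot race) should be enough:

        struct dev_table_entry *dev_table = get_dev_table(iommu);
        u64 *pte_root = &dev_table[dev_data->devid].data[0];

        if (enable)
                WRITE_ONCE(*pte_root, *pte_root | DTE_FLAG_HAD);
        else
                WRITE_ONCE(*pte_root, *pte_root & ~DTE_FLAG_HAD);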
Thanks,
Suravee