Message-ID: <DA679904-017D-477A-9284-46644D6F9858@vmware.com>
Date: Tue, 15 Jun 2021 18:51:08 +0000
From: Nadav Amit <namit@...are.com>
To: Robin Murphy <robin.murphy@....com>
CC: Joerg Roedel <joro@...tes.org>,
LKML <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
Jiajun Cao <caojiajun@...are.com>,
Will Deacon <will@...nel.org>,
"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>
Subject: Re: [PATCH v3 6/6] iommu/amd: Sync once for scatter-gather operations
> On Jun 15, 2021, at 4:25 AM, Robin Murphy <robin.murphy@....com> wrote:
>
> On 2021-06-07 19:25, Nadav Amit wrote:
>> From: Nadav Amit <namit@...are.com>
>> On virtual machines, software must flush the IOTLB after each page table
>> entry update.
>> The iommu_map_sg() code iterates through the given scatter-gather list
>> and invokes iommu_map() for each element in the list, which calls into
>> the vendor IOMMU driver through the iommu_ops callback. As a result, a
>> single sg mapping may lead to multiple IOTLB flushes.
>> Fix this by adding an amd_iommu_iotlb_sync_map() callback and flushing
>> once at that point, after all sg mappings are set.
>> This patch follows and is inspired by commit 933fcd01e97e2
>> ("iommu/vt-d: Add iotlb_sync_map callback").
>> Cc: Joerg Roedel <joro@...tes.org>
>> Cc: Will Deacon <will@...nel.org>
>> Cc: Jiajun Cao <caojiajun@...are.com>
>> Cc: Robin Murphy <robin.murphy@....com>
>> Cc: Lu Baolu <baolu.lu@...ux.intel.com>
>> Cc: iommu@...ts.linux-foundation.org
>> Cc: linux-kernel@...r.kernel.org
>> Signed-off-by: Nadav Amit <namit@...are.com>
>> ---
>> drivers/iommu/amd/iommu.c | 15 ++++++++++++---
>> 1 file changed, 12 insertions(+), 3 deletions(-)
>> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
>> index 128f2e889ced..dd23566f1db8 100644
>> --- a/drivers/iommu/amd/iommu.c
>> +++ b/drivers/iommu/amd/iommu.c
>> @@ -2027,6 +2027,16 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
>> return ret;
>> }
>> +static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
>> + unsigned long iova, size_t size)
>> +{
>> + struct protection_domain *domain = to_pdomain(dom);
>> + struct io_pgtable_ops *ops = &domain->iop.iop.ops;
>> +
>> + if (ops->map)
>
> Not too critical since you're only moving existing code around, but is ops->map ever not set? Either way the check ends up looking rather out-of-place here :/
>
> It's not very clear what the original intent was - I do wonder whether it's supposed to be related to PAGE_MODE_NONE, but given that amd_iommu_map() has an explicit check and errors out early in that case, we'd never get here anyway. Possibly something to come back and clean up later?
[ +Suravee ]
According to what I see in the git log, the checks for ops->map (as well as ops->unmap) were introduced relatively recently by Suravee [1] in preparation for the AMD IOMMU v2 page tables [2]. Since I do not know what he has planned, I prefer not to touch this code.
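
For reference, the hunk continues with just a single conditional flush via
domain_flush_np_cache(), the helper the driver already uses on its map
paths, so the whole callback is roughly:

static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
				     unsigned long iova, size_t size)
{
	struct protection_domain *domain = to_pdomain(dom);
	struct io_pgtable_ops *ops = &domain->iop.iop.ops;

	/* Flush only if the io-pgtable ops are populated */
	if (ops->map)
		domain_flush_np_cache(domain, iova, size);
}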
[1] https://lore.kernel.org/linux-iommu/20200923101442.73157-13-suravee.suthikulpanit@amd.com/
[2] https://lore.kernel.org/linux-iommu/20200923101442.73157-1-suravee.suthikulpanit@amd.com/
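
For anyone skimming the thread: the batching that the patch relies on lives
in the core map_sg path, which conceptually does something like the
following (a simplified sketch, not the exact drivers/iommu/iommu.c code;
error handling omitted):

/*
 * Sketch: map every sg element without flushing, then give the
 * driver a single iotlb_sync_map() call for the whole range.
 */
static size_t example_map_sg(struct iommu_domain *domain, unsigned long iova,
			     struct scatterlist *sg, unsigned int nents,
			     int prot)
{
	size_t mapped = 0;
	unsigned int i;

	for (i = 0; i < nents; i++, sg = sg_next(sg)) {
		if (iommu_map(domain, iova + mapped, sg_phys(sg),
			      sg->length, prot))
			return 0;	/* real code unwinds partial mappings */
		mapped += sg->length;
	}

	/* One IOTLB flush for the whole list instead of one per element */
	if (domain->ops->iotlb_sync_map)
		domain->ops->iotlb_sync_map(domain, iova, mapped);

	return mapped;
}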