Message-ID: <325046c8-cfc3-c42a-0b39-bafc1acae800@linux.intel.com>
Date: Wed, 31 May 2023 12:02:08 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: baolu.lu@...ux.intel.com,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>, iommu@...ts.linux.dev,
Joerg Roedel <joro@...tes.org>, dmaengine@...r.kernel.org,
vkoul@...nel.org, Robin Murphy <robin.murphy@....com>,
Will Deacon <will@...nel.org>,
David Woodhouse <dwmw2@...radead.org>,
Raj Ashok <ashok.raj@...el.com>,
"Tian, Kevin" <kevin.tian@...el.com>, Yi Liu <yi.l.liu@...el.com>,
"Yu, Fenghua" <fenghua.yu@...el.com>,
Dave Jiang <dave.jiang@...el.com>,
Tony Luck <tony.luck@...el.com>,
"Zanussi, Tom" <tom.zanussi@...el.com>,
narayan.ranganathan@...el.com
Subject: Re: [PATCH v6 3/4] iommu/vt-d: Add set_dev_pasid callback for dma
domain
On 5/31/23 12:55 AM, Jason Gunthorpe wrote:
> On Tue, May 30, 2023 at 10:19:05AM +0800, Baolu Lu wrote:
>> On 5/30/23 3:48 AM, Jason Gunthorpe wrote:
>>> On Fri, May 19, 2023 at 01:32:22PM -0700, Jacob Pan wrote:
>>>
>>>> @@ -4720,25 +4762,99 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
>>>>  static void intel_iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid)
>>>>  {
>>>>  	struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL);
>>>> +	struct dev_pasid_info *curr, *dev_pasid = NULL;
>>>> +	struct dmar_domain *dmar_domain;
>>>>  	struct iommu_domain *domain;
>>>> +	unsigned long flags;
>>>>
>>>> -	/* Domain type specific cleanup: */
>>>>  	domain = iommu_get_domain_for_dev_pasid(dev, pasid, 0);
>>>> -	if (domain) {
>>>> -		switch (domain->type) {
>>>> -		case IOMMU_DOMAIN_SVA:
>>>> -			intel_svm_remove_dev_pasid(dev, pasid);
>>>> -			break;
>>>> -		default:
>>>> -			/* should never reach here */
>>>> -			WARN_ON(1);
>>>> +	if (!domain)
>>>> +		goto out_tear_down;
>>>> +
>>>> +	/*
>>>> +	 * The SVA implementation needs to stop mm notification, drain the
>>>> +	 * pending page fault requests before tearing down the pasid entry.
>>>> +	 * The VT-d spec (section 6.2.3.1) also recommends that software
>>>> +	 * could use a reserved domain id for all first-only and pass-through
>>>> +	 * translations. Hence there's no need to call domain_detach_iommu()
>>>> +	 * in the sva domain case.
>>>> +	 */
>>>> +	if (domain->type == IOMMU_DOMAIN_SVA) {
>>>> +		intel_svm_remove_dev_pasid(dev, pasid);
>>>> +		goto out_tear_down;
>>>> +	}
>>>
>>> But why don't you need to do all the other
>>> intel_pasid_tear_down_entry(), intel_svm_drain_prq() (which is
>>> misnamed) and other stuff from intel_svm_remove_dev_pasid() ?
>>
>> Perhaps,
>>
>> 	if (domain->type == IOMMU_DOMAIN_SVA) {
>> 		intel_svm_remove_dev_pasid(dev, pasid);
>> 		return;
>> 	}
>>
>> ?
>
> I would expect only stuff directly connected to SVM be in the SVM
> function.
>
> De-initializing PRI and any other PASID destruction should be in this
> function.
>
>>> There still seems to be waaay too much "SVM" in the PASID code.
>>
>> This segment of code is destined to be temporary. From a long-term
>> perspective, I hope to move SVA-specific stuff such as the mm
>> notification, prq draining, etc. into the iommu core. They are generic
>> rather than Intel iommu specific.
>
> Yes, sort of, but.. That is just the mmu notifier bits
>
> All the PRI/PASID teardown needs to be unlinked from SVM
I get your point now. Yes, PRI and PASID teardown are not SVA-specific.
Sorry, we should also rename SVM to SVA to unify with the Linux
terminology.
>
>>> It would be nice if the different domain types had their own ops..
>>
>> Good suggestion!
>>
>> We can add a domain ops in the Intel domain structure which is
>> responsible for how to install an Intel iommu domain onto the VT-d
>> hardware.
>
> We should have separate iommu_domain_ops at least, I think that would
> cover a lot of it?
Are you suggesting adding these ops to the common iommu_domain or to
Intel's dmar_domain? My understanding is the latter. To do so, we
probably need to define various callbacks for the different types of
domains: identity, blocking, DMA remapping, SVA, and possibly nested. We
also need to take care of legacy vs. scalable mode.

That's the reason why I hoped to do all of this in a separate series,
with careful review and testing.
Best regards,
baolu