Message-ID: <e42373e3-10d5-5a34-8f33-8bb82d64fb19@huawei.com>
Date: Thu, 15 Apr 2021 14:18:09 +0800
From: Keqian Zhu <zhukeqian1@...wei.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>,
<linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<iommu@...ts.linux-foundation.org>,
Robin Murphy <robin.murphy@....com>,
Will Deacon <will@...nel.org>,
"Joerg Roedel" <joro@...tes.org>,
Yi Sun <yi.y.sun@...ux.intel.com>,
"Jean-Philippe Brucker" <jean-philippe@...aro.org>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Tian Kevin <kevin.tian@...el.com>
CC: Alex Williamson <alex.williamson@...hat.com>,
Cornelia Huck <cohuck@...hat.com>,
Kirti Wankhede <kwankhede@...dia.com>,
<wanghaibin.wang@...wei.com>, <jiangkunkun@...wei.com>,
<yuzenghui@...wei.com>, <lushenming@...wei.com>
Subject: Re: [PATCH v3 01/12] iommu: Introduce dirty log tracking framework
Hi Baolu,
Thanks for the review!
On 2021/4/14 15:00, Lu Baolu wrote:
> Hi Keqian,
>
> On 4/13/21 4:54 PM, Keqian Zhu wrote:
>> Some types of IOMMU are capable of tracking DMA dirty log, such as
>> ARM SMMU with HTTU or Intel IOMMU with SLADE. This introduces the
>> dirty log tracking framework in the IOMMU base layer.
>>
>> Three new essential interfaces are added, and we maintain the status
>> of dirty log tracking in iommu_domain.
>> 1. iommu_switch_dirty_log: Perform actions to start|stop dirty log tracking
>> 2. iommu_sync_dirty_log: Sync dirty log from IOMMU into a dirty bitmap
>> 3. iommu_clear_dirty_log: Clear dirty log of IOMMU by a mask bitmap
>>
>> A new dev feature is added to indicate whether a specific type of
>> IOMMU hardware supports dirty log tracking and its driver implements it.
>>
>> Signed-off-by: Keqian Zhu <zhukeqian1@...wei.com>
>> Signed-off-by: Kunkun Jiang <jiangkunkun@...wei.com>
>> ---
>> drivers/iommu/iommu.c | 150 ++++++++++++++++++++++++++++++++++++++++++
>> include/linux/iommu.h | 53 +++++++++++++++
>> 2 files changed, 203 insertions(+)
>>
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index d0b0a15dba84..667b2d6d2fc0 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -1922,6 +1922,7 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
>> domain->type = type;
>> /* Assume all sizes by default; the driver may override this later */
>> domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;
>> + mutex_init(&domain->switch_log_lock);
>> return domain;
>> }
>> @@ -2720,6 +2721,155 @@ int iommu_domain_set_attr(struct iommu_domain *domain,
>> }
>> EXPORT_SYMBOL_GPL(iommu_domain_set_attr);
>> +int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
>> + unsigned long iova, size_t size, int prot)
>> +{
>> + const struct iommu_ops *ops = domain->ops;
>> + int ret;
>> +
>> + if (unlikely(!ops || !ops->switch_dirty_log))
>> + return -ENODEV;
>> +
>> + mutex_lock(&domain->switch_log_lock);
>> + if (enable && domain->dirty_log_tracking) {
>> + ret = -EBUSY;
>> + goto out;
>> + } else if (!enable && !domain->dirty_log_tracking) {
>> + ret = -EINVAL;
>> + goto out;
>> + }
>> +
>> + ret = ops->switch_dirty_log(domain, enable, iova, size, prot);
>> + if (ret)
>> + goto out;
>> +
>> + domain->dirty_log_tracking = enable;
>> +out:
>> + mutex_unlock(&domain->switch_log_lock);
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(iommu_switch_dirty_log);
>
> Since you also added IOMMU_DEV_FEAT_HWDBM, I am wondering what's the
> difference between
>
> iommu_switch_dirty_log(on) vs. iommu_dev_enable_feature(IOMMU_DEV_FEAT_HWDBM)
>
> iommu_switch_dirty_log(off) vs. iommu_dev_disable_feature(IOMMU_DEV_FEAT_HWDBM)
Indeed. As far as I can see, IOMMU_DEV_FEAT_AUX is not switchable, so enable/disable
do not apply to it, while IOMMU_DEV_FEAT_SVA is switchable, so those interfaces can
be used for it.
IOMMU_DEV_FEAT_HWDBM only indicates whether the hardware supports dirty bit
management (HWDBM), so we should design it as not switchable. I will modify the
commit message of patch #12, thanks!
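
To illustrate the intended split, here is a rough, untested sketch of how a
caller might combine the two: iommu_dev_has_feature() as a one-shot capability
query, and the new interfaces for the actual tracking. The prot value and
iommu_clear_dirty_log()'s signature (assumed to mirror iommu_sync_dirty_log()'s)
are illustrative assumptions, not part of this patch:

/* Hypothetical caller: probe HWDBM once, then drive switch/sync/clear. */
static int start_dirty_tracking(struct device *dev,
				struct iommu_domain *domain,
				unsigned long iova, size_t size)
{
	/* FEAT_HWDBM is query-only: no enable/disable, just a capability test */
	if (!iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_HWDBM))
		return -ENODEV;

	/* Start hardware dirty log tracking over [iova, iova + size) */
	return iommu_switch_dirty_log(domain, true, iova, size,
				      IOMMU_READ | IOMMU_WRITE);
}

static int harvest_dirty_log(struct iommu_domain *domain,
			     unsigned long iova, size_t size,
			     unsigned long *bitmap, unsigned long pgshift)
{
	int ret;

	/* Pull hardware dirty bits into the caller's bitmap... */
	ret = iommu_sync_dirty_log(domain, iova, size, bitmap, iova, pgshift);
	if (ret)
		return ret;

	/* ...then clear the harvested bits so the next round starts clean */
	return iommu_clear_dirty_log(domain, iova, size, bitmap, iova, pgshift);
}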
>
>> +
>> +int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova,
>> + size_t size, unsigned long *bitmap,
>> + unsigned long base_iova, unsigned long bitmap_pgshift)
>> +{
>> + const struct iommu_ops *ops = domain->ops;
>> + unsigned int min_pagesz;
>> + size_t pgsize;
>> + int ret = 0;
>> +
>> + if (unlikely(!ops || !ops->sync_dirty_log))
>> + return -ENODEV;
>> +
>> + min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
>> + if (!IS_ALIGNED(iova | size, min_pagesz)) {
>> + pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n",
>> + iova, size, min_pagesz);
>> + return -EINVAL;
>> + }
>> +
>> + mutex_lock(&domain->switch_log_lock);
>> + if (!domain->dirty_log_tracking) {
>> + ret = -EINVAL;
>> + goto out;
>> + }
>> +
>> + while (size) {
>> + pgsize = iommu_pgsize(domain, iova, size);
>> +
>> + ret = ops->sync_dirty_log(domain, iova, pgsize,
>> + bitmap, base_iova, bitmap_pgshift);
>
> Any reason why you want to do this in a per-4K-page manner? This can
> lead to a lot of indirect calls and bad performance.
>
> How about a sync_dirty_pages()?
The name iommu_pgsize() is a bit misleading. It actually computes the largest
supported page size that fits the remaining size and alignment, so pgsize can be
a large page size even if the underlying mapping is 4K. __iommu_unmap() has
similar logic.
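
For reference, a simplified sketch of that selection logic (paraphrased from
iommu_pgsize() in drivers/iommu/iommu.c; details may differ in the exact tree
this series is based on):

static size_t iommu_pgsize(struct iommu_domain *domain,
			   unsigned long addr_merge, size_t size)
{
	unsigned int pgsize_idx;
	size_t pgsize;

	/* Max page size that still fits into 'size' */
	pgsize_idx = __fls(size);

	/* Address alignment may further limit the page size */
	if (likely(addr_merge)) {
		unsigned int align_pgsize_idx = __ffs(addr_merge);

		pgsize_idx = min(pgsize_idx, align_pgsize_idx);
	}

	/* Build a mask of acceptable page sizes... */
	pgsize = (1UL << (pgsize_idx + 1)) - 1;

	/* ...and throw away sizes the hardware does not support */
	pgsize &= domain->pgsize_bitmap;

	/* Pick the biggest remaining page size */
	pgsize_idx = __fls(pgsize);

	return 1UL << pgsize_idx;
}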
BRs,
Keqian