Message-ID: <20201210073425.25960-5-zhukeqian1@huawei.com>
Date: Thu, 10 Dec 2020 15:34:22 +0800
From: Keqian Zhu <zhukeqian1@...wei.com>
To: <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<iommu@...ts.linux-foundation.org>, <kvm@...r.kernel.org>,
<kvmarm@...ts.cs.columbia.edu>,
Alex Williamson <alex.williamson@...hat.com>,
Cornelia Huck <cohuck@...hat.com>,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>
CC: Joerg Roedel <joro@...tes.org>,
Catalin Marinas <catalin.marinas@....com>,
James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Mark Brown <broonie@...nel.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexios Zavras <alexios.zavras@...el.com>,
<wanghaibin.wang@...wei.com>, <jiangkunkun@...wei.com>,
Keqian Zhu <zhukeqian1@...wei.com>
Subject: [PATCH 4/7] vfio: iommu_type1: Fix missing dirty pages when promoting pinned_scope

When we pin or detach a group which is not capable of dirty tracking,
we try to promote the pinned_scope of the vfio_iommu. If the promotion
succeeds, vfio will from then on report only pinned pages as dirty to
userspace, so any memory written before the pin or detach is missed.

Fix this by marking all IOMMU-mapped DMA ranges as dirty before
promoting the pinned_scope of the vfio_iommu.

Signed-off-by: Keqian Zhu <zhukeqian1@...wei.com>
---
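[Note for reviewers, kept below the "---" so it stays out of the commit
message: the following is a minimal standalone sketch, not kernel code,
of how vfio_populate_bitmap_all() turns a mapping size into a count of
dirty bits. The page-size bitmap and mapping size are hypothetical
values, and __ffs() is modeled here with __builtin_ctzl().]

/*
 * Standalone sketch (hypothetical values, not part of the patch):
 * derive the page shift from the lowest supported page size and
 * compute how many bits of a per-vfio_dma bitmap must be set.
 */
#include <stdio.h>

int main(void)
{
	unsigned long pgsize_bitmap = 1UL << 12;   /* only 4K pages supported */
	unsigned long pgshift = __builtin_ctzl(pgsize_bitmap); /* as __ffs(): 12 */
	unsigned long dma_size = 2UL << 20;        /* one hypothetical 2M mapping */
	unsigned long nbits = dma_size >> pgshift; /* 512 pages -> 512 dirty bits */

	printf("pgshift=%lu, nbits=%lu\n", pgshift, nbits);
	return 0;
}
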
 drivers/vfio/vfio_iommu_type1.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bd9a94590ebc..00684597b098 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1633,6 +1633,20 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 	return group;
 }
 
+static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		unsigned long nbits = dma->size >> pgshift;
+
+		if (dma->iommu_mapped)
+			bitmap_set(dma->bitmap, 0, nbits);
+	}
+}
+
 static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
@@ -1657,6 +1671,10 @@ static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 	}
 
 	iommu->pinned_page_dirty_scope = true;
+
+	/* Set all bitmaps to avoid missing dirty pages */
+	if (iommu->dirty_page_tracking)
+		vfio_populate_bitmap_all(iommu);
 }
 
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
--
2.23.0