Message-Id: <1461084994-2355-8-git-send-email-eric.auger@linaro.org>
Date: Tue, 19 Apr 2016 16:56:31 +0000
From: Eric Auger <eric.auger@...aro.org>
To: eric.auger@...com, eric.auger@...aro.org, robin.murphy@....com,
alex.williamson@...hat.com, will.deacon@....com, joro@...tes.org,
tglx@...utronix.de, jason@...edaemon.net, marc.zyngier@....com,
christoffer.dall@...aro.org, linux-arm-kernel@...ts.infradead.org
Cc: patches@...aro.org, linux-kernel@...r.kernel.org,
Bharat.Bhushan@...escale.com, pranav.sawargaonkar@...il.com,
p.fedin@...sung.com, iommu@...ts.linux-foundation.org,
Jean-Philippe.Brucker@....com, julien.grall@....com
Subject: [PATCH v7 07/10] iommu/dma-reserved-iommu: delete bindings in iommu_free_reserved_iova_domain
Now that reserved bindings can exist, destroy them when destroying
the reserved iova domain. iommu_unmap must not be called in atomic
context, hence the extra complexity in the locking.
Signed-off-by: Eric Auger <eric.auger@...aro.org>
---
v6 -> v7:
- remove [PATCH v6 7/7] dma-reserved-iommu: iommu_unmap_reserved and
destroy the bindings in iommu_free_reserved_iova_domain
v5 -> v6:
- use spin_lock instead of mutex
v3 -> v4:
- previously "iommu/arm-smmu: relinquish reserved resources on
domain deletion"
---
drivers/iommu/dma-reserved-iommu.c | 34 ++++++++++++++++++++++++++++------
1 file changed, 28 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index 426d339..2522235 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -157,14 +157,36 @@ void iommu_free_reserved_iova_domain(struct iommu_domain *domain)
unsigned long flags;
int ret = 0;
- spin_lock_irqsave(&domain->reserved_lock, flags);
-
- rid = (struct reserved_iova_domain *)domain->reserved_iova_cookie;
- if (!rid) {
- ret = -EINVAL;
- goto unlock;
+ while (1) {
+ struct iommu_reserved_binding *b;
+ struct rb_node *node;
+ dma_addr_t iova;
+ size_t size;
+
+ spin_lock_irqsave(&domain->reserved_lock, flags);
+
+ rid = (struct reserved_iova_domain *)
+ domain->reserved_iova_cookie;
+ if (!rid) {
+ ret = -EINVAL;
+ goto unlock;
+ }
+
+ node = rb_first(&domain->reserved_binding_list);
+ if (!node)
+ break;
+ b = rb_entry(node, struct iommu_reserved_binding, node);
+
+ iova = b->iova;
+ size = b->size;
+
+ while (!kref_put(&b->kref, reserved_binding_release))
+ ;
+ spin_unlock_irqrestore(&domain->reserved_lock, flags);
+ iommu_unmap(domain, iova, size);
}
+ domain->reserved_binding_list = RB_ROOT;
domain->reserved_iova_cookie = NULL;
unlock:
spin_unlock_irqrestore(&domain->reserved_lock, flags);
--
1.9.1