Message-ID: <1495094397-9132-6-git-send-email-thunder.leizhen@huawei.com>
Date: Thu, 18 May 2017 15:59:56 +0800
From: Zhen Lei <thunder.leizhen@...wei.com>
To: Joerg Roedel <joro@...tes.org>,
iommu <iommu@...ts.linux-foundation.org>,
Robin Murphy <robin.murphy@....com>,
David Woodhouse <dwmw2@...radead.org>,
Sudeep Dutt <sudeep.dutt@...el.com>,
Ashutosh Dixit <ashutosh.dixit@...el.com>,
linux-kernel <linux-kernel@...r.kernel.org>
CC: Zefan Li <lizefan@...wei.com>, Xinwei Hu <huxinwei@...wei.com>,
"Tianhong Ding" <dingtianhong@...wei.com>,
Hanjun Guo <guohanjun@...wei.com>,
Zhen Lei <thunder.leizhen@...wei.com>
Subject: [PATCH v3 5/6] iommu/iova: move the calculation of pad mask out of loop
I'm not sure whether the compiler can optimize this, but moving it out of
the loop is better. At the very least, the computation no longer needs to be
done under the lock.
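
To illustrate the intent (this is only a stand-alone user-space sketch with
made-up values, not the kernel code; roundup_pow_of_two_ul() here stands in
for the kernel's __roundup_pow_of_two()): the mask depends only on "size",
which is constant across the rbtree walk, so it can be computed once before
taking the lock, leaving just a subtract and an AND per candidate.

#include <stdbool.h>
#include <stdio.h>

/* Simple portable stand-in for the kernel's __roundup_pow_of_two(). */
static unsigned long roundup_pow_of_two_ul(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	/* Hypothetical example values. */
	unsigned long size = 5, limit_pfn = 0xfffff;
	bool size_aligned = true;
	unsigned long pad_mask = 0, pad_size = 0;

	/* Hoisted out of the loop: depends only on "size". */
	if (size_aligned)
		pad_mask = roundup_pow_of_two_ul(size) - 1;

	/* Inside the (simulated) walk, the per-candidate work is cheap. */
	if (size_aligned)
		pad_size = (limit_pfn + 1 - size) & pad_mask;

	printf("pad_mask=%#lx pad_size=%lu\n", pad_mask, pad_size);
	return 0;
}
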
Signed-off-by: Zhen Lei <thunder.leizhen@...wei.com>
---
drivers/iommu/iova.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 711b10a..338930b 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -155,23 +155,16 @@ iova_insert_rbtree(struct rb_root *root, struct iova *iova,
rb_insert_color(&iova->node, root);
}
-/*
- * Computes the padding size required, to make the start address
- * naturally aligned on the power-of-two order of its size
- */
-static unsigned int
-iova_get_pad_size(unsigned int size, unsigned int limit_pfn)
-{
- return (limit_pfn + 1 - size) & (__roundup_pow_of_two(size) - 1);
-}
-
static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
unsigned long size, unsigned long limit_pfn,
struct iova *new, bool size_aligned)
{
struct rb_node *prev, *curr;
unsigned long flags;
- unsigned int pad_size = 0;
+ unsigned long pad_mask, pad_size = 0;
+
+ if (size_aligned)
+ pad_mask = __roundup_pow_of_two(size) - 1;
/* Walk the tree backwards */
spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
@@ -185,8 +178,13 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
else if (limit_pfn < curr_iova->pfn_hi)
goto adjust_limit_pfn;
else {
+ /*
+ * Computes the padding size required, to make the start
+ * address naturally aligned on the power-of-two order
+ * of its size
+ */
if (size_aligned)
- pad_size = iova_get_pad_size(size, limit_pfn);
+ pad_size = (limit_pfn + 1 - size) & pad_mask;
if ((curr_iova->pfn_hi + size + pad_size) <= limit_pfn)
break; /* found a free slot */
}
--
2.5.0