Message-Id: <1442715296-2649-1-git-send-email-weiyang@linux.vnet.ibm.com>
Date:	Sun, 20 Sep 2015 10:14:56 +0800
From:	Wei Yang <weiyang@...ux.vnet.ibm.com>
To:	akinobu.mita@...il.com, davem@...emloft.net,
	benh@...nel.crashing.org, paulus@...ba.org
Cc:	linux-kernel@...r.kernel.org, Wei Yang <weiyang@...ux.vnet.ibm.com>
Subject: [RFC PATCH] iommu: enable the last bit in iommu_area_alloc()

Since commit a66022c45775 ("iommu-helper: use bitmap library"),
iommu_area_alloc() has used bitmap_find_next_zero_area() to look up
available iommu space.

When given "start, size, nr", bitmap_find_next_zero_area() is looking for a
range with nr zero bit in [start, size) instead of [start, size]. This
means the last bit is already excluded. By decrease size at the beginning,
the last iommu page will not be allocated.
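
To illustrate the semantics, here is a minimal user-space sketch (not
the kernel implementation; find_zero_area() is a made-up stand-in):

#include <stdio.h>

/* Simplified stand-in for bitmap_find_next_zero_area(): return the
 * first index >= start such that the nr bits [index, index + nr) are
 * all zero and index + nr <= size; return size on failure. */
static unsigned long find_zero_area(const unsigned char *map,
				    unsigned long size,
				    unsigned long start,
				    unsigned long nr)
{
	unsigned long i, j;

	for (i = start; i + nr <= size; i++) {
		for (j = 0; j < nr; j++)
			if (map[i + j])
				break;
		if (j == nr)
			return i;
	}
	return size;
}

int main(void)
{
	/* bits 0..6 allocated, bit 7 (the last page) free */
	unsigned char map[8] = { 1, 1, 1, 1, 1, 1, 1, 0 };

	printf("size = 8: %lu\n", find_zero_area(map, 8, 0, 1)); /* 7: found   */
	printf("size = 7: %lu\n", find_zero_area(map, 7, 0, 1)); /* 7: failure */
	return 0;
}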

This patch removes the decrement of size.

Signed-off-by: Wei Yang <weiyang@...ux.vnet.ibm.com>

---

I may have missed something, but the code confuses me a little.

I found two users of iommu_area_alloc(): one on the powernv platform and
one in lib/iommu-common.c. In both cases the "limit" passed to
bitmap_find_next_zero_area() as the "size" is set to pool->end, but
pool->end is calculated differently in the two cases.

On the powernv platform, iommu_init_table() sets
	p->end = p->start + tbl->poolsize;
while in iommu_tbl_pool_init(),
	pools[i].end = pools[i].start + iommu->poolsize -1;

In both cases it does no harm to the system, except that we get one
fewer iommu page in the second case.

And then, in the current code, iommu_area_alloc() decreases the size by
one again.
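
To make the arithmetic concrete, here is a quick sketch with
hypothetical numbers (start = 0, poolsize = 1024); since the search
covers [start, size), the highest single-page index it can return is
size - 1:

#include <stdio.h>

int main(void)
{
	unsigned long start = 0, poolsize = 1024;	/* hypothetical */
	unsigned long powernv_end = start + poolsize;		/* 1024 */
	unsigned long common_end  = start + poolsize - 1;	/* 1023 */

	/* before this patch, iommu_area_alloc() does "size -= 1" */
	printf("powernv,      old: highest page %lu\n", powernv_end - 1 - 1); /* 1022 */
	printf("iommu-common, old: highest page %lu\n", common_end - 1 - 1);  /* 1021 */

	/* with this patch applied */
	printf("powernv,      new: highest page %lu\n", powernv_end - 1);     /* 1023 */
	printf("iommu-common, new: highest page %lu\n", common_end - 1);      /* 1022 */
	return 0;
}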

I may have missed something, but currently I think the implementation on
the powernv platform is correct, while iommu_area_alloc() eats the last
iommu page.

Tests:

I applied this patch on top of v4.2 and forced each device to use
dma_iommu_ops. I transferred a 4GB guest image by scp and checked that
the checksums matched.

---
 lib/iommu-helper.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/lib/iommu-helper.c b/lib/iommu-helper.c
index c27e269..2866004 100644
--- a/lib/iommu-helper.c
+++ b/lib/iommu-helper.c
@@ -23,8 +23,6 @@ unsigned long iommu_area_alloc(unsigned long *map, unsigned long size,
 {
 	unsigned long index;
 
-	/* We don't want the last of the limit */
-	size -= 1;
 again:
 	index = bitmap_find_next_zero_area(map, size, start, nr, align_mask);
 	if (index < size) {
-- 
2.5.0
