Message-id: <000301cfe430$504b0290$f0e107b0$%yang@samsung.com>
Date: Fri, 10 Oct 2014 10:15:53 +0800
From: Weijie Yang <weijie.yang@...sung.com>
To: iamjoonsoo.kim@....com
Cc: mina86@...a86.com, aneesh.kumar@...ux.vnet.ibm.com,
m.szyprowski@...sung.com,
'Andrew Morton' <akpm@...ux-foundation.org>,
'linux-kernel' <linux-kernel@...r.kernel.org>,
'Linux-MM' <linux-mm@...ck.org>
Subject: [PATCH] mm/cma: fix cma bitmap aligned mask computing
The current computation of the cma bitmap aligned mask is incorrect: it can
produce an unexpected alignment from cma_alloc() whenever the requested align
order is larger than cma->order_per_bit.
Take kvm as an example (PAGE_SHIFT = 12): kvm_cma->order_per_bit is set to 6,
and when kvm_alloc_rma() tries to allocate kvm_rma_pages it passes 15 as the
expected align order. With the current computation, however, the cma bitmap
aligned mask comes out as 0 instead of 511.
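For illustration, here is a minimal userspace sketch (not kernel code; the
helper names mask_old/mask_new are made up for the demo) that reproduces both
formulas with the kvm values above. The bitmap tracks chunks of
2^order_per_bit pages, so the mask needs (align_order - order_per_bit) low
bits set; the buggy version shifts instead of subtracting, and 15 >> 6 == 0
collapses the mask to (1UL << 0) - 1 == 0:

#include <stdio.h>

/* Buggy version: ">>" shifts the align order instead of subtracting,
 * so mask_old(15, 6) evaluates to (1UL << (15 >> 6)) - 1 == 0. */
static unsigned long mask_old(int align_order, int order_per_bit)
{
	return (1UL << (align_order >> order_per_bit)) - 1;
}

/* Fixed version: subtract order_per_bit, clamping to 0 when the
 * requested alignment is already no finer than one bitmap bit. */
static unsigned long mask_new(int align_order, int order_per_bit)
{
	if (align_order <= order_per_bit)
		return 0;
	return (1UL << (align_order - order_per_bit)) - 1;
}

int main(void)
{
	printf("old: %lu\n", mask_old(15, 6));	/* prints 0   */
	printf("new: %lu\n", mask_new(15, 6));	/* prints 511 */
	return 0;
}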
This patch fixes the computation of the cma bitmap aligned mask.
Signed-off-by: Weijie Yang <weijie.yang@...sung.com>
---
mm/cma.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/cma.c b/mm/cma.c
index c17751c..f6207ef 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -57,7 +57,10 @@ unsigned long cma_get_size(struct cma *cma)
 
 static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
 {
-	return (1UL << (align_order >> cma->order_per_bit)) - 1;
+	if (align_order <= cma->order_per_bit)
+		return 0;
+	else
+		return (1UL << (align_order - cma->order_per_bit)) - 1;
 }
 
 static unsigned long cma_bitmap_maxno(struct cma *cma)
--
1.7.10.4