Message-Id: <20180129123840.638706274@linuxfoundation.org>
Date: Mon, 29 Jan 2018 13:56:39 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Angus Clark <angus@...usclark.org>,
Doug Berger <opendmb@...il.com>,
Gregory Fong <gregory.0xf0@...il.com>,
Laura Abbott <labbott@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>,
Lucas Stach <l.stach@...gutronix.de>,
Catalin Marinas <catalin.marinas@....com>,
Shiraz Hashim <shashim@...eaurora.org>,
Jaewon Kim <jaewon31.kim@...sung.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 4.9 15/66] cma: fix calculation of aligned offset
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Doug Berger <opendmb@...il.com>
commit e048cb32f69038aa1c8f11e5c1b331be4181659d upstream.
The align_offset parameter is used by bitmap_find_next_zero_area_off()
to represent the offset of the map's base from the previous alignment
boundary; the function ensures that the returned index, plus the
align_offset, honors the specified align_mask.

The logic introduced by commit b5be83e308f7 ("mm: cma: align to physical
address, not CMA region position") has the cma driver calculate the
offset to the *next* alignment boundary. In most cases, the base
alignment is greater than that specified when making allocations,
resulting in a zero offset whether we align up or down. In the example
given in that commit, the base alignment (8MB) was half the requested
alignment (16MB), so the math also happened to work since the offset is
8MB in both directions. However, when requesting allocations with an
alignment greater than twice that of the base, the returned index would
not be correctly aligned.

Also, the align_order arguments of cma_bitmap_aligned_mask() and
cma_bitmap_aligned_offset() should not be negative, so the argument type
was made unsigned.
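A minimal userspace sketch of the arithmetic above (not part of this
patch; the page size, base_pfn and order values are assumed, chosen to
mirror the 8MB/16MB example plus a case where the requested alignment is
more than twice the base alignment):

#include <stdio.h>

#define ALIGN(x, a) (((x) + ((a) - 1)) & ~((a) - 1))

/* Old calculation: distance up to the *next* alignment boundary. */
static unsigned long old_offset(unsigned long base_pfn, unsigned int align_order)
{
	return ALIGN(base_pfn, 1UL << align_order) - base_pfn;
}

/* New calculation: distance down from the *previous* alignment boundary. */
static unsigned long new_offset(unsigned long base_pfn, unsigned int align_order)
{
	return base_pfn & ((1UL << align_order) - 1);
}

int main(void)
{
	/* Assuming 4KB pages and order_per_bit == 0: an 8MB base is PFN 0x800,
	 * a 16MB alignment is order 12, a 32MB alignment is order 13. */
	unsigned long base_pfn = 0x800;

	/* 16MB request: both formulas give 2048 pages (8MB), so the old code
	 * happened to work here. */
	printf("16MB align: old=%lu new=%lu\n",
	       old_offset(base_pfn, 12), new_offset(base_pfn, 12));

	/* 32MB request (more than twice the 8MB base alignment): the old
	 * formula gives 6144 pages (24MB), the new one 2048 (8MB); only the
	 * latter leads to correctly aligned allocations. */
	printf("32MB align: old=%lu new=%lu\n",
	       old_offset(base_pfn, 13), new_offset(base_pfn, 13));
	return 0;
}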
Fixes: b5be83e308f7 ("mm: cma: align to physical address, not CMA region position")
Link: http://lkml.kernel.org/r/20170628170742.2895-1-opendmb@gmail.com
Signed-off-by: Angus Clark <angus@...usclark.org>
Signed-off-by: Doug Berger <opendmb@...il.com>
Acked-by: Gregory Fong <gregory.0xf0@...il.com>
Cc: Doug Berger <opendmb@...il.com>
Cc: Angus Clark <angus@...usclark.org>
Cc: Laura Abbott <labbott@...hat.com>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Lucas Stach <l.stach@...gutronix.de>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Shiraz Hashim <shashim@...eaurora.org>
Cc: Jaewon Kim <jaewon31.kim@...sung.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/cma.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -54,7 +54,7 @@ unsigned long cma_get_size(const struct
}
static unsigned long cma_bitmap_aligned_mask(const struct cma *cma,
- int align_order)
+ unsigned int align_order)
{
if (align_order <= cma->order_per_bit)
return 0;
@@ -62,17 +62,14 @@ static unsigned long cma_bitmap_aligned_
}
/*
- * Find a PFN aligned to the specified order and return an offset represented in
- * order_per_bits.
+ * Find the offset of the base PFN from the specified align_order.
+ * The value returned is represented in order_per_bits.
*/
static unsigned long cma_bitmap_aligned_offset(const struct cma *cma,
- int align_order)
+ unsigned int align_order)
{
- if (align_order <= cma->order_per_bit)
- return 0;
-
- return (ALIGN(cma->base_pfn, (1UL << align_order))
- - cma->base_pfn) >> cma->order_per_bit;
+ return (cma->base_pfn & ((1UL << align_order) - 1))
+ >> cma->order_per_bit;
}
static unsigned long cma_bitmap_pages_to_bits(const struct cma *cma,
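For context, cma_alloc() passes this offset, together with the mask from
cma_bitmap_aligned_mask(), to bitmap_find_next_zero_area_off(), which only
returns indices where (index + offset) honors the mask. A small standalone
check (assumed values, order_per_bit taken as 0; not part of this patch) of
why masking down to the previous boundary yields aligned PFNs:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical region: 4KB pages, base at 8MB (PFN 0x800), 32MB request. */
	unsigned long base_pfn = 0x800;
	unsigned int align_order = 13;
	unsigned long mask = (1UL << align_order) - 1;  /* cma_bitmap_aligned_mask() */
	unsigned long offset = base_pfn & mask;         /* new cma_bitmap_aligned_offset() */
	unsigned long bitmap_no;

	/* bitmap_find_next_zero_area_off() only returns indices for which
	 * (index + offset) & mask == 0; every such index must map back to a
	 * PFN that is a multiple of 1 << align_order. */
	for (bitmap_no = 0; bitmap_no < (1UL << 16); bitmap_no++) {
		if (((bitmap_no + offset) & mask) == 0)
			assert(((base_pfn + bitmap_no) & mask) == 0);
	}
	printf("every index honoring the mask maps to an aligned PFN\n");
	return 0;
}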