Message-Id: <20170310043713.96871-1-richard.weiyang@gmail.com>
Date: Fri, 10 Mar 2017 12:37:13 +0800
From: Wei Yang <richard.weiyang@...il.com>
To: akpm@...ux-foundation.org, tj@...nel.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Wei Yang <richard.weiyang@...il.com>
Subject: [PATCH] mm/sparse: refine usemap_size() a little
The current implementation calculates usemap_size() in two steps:

* calculate the number of bytes needed to cover these bits
* calculate the number of "unsigned long" needed to cover these bytes

It is clearer to:

* calculate the number of "unsigned long" needed to cover these bits
* multiply it by sizeof(unsigned long)

This patch refines usemap_size() a little to make it easier to
understand.
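
For illustration only (not part of the patch): a minimal userspace sketch
checking that the one-step form matches the old two-step form. roundup()
and BITS_TO_LONGS() are re-defined locally to mirror the kernel macros,
and a range of bit counts stands in for SECTION_BLOCKFLAGS_BITS.

#include <stdio.h>
#include <assert.h>

#define BITS_PER_LONG		(8 * sizeof(unsigned long))
/* round x up to the next multiple of y (local stand-in for the kernel macro) */
#define roundup(x, y)		((((x) + (y) - 1) / (y)) * (y))
/* local stand-in for the kernel's BITS_TO_LONGS() */
#define BITS_TO_LONGS(nr)	(((nr) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* old two-step calculation: bits -> bytes -> whole unsigned longs */
static unsigned long old_usemap_size(unsigned long bits)
{
	unsigned long size_bytes;
	size_bytes = roundup(bits, 8) / 8;
	size_bytes = roundup(size_bytes, sizeof(unsigned long));
	return size_bytes;
}

/* new one-step calculation: longs needed for the bits, expressed in bytes */
static unsigned long new_usemap_size(unsigned long bits)
{
	return BITS_TO_LONGS(bits) * sizeof(unsigned long);
}

int main(void)
{
	unsigned long bits;

	for (bits = 1; bits <= 4096; bits++)
		assert(old_usemap_size(bits) == new_usemap_size(bits));

	printf("old == new for 1..4096 bits, e.g. 256 bits -> %lu bytes\n",
	       new_usemap_size(256));
	return 0;
}

Both forms compute ceil(bits / BITS_PER_LONG) * sizeof(unsigned long);
the one-step form just says so directly.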
Signed-off-by: Wei Yang <richard.weiyang@...il.com>
---
mm/sparse.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/sparse.c b/mm/sparse.c
index a0792526adfa..faa36ef9f9bd 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -249,10 +249,7 @@ static int __meminit sparse_init_one_section(struct mem_section *ms,
 
 unsigned long usemap_size(void)
 {
-	unsigned long size_bytes;
-	size_bytes = roundup(SECTION_BLOCKFLAGS_BITS, 8) / 8;
-	size_bytes = roundup(size_bytes, sizeof(unsigned long));
-	return size_bytes;
+	return BITS_TO_LONGS(SECTION_BLOCKFLAGS_BITS) * sizeof(unsigned long);
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
--
2.11.0