Message-Id: <20190212021215.13247-4-richardw.yang@linux.intel.com>
Date: Tue, 12 Feb 2019 10:12:12 +0800
From: Wei Yang <richardw.yang@...ux.intel.com>
To: x86@...nel.org, linux-kernel@...r.kernel.org
Cc: dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
tglx@...utronix.de, Wei Yang <richardw.yang@...ux.intel.com>
Subject: [PATCH 3/6] x86, mm: add comment for split_mem_range to help understanding

Describe the possible ranges produced by the split and give each range a
name, to help readers understand the logic.

This also prepares for a code refinement illustrated in a later patch.
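
For example (illustrative values only), on x86_64 a range like
[0x1ff000, 0x80201000) would be split as:

    (A) [0x1ff000,   0x200000)    4K pages
    (B) [0x200000,   0x40000000)  2M pages
    (C) [0x40000000, 0x80000000)  1G pages
    (D) [0x80000000, 0x80200000)  2M pages
    (E) [0x80200000, 0x80201000)  4K pages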
Signed-off-by: Wei Yang <richardw.yang@...ux.intel.com>
---
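Not part of the patch, just an aid for review: a minimal userspace sketch
that mimics the CONFIG_X86_64 path of split_mem_range(), assuming the
usual x86_64 sizes (4K pages, 2M PMD, 1G PUD). PFN_DOWN(), round_up() and
round_down() here are simplified stand-ins for the kernel macros, and the
start/end values are the arbitrary ones from the changelog:

#include <stdio.h>

#define PAGE_SHIFT      12
#define PMD_SIZE        (1UL << 21)     /* 2M */
#define PUD_SIZE        (1UL << 30)     /* 1G */
#define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)

/* simplified stand-ins; alignments are powers of two */
static unsigned long round_up(unsigned long x, unsigned long a)
{
        return (x + a - 1) & ~(a - 1);
}

static unsigned long round_down(unsigned long x, unsigned long a)
{
        return x & ~(a - 1);
}

static void show(const char *name, unsigned long s, unsigned long e)
{
        if (s < e)
                printf("(%s): pfn 0x%06lx - 0x%06lx\n", name, s, e);
}

int main(void)
{
        unsigned long pfn = PFN_DOWN(0x1ff000UL);
        unsigned long limit_pfn = PFN_DOWN(0x80201000UL);
        unsigned long start_pfn, end_pfn;

        /* (A) 4K head up to the first 2M boundary */
        start_pfn = pfn;
        end_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
        if (end_pfn > limit_pfn)
                end_pfn = limit_pfn;
        show("A", start_pfn, end_pfn);
        if (start_pfn < end_pfn)
                pfn = end_pfn;

        /* (B) 2M pages up to the first 1G boundary */
        start_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
        end_pfn = round_up(pfn, PFN_DOWN(PUD_SIZE));
        if (end_pfn > round_down(limit_pfn, PFN_DOWN(PMD_SIZE)))
                end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
        show("B", start_pfn, end_pfn);
        if (start_pfn < end_pfn)
                pfn = end_pfn;

        /* (C) 1G pages between 1G-aligned boundaries */
        start_pfn = round_up(pfn, PFN_DOWN(PUD_SIZE));
        end_pfn = round_down(limit_pfn, PFN_DOWN(PUD_SIZE));
        show("C", start_pfn, end_pfn);
        if (start_pfn < end_pfn)
                pfn = end_pfn;

        /* (D) trailing 2M pages below the 2M-aligned limit */
        start_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
        end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
        show("D", start_pfn, end_pfn);
        if (start_pfn < end_pfn)
                pfn = end_pfn;

        /* (E) 4K tail */
        show("E", pfn, limit_pfn);
        return 0;
}

Running it prints the five ranges (A)-(E) from the changelog, one of each
type.
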
arch/x86/mm/init.c | 51 ++++++++++++++++++++++++++++++++++++++++------
1 file changed, 45 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 6fb84be79c7c..2b782dcd6d71 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -328,6 +328,31 @@ static const char *page_size_string(struct map_range *mr)
         return str_4k;
 }
 
+/*
+ * There are 3 types of ranges:
+ *
+ * k : 4K size
+ * m : 2M size
+ * G : 1G size
+ *
+ * 1G size is only valid when CONFIG_X86_64 is set.
+ *
+ * So the possible ranges can be described as below:
+ *
+ * kkkmmmGGGmmmkkk
+ * (A)(B)(C)(D)(E)
+ *
+ * This means there are at most:
+ *
+ * 3 ranges when CONFIG_X86_32 is set
+ * 5 ranges when CONFIG_X86_64 is set
+ *
+ * which corresponds to the definition of NR_RANGE_MR.
+ *
+ * split_mem_range() does the split from low to high. By naming these ranges
+ * A, B, C, D and E respectively, and marking each name in the comments
+ * below, it should be easier to understand how the ranges are split.
+ */
 static int __meminit split_mem_range(struct map_range *mr,
                                      unsigned long start,
                                      unsigned long end)
@@ -338,7 +363,10 @@ static int __meminit split_mem_range(struct map_range *mr,
 
         limit_pfn = PFN_DOWN(end);
 
-        /* head if not big page alignment ? */
+        /*
+         * Range (A):
+         * head if not big page alignment ?
+         */
         pfn = start_pfn = PFN_DOWN(start);
 #ifdef CONFIG_X86_32
         /*
@@ -361,7 +389,10 @@ static int __meminit split_mem_range(struct map_range *mr,
                 pfn = end_pfn;
         }
 
-        /* big page (2M) range */
+        /*
+         * Range (B):
+         * big page (2M) range
+         */
         start_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
 #ifdef CONFIG_X86_32
         end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
@@ -370,7 +401,6 @@ static int __meminit split_mem_range(struct map_range *mr,
         if (end_pfn > round_down(limit_pfn, PFN_DOWN(PMD_SIZE)))
                 end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
 #endif
-
         if (start_pfn < end_pfn) {
                 nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
                                 page_size_mask & (1<<PG_LEVEL_2M));
@@ -378,7 +408,10 @@ static int __meminit split_mem_range(struct map_range *mr,
         }
 
 #ifdef CONFIG_X86_64
-        /* big page (1G) range */
+        /*
+         * Range (C):
+         * big page (1G) range
+         */
         start_pfn = round_up(pfn, PFN_DOWN(PUD_SIZE));
         end_pfn = round_down(limit_pfn, PFN_DOWN(PUD_SIZE));
         if (start_pfn < end_pfn) {
@@ -388,7 +421,10 @@ static int __meminit split_mem_range(struct map_range *mr,
                 pfn = end_pfn;
         }
 
-        /* tail is not big page (1G) alignment */
+        /*
+         * Range (D):
+         * big page (2M) range, tail is not big page (1G) aligned
+         */
         start_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
         end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
         if (start_pfn < end_pfn) {
@@ -398,7 +434,10 @@ static int __meminit split_mem_range(struct map_range *mr,
         }
 #endif
 
-        /* tail is not big page (2M) alignment */
+        /*
+         * Range (E):
+         * tail is not big page (2M) alignment
+         */
         start_pfn = pfn;
         end_pfn = limit_pfn;
         nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
--
2.19.1