Message-ID: <20210602111055.10480-2-mgurtovoy@nvidia.com>
Date: Wed, 2 Jun 2021 14:10:53 +0300
From: Max Gurtovoy <mgurtovoy@...dia.com>
To: <linux-nvme@...ts.infradead.org>, <dan.j.williams@...el.com>,
<logang@...tatee.com>, <linux-mm@...ck.org>, <hch@....de>
CC: <sagi@...mberg.me>, <david@...hat.com>, <oren@...dia.com>,
<linux-kernel@...r.kernel.org>, <akpm@...ux-foundation.org>,
Max Gurtovoy <mgurtovoy@...dia.com>
Subject: [PATCH 1/3] mm,memory_hotplug: export mhp min alignment
Hotplugged memory has alignment restrictions: all operations smaller
than a sub-section are disallowed, and operations smaller than a full
section are only allowed for SPARSEMEM_VMEMMAP. Export the minimum
alignment for mhp users.
Signed-off-by: Max Gurtovoy <mgurtovoy@...dia.com>
---
include/linux/memory_hotplug.h | 5 +++++
mm/memory_hotplug.c | 33 +++++++++++++++++++--------------
2 files changed, 24 insertions(+), 14 deletions(-)
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 28f32fd00fe9..c55a9049b11e 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -76,6 +76,7 @@ struct mhp_params {
bool mhp_range_allowed(u64 start, u64 size, bool need_mapping);
struct range mhp_get_pluggable_range(bool need_mapping);
+unsigned long mhp_get_min_align(void);
/*
* Zone resizing functions
@@ -248,6 +249,10 @@ void mem_hotplug_done(void);
___page; \
})
+static inline unsigned long mhp_get_min_align(void)
+{
+ return 0;
+}
static inline unsigned zone_span_seqbegin(struct zone *zone)
{
return 0;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9e86e9ee0a10..161bb6704a9b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -270,24 +270,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
}
#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
+/*
+ * Disallow all operations smaller than a sub-section and only
+ * allow operations smaller than a section for
+ * SPARSEMEM_VMEMMAP. Note that check_hotplug_memory_range()
+ * enforces a larger memory_block_size_bytes() granularity for
+ * memory that will be marked online, so this check should only
+ * fire for direct arch_{add,remove}_memory() users outside of
+ * add_memory_resource().
+ */
+unsigned long mhp_get_min_align(void)
+{
+ if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
+ return PAGES_PER_SUBSECTION;
+ return PAGES_PER_SECTION;
+}
+EXPORT_SYMBOL_GPL(mhp_get_min_align);
+
+
static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
const char *reason)
{
- /*
- * Disallow all operations smaller than a sub-section and only
- * allow operations smaller than a section for
- * SPARSEMEM_VMEMMAP. Note that check_hotplug_memory_range()
- * enforces a larger memory_block_size_bytes() granularity for
- * memory that will be marked online, so this check should only
- * fire for direct arch_{add,remove}_memory() users outside of
- * add_memory_resource().
- */
- unsigned long min_align;
+ unsigned long min_align = mhp_get_min_align();
- if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
- min_align = PAGES_PER_SUBSECTION;
- else
- min_align = PAGES_PER_SECTION;
if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align)) {
WARN(1, "Misaligned __%s_pages min_align: %#lx start: %#lx end: %#lx\n",
reason, min_align, pfn, pfn + nr_pages - 1);
--
2.18.1