Message-ID: <20200210032512.GY8965@MiWiFi-R3L-srv>
Date: Mon, 10 Feb 2020 11:25:12 +0800
From: Baoquan He <bhe@...hat.com>
To: Wei Yang <richardw.yang@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, dan.j.williams@...el.com,
david@...hat.com
Subject: Re: [PATCH 1/7] mm/sparse.c: Introduce new function fill_subsection_map()
On 02/10/20 at 07:05am, Wei Yang wrote:
> >-static struct page * __meminit section_activate(int nid, unsigned long pfn,
> >- unsigned long nr_pages, struct vmem_altmap *altmap)
> >+/**
> >+ * fill_subsection_map - fill subsection map of a memory region
> >+ * @pfn - start pfn of the memory range
> >+ * @nr_pages - number of pfns to add in the region
> >+ *
> >+ * This clears the related subsection map inside one section, and only
>
> s/clears/fills/ ?
Good catch, thanks for your careful review.
I will wait a while to see if there's any input from other reviewers,
then update the patch accordingly in one go.
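
For reference, with only that wording fix applied, the kernel-doc
header would read roughly as below (everything else unchanged from the
posted patch):

/**
 * fill_subsection_map - fill subsection map of a memory region
 * @pfn - start pfn of the memory range
 * @nr_pages - number of pfns to add in the region
 *
 * This fills the related subsection map inside one section, and only
 * intended for hotplug.
 *
 * Return:
 * * 0 - On success.
 * * -EINVAL - Invalid memory region.
 * * -EEXIST - Subsection map has been set.
 */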
>
> >+ * intended for hotplug.
> >+ *
> >+ * Return:
> >+ * * 0 - On success.
> >+ * * -EINVAL - Invalid memory region.
> >+ * * -EEXIST - Subsection map has been set.
> >+ */
> >+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
> > {
> >- DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
> > struct mem_section *ms = __pfn_to_section(pfn);
> >- struct mem_section_usage *usage = NULL;
> >+ DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
> > unsigned long *subsection_map;
> >- struct page *memmap;
> > int rc = 0;
> >
> > subsection_mask_set(map, pfn, nr_pages);
> >
> >- if (!ms->usage) {
> >- usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
> >- if (!usage)
> >- return ERR_PTR(-ENOMEM);
> >- ms->usage = usage;
> >- }
> > subsection_map = &ms->usage->subsection_map[0];
> >
> > if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
> >@@ -816,6 +820,25 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
> > bitmap_or(subsection_map, map, subsection_map,
> > SUBSECTIONS_PER_SECTION);
> >
> >+ return rc;
> >+}
> >+
> >+static struct page * __meminit section_activate(int nid, unsigned long pfn,
> >+ unsigned long nr_pages, struct vmem_altmap *altmap)
> >+{
> >+ struct mem_section *ms = __pfn_to_section(pfn);
> >+ struct mem_section_usage *usage = NULL;
> >+ struct page *memmap;
> >+ int rc = 0;
> >+
> >+ if (!ms->usage) {
> >+ usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
> >+ if (!usage)
> >+ return ERR_PTR(-ENOMEM);
> >+ ms->usage = usage;
> >+ }
> >+
> >+ rc = fill_subsection_map(pfn, nr_pages);
> > if (rc) {
> > if (usage)
> > ms->usage = NULL;
> >--
> >2.17.2
>
> --
> Wei Yang
> Help you, Help me
>
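
To illustrate the semantics the kernel-doc describes, below is a minimal
userspace sketch, not the kernel implementation. The constants assume
x86_64 defaults (128M sections, 2M subsections, 4K pages, so 64
subsections per section), and toy_fill_subsection_map() is a made-up
name for this example:

#include <stdint.h>
#include <stdio.h>
#include <errno.h>

#define PAGES_PER_SUBSECTION    512UL  /* 2M subsection / 4K page */
#define SUBSECTIONS_PER_SECTION 64UL   /* 128M section / 2M subsection */

static int toy_fill_subsection_map(uint64_t *map, unsigned long pfn,
                                   unsigned long nr_pages)
{
        unsigned long start, end;
        uint64_t mask;

        if (!nr_pages)          /* empty range, like the bitmap_empty() check */
                return -EINVAL;

        /* Which subsections inside the section does [pfn, pfn + nr_pages) touch? */
        start = (pfn / PAGES_PER_SUBSECTION) % SUBSECTIONS_PER_SECTION;
        end = ((pfn + nr_pages - 1) / PAGES_PER_SUBSECTION) % SUBSECTIONS_PER_SECTION;

        /* Set bits [start, end], roughly what subsection_mask_set() computes. */
        mask = (end == SUBSECTIONS_PER_SECTION - 1 ? ~0ULL : ((1ULL << (end + 1)) - 1)) &
               ~((1ULL << start) - 1);

        if (*map & mask)        /* overlaps subsections already present */
                return -EEXIST;

        *map |= mask;           /* fill the map, like the bitmap_or() in the patch */
        return 0;
}

int main(void)
{
        uint64_t map = 0;

        printf("first add  : %d\n", toy_fill_subsection_map(&map, 0, 2048));
        printf("overlap add: %d\n", toy_fill_subsection_map(&map, 1024, 1024));
        return 0;
}

The second call fails with -EEXIST because pfns 1024-2047 fall in
subsections the first call already marked, which is the error case the
patch's kernel-doc documents.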