Message-ID: <1557357332.3028.42.camel@suse.de>
Date: Thu, 09 May 2019 01:15:32 +0200
From: Oscar Salvador <osalvador@...e.de>
To: Dan Williams <dan.j.williams@...el.com>, akpm@...ux-foundation.org
Cc: Michal Hocko <mhocko@...e.com>, Vlastimil Babka <vbabka@...e.cz>,
Logan Gunthorpe <logang@...tatee.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
linux-nvdimm@...ts.01.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 09/12] mm/sparsemem: Support sub-section hotplug
On Mon, 2019-05-06 at 16:40 -0700, Dan Williams wrote:
> @@ -741,49 +895,31 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
>  		unsigned long nr_pages, struct vmem_altmap *altmap)
>  {
>  	unsigned long section_nr = pfn_to_section_nr(start_pfn);
> -	struct mem_section_usage *usage;
>  	struct mem_section *ms;
>  	struct page *memmap;
>  	int ret;
I already pointed this out in v7, but:
>
> -	/*
> -	 * no locking for this, because it does its own
> -	 * plus, it does a kmalloc
> -	 */
>  	ret = sparse_index_init(section_nr, nid);
>  	if (ret < 0 && ret != -EEXIST)
>  		return ret;
sparse_index_init() only returns either -ENOMEM or 0, so the check
above can simply be "if (ret)" (or "if (ret < 0)").
> -	ret = 0;
> -	memmap = populate_section_memmap(start_pfn, PAGES_PER_SECTION, nid,
> -			altmap);
> -	if (!memmap)
> -		return -ENOMEM;
> -	usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
> -	if (!usage) {
> -		depopulate_section_memmap(start_pfn, PAGES_PER_SECTION, altmap);
> -		return -ENOMEM;
> -	}
>
> -	ms = __pfn_to_section(start_pfn);
> -	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> -		ret = -EEXIST;
> -		goto out;
> -	}
> +	memmap = section_activate(nid, start_pfn, nr_pages, altmap);
> +	if (IS_ERR(memmap))
> +		return PTR_ERR(memmap);
> +	ret = 0;
If we got here, sparse_index_init() must have returned 0, so ret
already contains 0; the assignment can be removed.
>
>  	/*
>  	 * Poison uninitialized struct pages in order to catch invalid flags
>  	 * combinations.
>  	 */
> -	page_init_poison(memmap, sizeof(struct page) * PAGES_PER_SECTION);
> +	page_init_poison(pfn_to_page(start_pfn), sizeof(struct page) * nr_pages);
>
> +	ms = __pfn_to_section(start_pfn);
>  	section_mark_present(ms);
> -	sparse_init_one_section(ms, section_nr, memmap, usage);
> +	sparse_init_one_section(ms, section_nr, memmap, ms->usage);
>
> -out:
> -	if (ret < 0) {
> -		kfree(usage);
> -		depopulate_section_memmap(start_pfn, PAGES_PER_SECTION, altmap);
> -	}
> +	if (ret < 0)
> +		section_deactivate(start_pfn, nr_pages, nid, altmap);
AFAICS, ret is only set by the return value of sparse_index_init(), so
we can never reach this point with ret different from 0.
The two lines above can be removed as well.
I will soon start reviewing the patches from this version that still
lack review.
--
Oscar Salvador
SUSE L3