Message-Id: <20160210105845.973cecc56906ed950fbdd8ba@linux-foundation.org>
Date: Wed, 10 Feb 2016 10:58:45 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Joonsoo Kim <js1304@...il.com>, Aaron Lu <aaron.lu@...el.com>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
David Rientjes <rientjes@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v2 3/3] mm/compaction: speed up pageblock_pfn_to_page()
when zone is contiguous
On Wed, 10 Feb 2016 14:42:57 +0100 Vlastimil Babka <vbabka@...e.cz> wrote:
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -509,6 +509,8 @@ int __ref __add_pages(int nid, struct zone *zone, unsigned long phys_start_pfn,
> > int start_sec, end_sec;
> > struct vmem_altmap *altmap;
> >
> > + clear_zone_contiguous(zone);
> > +
> > /* during initialize mem_map, align hot-added range to section */
> > start_sec = pfn_to_section_nr(phys_start_pfn);
> > end_sec = pfn_to_section_nr(phys_start_pfn + nr_pages - 1);
> > @@ -540,6 +542,8 @@ int __ref __add_pages(int nid, struct zone *zone, unsigned long phys_start_pfn,
> > }
> > vmemmap_populate_print_last();
> >
> > + set_zone_contiguous(zone);
> > +
> > return err;
> > }
> > EXPORT_SYMBOL_GPL(__add_pages);
>
> Between the clear and set, __add_pages() might return with -EINVAL,
> leaving the flag cleared potentially forever. Not critical, probably
> rare, but it should be possible to avoid this by moving the clear below
> the altmap check?
um, yes. return-in-the-middle-of-a-function strikes again.
--- a/mm/memory_hotplug.c~mm-compaction-speed-up-pageblock_pfn_to_page-when-zone-is-contiguous-fix
+++ a/mm/memory_hotplug.c
@@ -526,7 +526,8 @@ int __ref __add_pages(int nid, struct zo
if (altmap->base_pfn != phys_start_pfn
|| vmem_altmap_offset(altmap) > nr_pages) {
pr_warn_once("memory add fail, invalid altmap\n");
- return -EINVAL;
+ err = -EINVAL;
+ goto out;
}
altmap->alloc = 0;
}
@@ -544,9 +545,8 @@ int __ref __add_pages(int nid, struct zo
err = 0;
}
vmemmap_populate_print_last();
-
+out:
set_zone_contiguous(zone);
-
return err;
}
EXPORT_SYMBOL_GPL(__add_pages);
_
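
For anyone reading along, here is a minimal, self-contained sketch of the
control-flow change above. The struct and function names (fake_zone,
fake_add_pages, altmap_is_bad) are illustrative only, not the real mm/ code,
and in the actual kernel set_zone_contiguous() re-checks the zone's pageblocks
rather than blindly setting the flag. The point is simply that the -EINVAL
path now funnels through the same label as the success path, so the zone is
never left with the contiguous flag cleared just because the add failed early:

	/* Illustrative sketch of the clear -> goto out -> set pattern. */
	#include <errno.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct fake_zone {
		bool contiguous;	/* stands in for zone->contiguous */
	};

	static int fake_add_pages(struct fake_zone *zone, bool altmap_is_bad)
	{
		int err = 0;

		zone->contiguous = false;	/* clear_zone_contiguous(zone) */

		if (altmap_is_bad) {
			err = -EINVAL;
			goto out;		/* do NOT return here */
		}

		/* ... sections would be added here ... */

	out:
		/*
		 * set_zone_contiguous(zone): in the real code this re-walks
		 * the zone and sets the flag only if every pageblock is valid.
		 */
		zone->contiguous = true;
		return err;
	}

	int main(void)
	{
		struct fake_zone z = { .contiguous = true };
		int err = fake_add_pages(&z, true);	/* force the -EINVAL path */

		printf("err=%d contiguous=%d\n", err, z.contiguous);
		return 0;
	}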