Message-ID: <20200205100723.GD24162@richard>
Date: Wed, 5 Feb 2020 18:07:23 +0800
From: Wei Yang <richardw.yang@...ux.intel.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-ia64@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-s390@...r.kernel.org,
linux-sh@...r.kernel.org, x86@...nel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Dan Williams <dan.j.williams@...el.com>,
Wei Yang <richardw.yang@...ux.intel.com>
Subject: Re: [PATCH v6 09/10] mm/memory_hotplug: Drop local variables in
shrink_zone_span()
On Sun, Oct 06, 2019 at 10:56:45AM +0200, David Hildenbrand wrote:
>Get rid of the unnecessary local variables.
>
>Cc: Andrew Morton <akpm@...ux-foundation.org>
>Cc: Oscar Salvador <osalvador@...e.de>
>Cc: David Hildenbrand <david@...hat.com>
>Cc: Michal Hocko <mhocko@...e.com>
>Cc: Pavel Tatashin <pasha.tatashin@...een.com>
>Cc: Dan Williams <dan.j.williams@...el.com>
>Cc: Wei Yang <richardw.yang@...ux.intel.com>
>Signed-off-by: David Hildenbrand <david@...hat.com>
Looks reasonable.
Reviewed-by: Wei Yang <richardw.yang@...ux.intel.com>
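
One note for other readers: zone_end_pfn() is not a cached value but a
helper derived from the two fields this function updates, so dropping the
locals is safe. A minimal sketch of that helper as I read
include/linux/mmzone.h (from memory, so treat the exact form as an
assumption):

	/* end pfn of a zone: one past the last pfn the zone spans */
	static inline unsigned long zone_end_pfn(const struct zone *zone)
	{
		return zone->zone_start_pfn + zone->spanned_pages;
	}

Because the helper reads zone->zone_start_pfn, the first hunk below also
has to update zone->spanned_pages before overwriting zone->zone_start_pfn,
which the patch handles by swapping the two assignments.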
>---
> mm/memory_hotplug.c | 15 ++++++---------
> 1 file changed, 6 insertions(+), 9 deletions(-)
>
>diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>index 8dafa1ba8d9f..843481bd507d 100644
>--- a/mm/memory_hotplug.c
>+++ b/mm/memory_hotplug.c
>@@ -374,14 +374,11 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
> static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> unsigned long end_pfn)
> {
>- unsigned long zone_start_pfn = zone->zone_start_pfn;
>- unsigned long z = zone_end_pfn(zone); /* zone_end_pfn namespace clash */
>- unsigned long zone_end_pfn = z;
> unsigned long pfn;
> int nid = zone_to_nid(zone);
>
> zone_span_writelock(zone);
>- if (zone_start_pfn == start_pfn) {
>+ if (zone->zone_start_pfn == start_pfn) {
> /*
> * If the section is smallest section in the zone, it need
> * shrink zone->zone_start_pfn and zone->zone_spanned_pages.
>@@ -389,25 +386,25 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> * for shrinking zone.
> */
> pfn = find_smallest_section_pfn(nid, zone, end_pfn,
>- zone_end_pfn);
>+ zone_end_pfn(zone));
> if (pfn) {
>+ zone->spanned_pages = zone_end_pfn(zone) - pfn;
> zone->zone_start_pfn = pfn;
>- zone->spanned_pages = zone_end_pfn - pfn;
> } else {
> zone->zone_start_pfn = 0;
> zone->spanned_pages = 0;
> }
>- } else if (zone_end_pfn == end_pfn) {
>+ } else if (zone_end_pfn(zone) == end_pfn) {
> /*
> * If the section is biggest section in the zone, it need
> * shrink zone->spanned_pages.
> * In this case, we find second biggest valid mem_section for
> * shrinking zone.
> */
>- pfn = find_biggest_section_pfn(nid, zone, zone_start_pfn,
>+ pfn = find_biggest_section_pfn(nid, zone, zone->zone_start_pfn,
> start_pfn);
> if (pfn)
>- zone->spanned_pages = pfn - zone_start_pfn + 1;
>+ zone->spanned_pages = pfn - zone->zone_start_pfn + 1;
> else {
> zone->zone_start_pfn = 0;
> zone->spanned_pages = 0;
>--
>2.21.0
--
Wei Yang
Help you, Help me