Message-ID: <20160614144103.GB1868@techsingularity.net>
Date: Tue, 14 Jun 2016 15:41:03 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>, Rik van Riel <riel@...riel.com>,
Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 02/27] mm, vmscan: Move lru_lock to the node

On Fri, Jun 10, 2016 at 06:39:26PM +0200, Vlastimil Babka wrote:
> > @@ -5944,10 +5944,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
> > zone->min_slab_pages = (freesize * sysctl_min_slab_ratio) / 100;
> > #endif
> > zone->name = zone_names[j];
> > + zone->zone_pgdat = pgdat;
> > spin_lock_init(&zone->lock);
> > - spin_lock_init(&zone->lru_lock);
> > + spin_lock_init(zone_lru_lock(zone));
>
> This means the same lock will be inited MAX_NR_ZONES times. Peterz told
> me it's valid but weird. Probably better to do it just once, in case
> lockdep/lock debugging gains some checks for that?
>
Good point and it's an easy fix.
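
Since zone_lru_lock() resolves to &zone->zone_pgdat->lru_lock, every
pass through the zone loop reinitialises the same node-wide lock. The
obvious shape of the fix is to initialise the lock once against the
pgdat before the loop and drop the per-zone init entirely. Untested
sketch, with context lines from memory rather than the posted series:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
@@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
 	init_waitqueue_head(&pgdat->kswapd_wait);
 	init_waitqueue_head(&pgdat->pfmemalloc_wait);
+	/* The LRU lock is node-wide now, so initialise it exactly once */
+	spin_lock_init(&pgdat->lru_lock);
 
 	for (j = 0; j < MAX_NR_ZONES; j++) {
 		struct zone *zone = pgdat->node_zones + j;
@@
 		zone->name = zone_names[j];
 		zone->zone_pgdat = pgdat;
 		spin_lock_init(&zone->lock);
-		spin_lock_init(zone_lru_lock(zone));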
--
Mel Gorman
SUSE Labs