Message-ID: <518BBEE9.7060800@sr71.net>
Date: Thu, 09 May 2013 08:21:13 -0700
From: Dave Hansen <dave@...1.net>
To: Mel Gorman <mgorman@...e.de>
CC: Linux-MM <linux-mm@...ck.org>,
Johannes Weiner <hannes@...xchg.org>,
Christoph Lameter <cl@...ux.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 18/22] mm: page allocator: Split magazine lock in two
to reduce contention
On 05/08/2013 09:03 AM, Mel Gorman wrote:
> @@ -368,10 +375,9 @@ struct zone {
>
> /*
> * Keep some order-0 pages on a separate free list
> - * protected by an irq-unsafe lock
> + * protected by an irq-unsafe lock.
> */
> - spinlock_t _magazine_lock;
> - struct free_area_magazine _noirq_magazine;
> + struct free_magazine noirq_magazine[NR_MAGAZINES];
Looks like pretty cool stuff!
The old per-cpu-pages stuff was all hung off alloc_percpu(), which
surely wasted lots of memory with many NUMA nodes. It's nice to see
this decoupled a bit from the online cpu count.
That said, the alloc_percpu() stuff is nice in how much it hides from
you when doing cpu hotplug. We'll _probably_ need this to be
dynamically-sized at some point, right?
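For what it's worth, the back-of-the-envelope arithmetic I have in mind looks
roughly like the sketch below. It's userspace-only and every count and struct
size in it is a made-up placeholder (not a measurement, and not from the
patch); the point is just that pagesets scale with zones * possible CPUs while
magazines scale with zones * NR_MAGAZINES.

	/* Hypothetical back-of-the-envelope comparison, userspace only.
	 * All counts and sizes below are placeholders, not real numbers.
	 */
	#include <stdio.h>

	#define NR_CPUS_POSSIBLE	256	/* hypothetical */
	#define NR_NODES		8	/* hypothetical */
	#define ZONES_PER_NODE		3	/* hypothetical */
	#define NR_MAGAZINES		2	/* guessed from the patch subject */

	#define SIZEOF_PCP		256	/* hypothetical per-cpu pageset size */
	#define SIZEOF_MAGAZINE		192	/* hypothetical struct free_magazine size */

	int main(void)
	{
		long zones     = (long)NR_NODES * ZONES_PER_NODE;
		long pagesets  = zones * NR_CPUS_POSSIBLE * SIZEOF_PCP;
		long magazines = zones * NR_MAGAZINES * SIZEOF_MAGAZINE;

		printf("per-cpu pagesets: %ld bytes\n", pagesets);
		printf("magazines:        %ld bytes\n", magazines);
		return 0;
	}

The first term keeps growing as you add nodes *and* CPUs, the second only as
you add nodes, which is why the decoupling looks attractive.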
> -static inline struct free_area_magazine *find_lock_magazine(struct zone *zone)
> +static inline struct free_magazine *lock_magazine(struct zone *zone)
> {
> - struct free_area_magazine *area = &zone->_noirq_magazine;
> - spin_lock(&zone->_magazine_lock);
> - return area;
> + int i = (raw_smp_processor_id() >> 1) & (NR_MAGAZINES-1);
> + spin_lock(&zone->noirq_magazine[i].lock);
> + return &zone->noirq_magazine[i];
> }
I bet this logic will be fun to play with once we have more magazines
around. For instance, on my system processors 0/80 are HT twins, so
they'd always be going after the same magazine. I guess that's a good
thing.
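Just to make that mapping concrete, here's a quick userspace sketch of the
index math in lock_magazine(). NR_MAGAZINES == 2 is my assumption from the
patch subject, and the CPU ids just mirror the 0/80 HT-twin layout on my box;
neither is taken from the patch itself.

	/* Sketch of the (cpu >> 1) & (NR_MAGAZINES - 1) index calculation.
	 * NR_MAGAZINES == 2 is an assumption based on the patch subject.
	 */
	#include <stdio.h>

	#define NR_MAGAZINES 2

	static int magazine_index(int cpu)
	{
		return (cpu >> 1) & (NR_MAGAZINES - 1);
	}

	int main(void)
	{
		int cpus[] = { 0, 1, 2, 3, 80, 81 };
		unsigned int i;

		for (i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
			printf("cpu %3d -> magazine %d\n",
			       cpus[i], magazine_index(cpus[i]));
		return 0;
	}

With that layout CPUs 0 and 80 (and 1 and 81) do land on magazine 0, so the
HT twins end up sharing a magazine and its lock.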