Message-ID: <0000013e85732d03-05e35c8e-205e-4242-98f5-2ae7bda64c5c-000000@email.amazonses.com>
Date: Wed, 8 May 2013 18:41:58 +0000
From: Christoph Lameter <cl@...ux.com>
To: Mel Gorman <mgorman@...e.de>
cc: Linux-MM <linux-mm@...ck.org>,
Johannes Weiner <hannes@...xchg.org>,
Dave Hansen <dave@...1.net>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 09/22] mm: page allocator: Allocate/free order-0 pages
from a per-zone magazine
On Wed, 8 May 2013, Mel Gorman wrote:
> 1. IRQs do not have to be disabled to access the lists, reducing
> IRQ-disabled times.
The per-cpu structure access would also not need to disable IRQs if the
fast path used this_cpu ops.
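
Something like this, roughly (the counter name and helper here are made
up for illustration, not taken from your patch):

/* Hypothetical per-cpu statistic updated in the allocator fast path. */
static DEFINE_PER_CPU(unsigned long, pcp_alloc_count);

static inline void count_alloc(void)
{
        /*
         * On x86 this_cpu_inc() compiles to a single inc instruction
         * with a %gs segment prefix. A single instruction cannot be
         * interrupted halfway, so no local_irq_save()/local_irq_restore()
         * pair is needed around the update.
         */
        this_cpu_inc(pcp_alloc_count);
}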
> 2. As the list is protected by a spinlock, it is not necessary to
> send an IPI to drain the list. As the lists are accessible by multiple
> CPUs, it is easier to tune.
The lists are a problem since traversing the list heads creates a lot of
pressure on the processor and TLB caches. Could we either move to an array
of pointers to page structs (like in SLAB) or to a linked list that is
constrained within physical boundaries, e.g. within a PMD (comparable to
the SLUB approach)?
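
Roughly what I mean, modeled on SLAB's struct array_cache (the field and
function names here are illustrative only):

/*
 * Magazine kept as a contiguous array of page pointers instead of a
 * list threaded through the pages themselves. Refill and drain then
 * walk one small array rather than dereferencing struct pages
 * scattered all over memory, which is much kinder to the CPU and
 * TLB caches.
 */
struct page_magazine {
        unsigned int count;     /* pages currently cached */
        unsigned int size;      /* capacity of pages[] */
        struct page *pages[];   /* contiguous pointer array */
};

static struct page *magazine_pop(struct page_magazine *m)
{
        if (!m->count)
                return NULL;
        return m->pages[--m->count];
}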
> 3. The magazine_lock is potentially hot but it can be split to have
> one lock per CPU socket to reduce contention. Draining the lists
> in this case would require multiple locks to be acquired.
IMHO the use of per-cpu RMW operations would give lower latency than the
use of spinlocks. There is no "lock" prefix overhead with those. Page
allocation is a frequent operation that I would think needs to be as fast
as possible.
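
A sketch of the kind of fast path I mean, in the style of the SLUB one
(the names are hypothetical, and the preemption/migration handling that
SLUB does with its transaction id is left out):

/* Hypothetical per-cpu head of a chain of free order-0 pages. */
static DEFINE_PER_CPU(struct page *, pcp_free_head);

static void pcp_free_page(struct page *page)
{
        struct page *old;

        do {
                old = this_cpu_read(pcp_free_head);
                /* Chain through page->private, purely for illustration. */
                set_page_private(page, (unsigned long)old);
                /*
                 * this_cpu_cmpxchg() emits cmpxchg with a %gs segment
                 * prefix but without the "lock" prefix, since the data
                 * is only ever touched from the owning CPU.
                 */
        } while (this_cpu_cmpxchg(pcp_free_head, old, page) != old);
}

Even an uncontended locked instruction costs on the order of tens of
cycles, which is why avoiding the lock prefix matters on a path this hot.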