Message-Id: <20161202112951.23346-1-mgorman@techsingularity.net>
Date: Fri, 2 Dec 2016 11:29:49 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Lameter <cl@...ux.com>, Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Linux-MM <linux-mm@...ck.org>,
Linux-Kernel <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH 0/2] High-order per-cpu cache v6
Changelog since v5
o Changelog clarification in patch 1
o Additional comments in patch 2
Changelog since v4
o Avoid pcp->count getting out of sync if struct page gets corrupted
Changelog since v3
o Allow high-order atomic allocations to use reserves
Changelog since v2
o Correct initialisation to avoid -Woverflow warning
This series contains two patches that implement a per-cpu cache for high-order
allocations, primarily aimed at SLUB. The first patch is a bug fix that is
technically unrelated but was discovered during review and so is batched with
the series. The second patch implements the high-order per-cpu cache itself.
include/linux/mmzone.h | 20 +++++++-
mm/page_alloc.c | 129 ++++++++++++++++++++++++++++++++-----------------
2 files changed, 103 insertions(+), 46 deletions(-)
--
2.10.2