Message-Id: <20170123153906.3122-1-mgorman@techsingularity.net>
Date: Mon, 23 Jan 2017 15:39:02 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>, Vlastimil Babka <vbabka@...e.cz>,
Hillf Danton <hillf.zj@...baba-inc.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH 0/4] Use per-cpu allocator for !irq requests and prepare for a bulk allocator v5
This is rebased on top of mmotm to handle collisions with Vlastimil's
series on cpusets and premature OOMs.
Changelog since v4
o Protect drain with get_online_cpus
o Micro-optimisation of stat updates
o Avoid double preparing a page free
Changelog since v3
o Debugging check in allocation path
o Make it harder to use the free path incorrectly
o Use preempt-safe stats counter
o Do not use IPIs to drain the per-cpu allocator
Changelog since v2
o Add ack's and benchmark data
o Rebase to 4.10-rc3
Changelog since v1
o Remove a scheduler point from the allocation path
o Finalise the bulk allocator and test it
This series is motivated by a conversation led by Jesper Dangaard Brouer at
the last LSF/MM proposing a generic page pool for DMA-coherent pages. Part
of his motivation was the overhead of allocating multiple order-0 pages one
at a time, which led some drivers to use high-order allocations and split
them. This is very slow in some cases.
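For illustration, the driver-side workaround mentioned above looks roughly
like the sketch below. This is a hypothetical example rather than code taken
from any particular driver: one high-order block is allocated and then split
into order-0 pages with split_page() to amortise the per-call cost of the
allocator.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical pool fill: allocate 2^order pages in one call, then split */
static int fill_page_pool(struct page **pool, unsigned int nr)
{
	unsigned int order = get_order(nr * PAGE_SIZE);
	struct page *page;
	unsigned int i;

	/* One expensive high-order allocation instead of nr order-0 calls */
	page = alloc_pages(GFP_KERNEL, order);
	if (!page)
		return -ENOMEM;

	/* Turn the block into independently freeable order-0 pages */
	split_page(page, order);

	for (i = 0; i < nr; i++)
		pool[i] = page + i;

	/* Return any excess tail pages beyond what was requested */
	for (; i < (1U << order); i++)
		__free_page(page + i);

	return 0;
}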
The first two patches in this series restructure the page allocator such
that it is relatively easy to introduce an order-0 bulk page allocator.
A patch exists to do that and has been handed over to Jesper until an
in-kernel user is created. The third patch prevents the per-cpu allocator
from being drained from IPI context, as that could corrupt the lists
once patch four is merged. The final patch alters the per-cpu allocator
to make it exclusive to !irq requests. This cuts allocation/free overhead
by roughly 30%.
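The core idea of the final patch, expressed as a simplified sketch (the
function name is made up and the refill-from-buddy path, statistics and
migratetype handling are omitted, so this is not the actual diff): order-0
requests from !irq context take pages from the per-cpu lists under
preempt_disable() only, while irq-context callers bypass the per-cpu lists
and go straight to the buddy lists.

static struct page *pcp_alloc_sketch(struct zone *zone, int migratetype)
{
	struct per_cpu_pages *pcp;
	struct list_head *list;
	struct page *page = NULL;

	/* irq context never touches the per-cpu lists after this series */
	if (in_interrupt())
		return NULL;	/* caller falls back to the buddy lists */

	/* IRQs stay enabled; disabling preemption is enough for exclusivity */
	preempt_disable();
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
	list = &pcp->lists[migratetype];
	if (!list_empty(list)) {
		page = list_first_entry(list, struct page, lru);
		list_del(&page->lru);
		pcp->count--;
	}
	preempt_enable();

	return page;
}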
Performance tests from both Jesper and me are included in the patches.
mm/page_alloc.c | 282 ++++++++++++++++++++++++++++++++++++--------------------
1 file changed, 181 insertions(+), 101 deletions(-)
--
2.11.0