Message-ID: <cover.1413804554.git.vdavydov@parallels.com>
Date: Mon, 20 Oct 2014 15:50:28 +0400
From: Vladimir Davydov <vdavydov@...allels.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Zefan Li <lizefan@...wei.com>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: [PATCH RESEND 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B

[Rebased on top of 3.18-rc1 and added acks from Christoph and Zefan]

Hi,

SLAB and SLUB use a hardwall cpuset check on fallback allocation, while
the page allocator uses a softwall check for all kernel allocations. As
a result, fallback allocation may skip nodes that actually have free
objects and drop into the page allocator instead. The SLAB algorithm is
especially affected: the number of objects allocated in vain is
unbounded, so they can, in theory, eat up a whole NUMA node. For more
details, see the comments to patches 3 and 4.
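
For illustration, the problematic check in SLAB's fallback_alloc()
looks roughly like this (a simplified sketch, not the exact mm/slab.c
code; see patch 3 for the real diff):

	/*
	 * Sketch: fallback_alloc() walks the zonelist looking for a
	 * node with free objects, but gates each node on the *hardwall*
	 * cpuset check.  A node rejected here may still pass the
	 * softwall check in the page allocator, so we end up growing a
	 * new slab instead of using objects that were actually free.
	 */
	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
		nid = zone_to_nid(zone);

		if (cpuset_zone_allowed_hardwall(zone, flags) &&
		    get_node(cache, nid) &&
		    get_node(cache, nid)->free_objects) {
			obj = ____cache_alloc_node(cache,
					flags | GFP_THISNODE, nid);
			if (obj)
				break;
		}
	}
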
When I last sent a fix (https://lkml.org/lkml/2014/8/10/100), David
found the whole cpuset API cumbersome and proposed simplifying it
before fixing its users. This patch set therefore addresses both
David's complaint (patches 1 and 2) and the SL[AU]B issues (patches 3
and 4).
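
Concretely, the direction of the simplification (patches 1 and 2) is to
collapse the _hardwall/_softwall pairs into a single predicate that
keys off __GFP_HARDWALL in the caller's gfp mask. A rough sketch of the
resulting interface (the exact signatures are in patch 2):

	bool cpuset_node_allowed(int node, gfp_t gfp_mask);

	/* Zone-based convenience wrapper over the node predicate. */
	static inline bool cpuset_zone_allowed(struct zone *z,
					       gfp_t gfp_mask)
	{
		return cpuset_node_allowed(zone_to_nid(z), gfp_mask);
	}

Patches 3 and 4 then make fallback_alloc() and get_any_partial() pass
their gfp flags through unmodified, so the slab fallback path applies
the same softwall semantics as the page allocator it would otherwise
fall into.
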
Reviews are appreciated.

Thanks,

Vladimir Davydov (4):
  cpuset: convert callback_mutex to a spinlock
  cpuset: simplify cpuset_node_allowed API
  slab: fix cpuset check in fallback_alloc
  slub: fix cpuset check in get_any_partial

 include/linux/cpuset.h |  37 +++--------
 kernel/cpuset.c        | 162 +++++++++++++++++-------------------------------
 mm/hugetlb.c           |   2 +-
 mm/oom_kill.c          |   2 +-
 mm/page_alloc.c        |   6 +-
 mm/slab.c              |   2 +-
 mm/slub.c              |   2 +-
 mm/vmscan.c            |   5 +-
 8 files changed, 74 insertions(+), 144 deletions(-)
--
1.7.10.4