Message-ID: <54905F87.2030302@jp.fujitsu.com>
Date: Wed, 17 Dec 2014 01:36:23 +0900
From: Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Lai Jiangshan <laijs@...fujitsu.com>, Tejun Heo <tj@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "Ishimatsu, Yasuaki/石松 靖章" <isimatu.yasuaki@...fujitsu.com>,
Tang Chen <tangchen@...fujitsu.com>,
"guz.fnst@...fujitsu.com" <guz.fnst@...fujitsu.com>,
Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed v4
This is v4. Thank you for the hints/comments on the previous versions.
I think this version contains only the necessary changes and is not invasive.
I tested several patterns of node hotplug and it seems to work well.
Changes since v3
- removed the changes against get_unbound_pool()
- removed the code in the cpu offline event
- added a node unregister callback:
  clear wq_numa_possible_mask at node offline rather than at cpu offline
  (a rough sketch follows this changelog)
- update the per-cpu pool's pool->node at node_(un)register
- added more comments
- almost all of the code is under CONFIG_MEMORY_HOTPLUG
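For reference, here is a minimal sketch of what the node unregister callback
could look like. The function name workqueue_node_unregister() is purely
illustrative; the other symbols (wq_pool_mutex, wq_numa_possible_cpumask,
which is what the changelog above calls wq_numa_possible_mask, and
for_each_cpu_worker_pool()) refer to existing workqueue internals. This is an
illustration of the idea, not the patch itself:

void workqueue_node_unregister(int node)
{
	int cpu;

	mutex_lock(&wq_pool_mutex);

	/* the node is going away: no possible CPU maps to it any more */
	cpumask_clear(wq_numa_possible_cpumask[node]);

	/*
	 * Per-cpu pools whose pool->node still points to the departing
	 * node must fall back to NUMA_NO_NODE so that later allocations
	 * (e.g. for new workers) do not target an offline node.
	 */
	for_each_possible_cpu(cpu) {
		struct worker_pool *pool;

		for_each_cpu_worker_pool(pool, cpu)
			if (pool->node == node)
				pool->node = NUMA_NO_NODE;
	}

	mutex_unlock(&wq_pool_mutex);
}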
 include/linux/memory_hotplug.h |  3 +
 kernel/workqueue.c             | 81 ++++++++++++++++++++++++++++++++++++++++-
 mm/memory_hotplug.c            |  6 ++-
 3 files changed, 88 insertions(+), 2 deletions(-)
The original problem was a memory allocation failure because pool->node
pointed to a node that is no longer online. This happens when the cpu<->node
mapping changes. Yasuaki Ishimatsu hit an allocation failure bug when the
numa mapping between CPU and node changed. This was the last scene:
SLUB: Unable to allocate memory on node 2 (gfp=0x80d0)
cache: kmalloc-192, object size: 192, buffer size: 192, default order: 1, min order: 0
node 0: slabs: 6172, objs: 259224, free: 245741
node 1: slabs: 3261, objs: 136962, free: 127656
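To illustrate the failure mode above, here is a simplified sketch (not the
actual workqueue code; the _sketch suffix marks it as hypothetical) of how a
stale pool->node leads to the SLUB warning: worker allocation is keyed by
pool->node, so once that node has been hot-removed the node-local allocation
fails.

static struct worker *alloc_worker_sketch(struct worker_pool *pool)
{
	struct worker *worker;

	/*
	 * pool->node may still name a node that was hot-removed; the
	 * node-local allocation then fails and SLUB prints the warning
	 * shown above.
	 */
	worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, pool->node);

	return worker;	/* NULL on failure -> worker creation fails */
}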