Message-Id: <1228342567.13111.11.camel@nimitz>
Date: Wed, 03 Dec 2008 14:16:07 -0800
From: Dave Hansen <dave@...ux.vnet.ibm.com>
To: gerald.schaefer@...ibm.com
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, schwidefsky@...ibm.com,
heiko.carstens@...ibm.com, kamezawa.hiroyu@...fujitsu.com,
y-goto@...fujitsu.com, npiggin@...e.de
Subject: Re: [PATCH] memory hotplug: run lru_add_drain_all() on each cpu
On Wed, 2008-12-03 at 22:25 +0100, Gerald Schaefer wrote:
> offline_pages() calls lru_add_drain_all() followed by drain_all_pages().
> While drain_all_pages() works on each cpu, lru_add_drain_all() only runs
> on the current cpu for architectures w/o CONFIG_NUMA.
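For reference, lru_add_drain_all() in mm/swap.c currently looks something
like this (paraphrasing the 2.6.28-era source from memory, so the exact
#if line may differ slightly):

#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
int lru_add_drain_all(void)
{
	/* queue lru_add_drain_per_cpu() on every cpu and wait for it */
	return schedule_on_each_cpu(lru_add_drain_per_cpu);
}
#else
int lru_add_drain_all(void)
{
	/* only drains the pagevecs of whichever cpu we happen to run on */
	lru_add_drain();
	return 0;
}
#endif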
I'm a bit confused about why this is. Is it because the LRUs are
per-zone, and we expected !CONFIG_NUMA systems to only have LRUs sitting
on the same (only) node as the current CPU?
This doesn't make any sense, though. The pagevecs that
drain_cpu_pagevecs() actually empties out are per-cpu.
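The backing declarations in mm/swap.c are per-cpu, not per-node (again
quoting the 2.6.28-era source from memory):

/* one set of LRU pagevecs for every cpu, regardless of CONFIG_NUMA */
static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);

So draining only the local cpu leaves pages stranded in every other
cpu's pagevecs, whether the system is NUMA or not.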
> This caused us to run
> into the BUG_ON(!PageBuddy(page)) in __offline_isolated_pages() during a
> memory hotplug stress test on s390. The page in question was still on the
> pcp list, because of a race between lru_add_drain_all() and
> drain_all_pages() on different cpus.
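If I'm reading the offline path right, that's this sanity check in
__offline_isolated_pages() (mm/page_alloc.c), which assumes every
isolated page has already been freed back to the buddy allocator:

	page = pfn_to_page(pfn);
	BUG_ON(page_count(page));
	BUG_ON(!PageBuddy(page));	/* a page still on a pcp list has
					   count 0 but is not PageBuddy */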
>
> This patch fixes the problem by adding CONFIG_MEMORY_HOTREMOVE to the
> lru_add_drain_all() #ifdef, so that it runs on each cpu.
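(Presumably the fix just widens the #if, something like:

-#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
+#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU) || \
+	defined(CONFIG_MEMORY_HOTREMOVE)

so that the schedule_on_each_cpu() variant gets built.)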
This doesn't seem right to me. CONFIG_MEMORY_HOTREMOVE doesn't change
the layout of the LRUs the way NUMA or UNEVICTABLE_LRU do. So I think
this bug is really the hotremove code expecting behavior from
lru_add_drain_all() that it doesn't actually provide on !CONFIG_NUMA.
Why does this not affect the other lru_add_drain_all() users?
-- Dave