Message-Id: <20081204110013.1D62.KOSAKI.MOTOHIRO@jp.fujitsu.com>
Date: Thu, 4 Dec 2008 11:14:51 +0900 (JST)
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: kosaki.motohiro@...fujitsu.com,
Dave Hansen <dave@...ux.vnet.ibm.com>,
gerald.schaefer@...ibm.com, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
schwidefsky@...ibm.com, heiko.carstens@...ibm.com,
y-goto@...fujitsu.com, npiggin@...e.de,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Christoph Lameter <cl@...ux-foundation.org>
Subject: [PATCH] mm: remove UP version lru_add_drain_all()
(CCing Christoph Lameter and Lee Schermerhorn)
> On Wed, 03 Dec 2008 14:16:07 -0800
> Dave Hansen <dave@...ux.vnet.ibm.com> wrote:
> > > This let us run
> > > into the BUG_ON(!PageBuddy(page)) in __offline_isolated_pages() during
> > > memory hotplug stress test on s390. The page in question was still on the
> > > pcp list, because of a race with lru_add_drain_all() and drain_all_pages()
> > > on different cpus.
> > >
> > > This is fixed with this patch by adding CONFIG_MEMORY_HOTREMOVE to the
> > > lru_add_drain_all() #ifdef, to let it run on each cpu.
> >
> > This doesn't seem right to me. CONFIG_MEMORY_HOTREMOVE doesn't change
> > the layout of the LRUs, unlike NUMA or UNEVICTABLE_LRU. So, I think
> > this bug is more due to the hotremove code mis-expecting behavior out of
> > lru_add_drain_all().
> >
> How about
>
> #ifdef CONFIG_SMP
>
> #else..
>
> #endif
>
> rather than
>
> -#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
> +#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU) || \
> + defined(CONFIG_MEMORY_HOTREMOVE)
> ...
CONFIG_UNEVICTABLE_LRU defaults to on, so almost every machine already
uses the CONFIG_NUMA version of lru_add_drain_all(). That makes this
config test not very valuable, and I prefer simply removing the UP
version. The following patch boots fine on a UP machine.
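(It boots on UP because schedule_on_each_cpu() just walks the online
cpu mask, which has exactly one entry there. Roughly, from
kernel/workqueue.c of this era, slightly simplified:

	int schedule_on_each_cpu(work_func_t func)
	{
		int cpu;
		struct work_struct *works;

		works = alloc_percpu(struct work_struct);
		if (!works)
			return -ENOMEM;

		get_online_cpus();
		for_each_online_cpu(cpu) {
			struct work_struct *work = per_cpu_ptr(works, cpu);

			/* queue func on each online cpu's workqueue */
			INIT_WORK(work, func);
			schedule_work_on(cpu, work);
		}
		/* wait for every queued work item to finish */
		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(works, cpu));
		put_online_cpus();
		free_percpu(works);
		return 0;
	}

so on UP the loop degenerates to a single INIT_WORK/schedule_work_on
pair on the only cpu.)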
===
Currently, lru_add_drain_all() has two versions (both sketched below):
 (1) one that uses schedule_on_each_cpu()
 (2) one that does not use schedule_on_each_cpu() and only drains the
     local cpu
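For reference, a simplified sketch of the two variants as they exist
before this patch (the #else side is exactly what the diff below
removes):

	#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
	/* (1) run lru_add_drain() on every online cpu via the workqueue */
	int lru_add_drain_all(void)
	{
		return schedule_on_each_cpu(lru_add_drain_per_cpu);
	}
	#else
	/* (2) drain only the pagevecs of the calling cpu */
	int lru_add_drain_all(void)
	{
		lru_add_drain();
		return 0;
	}
	#endif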
Gerald Schaefer reported that this does not work well on an SMP (but
non-NUMA) s390 machine.
offline_pages() calls lru_add_drain_all() followed by drain_all_pages().
While drain_all_pages() works on each cpu, lru_add_drain_all() only runs
on the current cpu for architectures without CONFIG_NUMA. This lets us
run into the BUG_ON(!PageBuddy(page)) in __offline_isolated_pages()
during memory hotplug stress tests on s390. The page in question was
still on a pcp list, because of a race between lru_add_drain_all() and
drain_all_pages() on different cpus.
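Roughly, the problematic interleaving (a sketch of the race; the cpu
numbers are only illustrative):

	CPU 0 (offline_pages)             CPU 1
	---------------------             ---------------------
	lru_add_drain_all()
	  -> lru_add_drain()              /* page still sits in  */
	     (drains CPU 0 only)          /* CPU 1's lru pagevec */
	drain_all_pages()
	  (flushes every cpu's pcp lists)
	                                  pagevec drained later; the
	                                  page ends up freed onto
	                                  CPU 1's pcp list
	__offline_isolated_pages()
	  -> BUG_ON(!PageBuddy(page))     /* page is on a pcp list,
	                                     not in the buddy
	                                     allocator */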
In practice, almost every configuration has CONFIG_UNEVICTABLE_LRU=y,
so almost every machine uses version (1) of lru_add_drain_all() even
when the machine is UP. The #ifdef is therefore not valuable; simply
removing it is better.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC: Christoph Lameter <cl@...ux-foundation.org>
CC: Lee Schermerhorn <Lee.Schermerhorn@...com>
---
mm/swap.c | 13 -------------
1 file changed, 13 deletions(-)
Index: b/mm/swap.c
===================================================================
--- a/mm/swap.c 2008-11-24 19:33:26.000000000 +0900
+++ b/mm/swap.c 2008-12-04 09:49:05.000000000 +0900
@@ -299,7 +299,6 @@ void lru_add_drain(void)
put_cpu();
}
-#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
lru_add_drain();
@@ -313,18 +312,6 @@ int lru_add_drain_all(void)
return schedule_on_each_cpu(lru_add_drain_per_cpu);
}
-#else
-
-/*
- * Returns 0 for success
- */
-int lru_add_drain_all(void)
-{
- lru_add_drain();
- return 0;
-}
-#endif
-
/*
* Batched page_cache_release(). Decrement the reference count on all the
* passed pages. If it fell to zero then remove the page from the LRU and