Message-Id: <20070127030143.3059dbb0.kamezawa.hiroyu@jp.fujitsu.com>
Date: Sat, 27 Jan 2007 03:01:43 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andrew Morton <akpm@...l.org>
Cc: clameter@....com, aubreylee@...il.com, svaidy@...ux.vnet.ibm.com,
nickpiggin@...oo.com.au, rgetz@...ckfin.uclinux.org,
Michael.Hennerich@...log.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC] Limit the size of the pagecache
On Fri, 26 Jan 2007 02:29:55 -0800
Andrew Morton <akpm@...l.org> wrote:
> On Wed, 24 Jan 2007 14:15:10 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
>
> > - One for stability
> > When a customer constructs their database (Oracle), the system often goes OOM.
> > This is because the system cannot allocate ZONE_DMA memory for 32-bit devices
> > (USB or e100).
> > Not allowing almost all pages to be used as page cache (for temporary use) would be some help.
> > (Note: the DB is constructed on ext3, so all writes are serialized and the system
> > couldn't free page cache.)
>
> I'm surprised that any reasonable driver has a dependency on ZONE_DMA. Are
> you sure? Send full oom-killer output, please.
>
>
Our ia64 servers' USB/e100 devices use 32-bit PCI, so OOM sometimes happens in the DMA zone.
(ia64's ZONE_DMA is the 0-4G area.)
But I'm very sorry....I was confused.
I looked at the issue above again and found that ZONE_NORMAL on x86 was exhausted.
It was an interesting incident:
constructing the DB on a 4GB system caused no problem, but
constructing the DB on an 8GB system always caused OOM.
I asked the users to change the DB's parameters. (This happened on the RHEL4/linux-2.6.9 series.)
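
(For anyone who hits something similar: watching the per-zone free counters
shows the exhaustion directly. The sketch below is not from the original
report; it assumes /proc/zoneinfo and its usual "pages free" layout, which
needs 2.6.13 or later, so on the 2.6.9 boxes /proc/buddyinfo is the closest
substitute.)

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/zoneinfo", "r");
	char line[256], zone[64] = "?";
	long free_pages;

	if (!f) {
		perror("fopen /proc/zoneinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* zone headers look like "Node 0, zone   Normal" */
		if (sscanf(line, "Node %*d, zone %63s", zone) == 1)
			continue;
		/* the first counter under each header is "pages free" */
		if (sscanf(line, " pages free %ld", &free_pages) == 1)
			printf("%-8s free: %ld pages\n", zone, free_pages);
	}
	fclose(f);
	return 0;
}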
> > And...some customers want to keep as much memory free as possible.
> > 99% memory usage makes them feel insecure ;)
>
> Tell them to do "echo 3 > /proc/sys/vm/drop_caches", then wait three minutes?
Ah, maybe we can use that on RHEL5. We'll test it. Thank you.
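
For reference, this is roughly what we'll test. It is only a minimal sketch:
it assumes a kernel that has /proc/sys/vm/drop_caches at all (2.6.16 and
later, so RHEL5 should), and since drop_caches frees only *clean* cache, it
syncs first. Run as root:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	FILE *f;

	/* drop_caches only discards clean pages; write back dirty data first */
	sync();

	f = fopen("/proc/sys/vm/drop_caches", "w");
	if (!f) {
		perror("fopen /proc/sys/vm/drop_caches");
		return 1;
	}
	/* 1 = pagecache, 2 = dentries+inodes, 3 = both */
	fprintf(f, "3\n");
	fclose(f);
	return 0;
}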
Thanks,
-Kamezawa