Message-ID: <20110515152747.GA25905@localhost>
Date: Sun, 15 May 2011 23:27:47 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Minchan Kim <minchan.kim@...il.com>
Cc: Andi Kleen <andi@...stfloor.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Lutomirski <luto@....edu>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: Kernel falls apart under light memory pressure (i.e. linking vmlinux)
On Sun, May 15, 2011 at 09:37:58AM +0800, Minchan Kim wrote:
> On Sun, May 15, 2011 at 2:43 AM, Andi Kleen <andi@...stfloor.org> wrote:
> > Copying back linux-mm.
> >
> >> Recently, we added the following patch:
> >> https://lkml.org/lkml/2011/4/26/129
> >> If it's the culprit, that patch should solve the problem.
> >
> > It would probably be better not to do the allocations at all under
> > memory pressure. Even if the RA allocation doesn't go into reclaim
>
> Fair enough.
> I think we can do it easily now.
> If page_cache_alloc_readahead (i.e., GFP_NORETRY) fails, we can adjust
> the RA window size or turn readahead off for a while. The point is that
> we can use the failure of __do_page_cache_readahead as a sign of memory
> pressure.
> Wu, What do you think?

No, disabling readahead can hardly help.

The sequential readahead memory consumption can be estimated by

    2 * (number of concurrent read streams) * (readahead window size)

and you can double that when there are two levels of readahead.
Since there are hardly any concurrent read streams in Andy's case,
the readahead memory consumption will be negligible.

Typically readahead thrashing will happen long before excessive
GFP_NORETRY failures, so the reasonable solutions are to

- shrink the readahead window on readahead thrashing
  (the current readahead heuristics can do this to some extent, and I
  have patches to further improve it)
- prevent abnormal GFP_NORETRY failures
  (when there are many reclaimable pages)
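The first point, shrink-on-thrashing, can be sketched as a simple
multiplicative-decrease / additive-increase feedback loop. This is a
minimal user-space illustration of the idea only; the names and limits
are hypothetical and do not reflect the kernel's actual readahead code
or my patches:

```c
/* Hypothetical window limits, in KB; 128KB mirrors the common
 * default maximum readahead size. */
#define RA_MAX_KB	128
#define RA_MIN_KB	16

static unsigned int ra_window_kb = RA_MAX_KB;

/* Feedback after each readahead window completes: if a page was
 * evicted before it was read (thrashing), halve the window; if the
 * window was fully consumed, grow it back additively. */
static void ra_feedback(int thrashed)
{
	if (thrashed) {
		ra_window_kb /= 2;
		if (ra_window_kb < RA_MIN_KB)
			ra_window_kb = RA_MIN_KB;
	} else if (ra_window_kb + 16 <= RA_MAX_KB) {
		ra_window_kb += 16;
	}
}
```

Under sustained thrashing the window quickly bottoms out at the
minimum, so readahead stops amplifying the memory pressure without
being switched off entirely.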

Andy's OOM memory dump (incorrect_oom_kill.txt.xz) shows that there are

- 8MB active+inactive file pages
- 160MB active+inactive anon pages
- 1GB shmem pages
- 1.4GB unevictable pages

Hmm, why are there so many unevictable pages? And how did the shmem
pages become unevictable when there is plenty of swap space?
Thanks,
Fengguang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/