Message-ID: <BANLkTim-AnEeL=z1sYm=iN7sMnG0+m0SHw@mail.gmail.com>
Date: Sun, 15 May 2011 12:12:36 -0400
From: Andrew Lutomirski <luto@....edu>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Minchan Kim <minchan.kim@...il.com>,
Andi Kleen <andi@...stfloor.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: Kernel falls apart under light memory pressure (i.e. linking vmlinux)

On Sun, May 15, 2011 at 11:27 AM, Wu Fengguang <fengguang.wu@...el.com> wrote:
> On Sun, May 15, 2011 at 09:37:58AM +0800, Minchan Kim wrote:
>> On Sun, May 15, 2011 at 2:43 AM, Andi Kleen <andi@...stfloor.org> wrote:
>> > Copying back linux-mm.
>> >
>> >> Recently, we added following patch.
>> >> https://lkml.org/lkml/2011/4/26/129
>> >> If it's a culprit, the patch should solve the problem.
>> >
>> > It would probably be better not to do the allocations at all under
>> > memory pressure. Even if the RA allocation doesn't go into reclaim
>>
>> Fair enough.
>> I think we can do it easily now.
>> If page_cache_alloc_readahead() (i.e., GFP_NORETRY) fails, we can adjust
>> the RA window size or turn readahead off for a while. The point is that
>> we can use the failure of __do_page_cache_readahead() as a sign of
>> memory pressure.
>> Wu, What do you think?
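
A minimal sketch of that idea in plain C, not the kernel's actual readahead
code: struct ra_state, ra_alloc_failed() and ra_allowed() are made-up names,
and the 5-second back-off is an arbitrary placeholder. The point it shows is
using a failed GFP_NORETRY-style allocation as the memory-pressure signal.

	#include <stdbool.h>
	#include <time.h>

	struct ra_state {
		unsigned long ra_pages;   /* current readahead window, in pages */
		time_t disabled_until;    /* readahead suppressed until this time */
	};

	/* Called when a GFP_NORETRY-style readahead page allocation fails. */
	static void ra_alloc_failed(struct ra_state *ra)
	{
		/* Treat the failure as a memory-pressure signal: halve the window. */
		ra->ra_pages /= 2;
		if (ra->ra_pages == 0)
			/* Window collapsed: turn readahead off for a short while. */
			ra->disabled_until = time(NULL) + 5;
	}

	/* Should the next read attempt any readahead at all? */
	static bool ra_allowed(struct ra_state *ra)
	{
		if (time(NULL) < ra->disabled_until)
			return false;
		if (ra->ra_pages == 0)
			ra->ra_pages = 4;  /* restore a small window after backing off */
		return true;
	}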
>
> No, disabling readahead can hardly help.
>
> The sequential readahead memory consumption can be estimated by
>
> 2 * (number of concurrent read streams) * (readahead window size)
>
> And you can double that when there are two levels of readahead.
>
> Since there are hardly any concurrent read streams in Andy's case,
> the readahead memory consumption will be negligible.
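
To put numbers on that estimate (assuming the common 128 KB default readahead
window, which may not match Andy's setup): with a single read stream it comes
to 2 * 1 * 128 KB = 256 KB, or 512 KB with two levels of readahead, which is
negligible next to the roughly 2.5 GB accounted for in the dump quoted below.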
>
> Typically readahead thrashing will happen long before excessive
> GFP_NORETRY failures, so the reasonable solutions are to
>
> - shrink readahead window on readahead thrashing, as sketched below
>   (the current readahead heuristic can do this to some extent, and I have
>   patches to further improve it)
>
> - prevent abnormal GFP_NORETRY failures
> (when there are many reclaimable pages)
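
A rough sketch of the first option, with invented names (struct ra_window,
ra_next_window()) rather than Wu's actual patches or the kernel's real state:
treat a page cache miss that lands inside the previously submitted readahead
window as a thrashing signal and shrink the next window, growing it again only
while no thrashing is observed.

	/* Illustrative only; the kernel's real heuristic keeps different state. */
	struct ra_window {
		unsigned long start;  /* first page index of the last readahead batch */
		unsigned long size;   /* pages submitted in that batch */
		unsigned long max;    /* upper bound on the window, in pages */
	};

	/* Called on a page cache miss at 'index' during a sequential read. */
	static unsigned long ra_next_window(struct ra_window *ra, unsigned long index)
	{
		if (index >= ra->start && index < ra->start + ra->size) {
			/*
			 * The missing page lies inside the window we already read
			 * ahead, so it was reclaimed before the reader got to it:
			 * readahead thrashing.  Shrink the next window.
			 */
			ra->size = ra->size > 8 ? ra->size / 2 : 4;
		} else {
			/* No thrashing seen: ramp the window back up, capped at max. */
			ra->size = ra->size * 2 < ra->max ? ra->size * 2 : ra->max;
		}
		ra->start = index;
		return ra->size;
	}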
>
>
> Andy's OOM memory dump (incorrect_oom_kill.txt.xz) shows that there are
>
> - 8MB active+inactive file pages
> - 160MB active+inactive anon pages
> - 1GB shmem pages
> - 1.4GB unevictable pages
>
> Hmm, why are there so many unevictable pages? How did the shmem
> pages become unevictable when there is plenty of swap space?

That was probably because one of my testcases creates a 1.4GB file on
ramfs. (I can provoke the problem without doing evil things like
that, but the test script is rather reliable at killing my system and
it works fine on my other machines.)
If you want, I can try to generate a trace that isn't polluted with
the evil ramfs file.
--Andy