Message-ID: <20151229163249.GD10321@dhcp22.suse.cz>
Date: Tue, 29 Dec 2015 17:32:50 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
hannes@...xchg.org, mgorman@...e.de, rientjes@...gle.com,
hillf.zj@...baba-inc.com, kamezawa.hiroyu@...fujitsu.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] OOM detection rework v4
On Mon 28-12-15 21:08:56, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > I got OOM killers while running heavy disk I/O (extracting a kernel source
> > tree, running lxr's genxref command). (Environment: 4 CPUs / 2048MB RAM / no swap / XFS)
> > Do you think these OOM killers are reasonable? Too weak against fragmentation?
>
> Well, the current patch invokes the OOM killer when more than 75% of memory is used
> for file cache (active_file: + inactive_file:). I think this is surprising
> for administrators, and we want to retry harder (but not forever,
> please).
Here again, it would be good to see a comparison between the original
and the new behavior. 75% of memory in page cache is certainly
unexpected, but those pages might be pinned for other reasons and so
be unreclaimable and basically IO bound. This is hard to optimize for
without causing undesirable side effects for other loads. I will
have a look at the OOM reports later, but having a comparison would be
a great start.
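
For concreteness, the condition Tetsuo describes could look roughly like
the sketch below. This is illustrative only and not taken from the patch
under discussion: global_page_state(), totalram_pages and the NR_*_FILE
counters are real kernel symbols of this era, but the helper itself and
the hard-coded 3/4 threshold are hypothetical.

#include <linux/mm.h>
#include <linux/vmstat.h>

/*
 * Hypothetical helper: is more than 75% of RAM occupied by
 * file-backed page cache (active + inactive file LRU pages)?
 */
static bool mostly_file_cache(void)
{
	unsigned long file = global_page_state(NR_ACTIVE_FILE) +
			     global_page_state(NR_INACTIVE_FILE);

	return file > totalram_pages * 3 / 4;
}

The point of the discussion is whether reclaim retry logic should keep
trying (bounded, not forever) in such a state instead of declaring OOM.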
Thanks!
--
Michal Hocko
SUSE Labs