Message-Id: <201512310005.DFJ21839.QOOSVFFHMLJOtF@I-love.SAKURA.ne.jp>
Date: Thu, 31 Dec 2015 00:05:48 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: mhocko@...nel.org
Cc: akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
hannes@...xchg.org, mgorman@...e.de, rientjes@...gle.com,
hillf.zj@...baba-inc.com, kamezawa.hiroyu@...fujitsu.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] OOM detection rework v4
Michal Hocko wrote:
> On Mon 28-12-15 21:08:56, Tetsuo Handa wrote:
> > Tetsuo Handa wrote:
> > > I got OOM killers while running heavy disk I/O (extracting the kernel
> > > source, running lxr's genxref command). (Environment: 4 CPUs / 2048MB RAM /
> > > no swap / XFS) Do you think these OOM killers are reasonable? Is the new
> > > logic too weak against fragmentation?
> >
> > Well, the current patch invokes the OOM killer when more than 75% of memory
> > is used for file cache (active_file: + inactive_file:). I think this is
> > surprising for administrators, and we want to retry harder (but not
> > forever, please).
>
> Here again, it would be good to see a comparison between the original
> and the new behavior. 75% of page cache is certainly unexpected, but
> those pages might be pinned for other reasons and so unreclaimable and
> basically IO bound. This is hard to optimize for without causing any
> undesirable side effects for other loads. I will have a look at the oom
> reports later, but having a comparison would be a great start.
Prior to "mm, oom: rework oom detection" patch (the original), this stressor
never invoked the OOM killer. After this patch (the new), this stressor easily
invokes the OOM killer. Both the original and the new case, active_file: +
inactive_file: occupies nearly 75%. I think we lost invisible retry logic for
order > 0 allocation requests.
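
For reference, below is a minimal userspace sketch (my own illustration, not
part of the patch series or of the reproducer) that reads MemTotal:,
Active(file): and Inactive(file): from /proc/meminfo and prints how much of
memory is currently file cache. The ~75% figure above was taken from the OOM
killer reports themselves; this is just a convenient way to watch the ratio
while the stressor runs.

/* filecache_ratio.c: report (Active(file) + Inactive(file)) / MemTotal */
#include <stdio.h>
#include <string.h>

static unsigned long read_kb(const char *key)
{
	FILE *fp = fopen("/proc/meminfo", "r");
	char line[128];
	unsigned long val = 0;

	if (!fp)
		return 0;
	while (fgets(line, sizeof(line), fp)) {
		if (!strncmp(line, key, strlen(key))) {
			/* lines look like "MemTotal:  2048000 kB" */
			sscanf(line + strlen(key), " %lu", &val);
			break;
		}
	}
	fclose(fp);
	return val;
}

int main(void)
{
	unsigned long total = read_kb("MemTotal:");
	unsigned long file = read_kb("Active(file):") +
			     read_kb("Inactive(file):");

	if (!total)
		return 1;
	printf("file cache: %lu kB / %lu kB (%lu%%)\n",
	       file, total, file * 100 / total);
	return 0;
}

Running it under something like "watch -n1" while extracting the kernel source
should make it easy to see whether the file cache share approaches that level
before the OOM killer fires.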