Message-ID: <20160229203502.GW16930@dhcp22.suse.cz>
Date: Mon, 29 Feb 2016 21:35:02 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Mel Gorman <mgorman@...e.de>,
David Rientjes <rientjes@...gle.com>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Hillf Danton <hillf.zj@...baba-inc.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/3] OOM detection rework v4
On Wed 24-02-16 19:47:06, Hugh Dickins wrote:
[...]
> Boot with mem=1G (or boot your usual way, and do something to occupy
> most of the memory: I think /proc/sys/vm/nr_hugepages provides a great
> way to gobble up most of the memory, though it's not how I've done it).
>
> Make sure you have swap: 2G is more than enough. Copy the v4.5-rc5
> kernel source tree into a tmpfs: size=2G is more than enough.
> make defconfig there, then make -j20.
>
> On a v4.5-rc5 kernel that builds fine, on mmotm it is soon OOM-killed.
>
> Except that you'll probably need to fiddle around with that j20:
> it's right for my laptop but not for my workstation. j20 just happens
> to be what I've had there for years, and it's what I now see breaking
> down (I can lower it to j6 to proceed, and perhaps could go a bit
> higher, but then it doesn't exercise swap very much).
I have tried to reproduce this and failed in a virtual machine on my
laptop. I will try another host with more CPUs (my laptop has only
two). Just for the record, this is what I did: boot a 1G machine in kvm
with 2G of swap, reserve 800M for hugetlb pages (I got 445 of them),
then extract the kernel source to tmpfs (-o size=2G), make defconfig
and make -j20 (16 and 10 made no real difference). I was also
collecting vmstat in the background. The compilation takes ages but the
behavior seems consistent and stable.
If I try 900M for huge pages then I do get OOMs, but that happens with
the mmotm tree without my OOM rework patch set as well.
It would be great if you could retry and collect /proc/vmstat data
around the OOM time to see what compaction did. I was using the
attached little program to reduce interference during the OOM (it
doesn't fork, locks its code in memory and preallocates the resulting
file) - e.g. read_vmstat 1s vmstat.log 10M, interrupted by ctrl+c after
the OOM hits.
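For anyone without the attachment handy, the sampling can be
approximated with a plain shell loop. This is a rough stand-in, not the
attached program: unlike read_vmstat.c it forks for every sample and
does nothing to lock itself in memory or preallocate the log, so it may
well stall exactly when the OOM hits.

```shell
#!/bin/sh
# Crude stand-in for the attached read_vmstat tool: append timestamped
# snapshots of /proc/vmstat to a log file, one per second.
# Usage: vmstat_loop.sh [samples] [logfile]
samples=${1:-3}
log=${2:-vmstat.log}

: > "$log"                       # truncate any previous log
i=0
while [ "$i" -lt "$samples" ]; do
    {
        echo "=== $(date +%s) ==="   # epoch timestamp separating samples
        cat /proc/vmstat
    } >> "$log"
    i=$((i + 1))
    sleep 1
done
```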
Thanks!
--
Michal Hocko
SUSE Labs
View attachment "read_vmstat.c" of type "text/x-csrc" (5026 bytes)