Date:   Fri, 7 Oct 2016 11:16:26 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Sangseok Lee <sangseok.lee@....com>
Subject: Re: [PATCH 0/4] use up highorder free pages before OOM

On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> I got an OOM report from the production team with a v4.4 kernel.
> It has enough free memory but fails to allocate an order-0 page and
> finally encounters an OOM kill.
> I could reproduce it easily with my test. Look below.
> The reason is that the free pages (19M) of the DMA32 zone are reserved
> for HIGHORDERATOMIC and are not unreserved before the OOM.

Is this really reproducible?
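
AFAIR in v4.4 the reserve is only given back from the direct reclaim
retry path, roughly like this (paraphrasing __alloc_pages_direct_reclaim()
from memory, not the verbatim code):

	progress = __perform_reclaim(gfp_mask, order, ac);
	if (!progress)
		return NULL;	/* no progress: the unreserve never runs */
retry:
	page = get_page_from_freelist(...);
	if (!page && !drained) {
		unreserve_highatomic_pageblock(ac);
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}

so reclaim that makes no progress at all is one way to reach the OOM
killer without the reserve ever being touched.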

[...]
> active_anon:383949 inactive_anon:106724 isolated_anon:0
>  active_file:15 inactive_file:44 isolated_file:0
>  unevictable:0 dirty:0 writeback:24 unstable:0
>  slab_reclaimable:2483 slab_unreclaimable:3326
>  mapped:0 shmem:0 pagetables:1906 bounce:0
>  free:6898 free_pcp:291 free_cma:0
[...]
> Free swap  = 8kB
> Total swap = 255996kB
> 524158 pages RAM
> 0 pages HighMem/MovableOnly
> 12658 pages reserved
> 0 pages cma reserved
> 0 pages hwpoisoned
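
(With 4K pages the numbers above work out to roughly 2047MB of RAM,
~49MB reserved and ~27MB free, of which ~19MB sits in the highatomic
reserve - i.e. the reserve is under 1% of RAM.)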

From the above you can see that you are pretty much out of memory. There
is basically no pagecache to reclaim, and your anon memory is not
reclaimable either because the swap is basically full. It is true that
the high atomic reserves consume 19MB which could be reused, but that is
less than 1%, especially when you compare it to the amount of reserved
memory.
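
To illustrate why those 19MB are invisible to an order-0 request, here
is a toy userspace model - not the kernel code; the logic is simplified
from __zone_watermark_ok() and the min watermark value is made up:

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Simplified from v4.4 __zone_watermark_ok(): callers without
	 * ALLOC_HARDER must clear the watermark without counting the
	 * highatomic reserve; atomic callers instead get a deeper
	 * watermark. All numbers are in 4K pages.
	 */
	static bool watermark_ok(long free_pages, long min_mark,
				 long highatomic, bool alloc_harder)
	{
		if (!alloc_harder)
			free_pages -= highatomic;	/* reserve invisible */
		else
			min_mark -= min_mark / 4;	/* dig deeper */

		return free_pages > min_mark;	/* order-0: no per-order loop */
	}

	int main(void)
	{
		long free_pages = 6898;		/* "free:6898" from the report */
		long highatomic = 19 * 256;	/* ~19MB reserved, 4K pages */
		long min_mark = 2048;		/* made-up min watermark, ~8MB */

		printf("order-0 without ALLOC_HARDER: %s\n",
		       watermark_ok(free_pages, min_mark, highatomic, false)
		       ? "ok" : "fails");
		printf("order-0 with ALLOC_HARDER: %s\n",
		       watermark_ok(free_pages, min_mark, highatomic, true)
		       ? "ok" : "fails");
		return 0;
	}

Only ALLOC_HARDER (atomic) callers may dip into the reserve, which is
the whole point of reserving those pageblocks in the first place.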

So while I do agree that the potential issues - the misaccounting and
the others you are addressing in the follow-up patch - are good to fix,
I believe that draining the last 19M is not something that would
reliably get you over the edge. Your workload (93% of memory sitting on
the anon LRU with swap full) simply doesn't fit into the amount of
memory you have available.
-- 
Michal Hocko
SUSE Labs
