Message-ID: <alpine.DEB.2.10.1607151447490.121215@chino.kir.corp.google.com>
Date: Fri, 15 Jul 2016 14:58:20 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Mikulas Patocka <mpatocka@...hat.com>
cc: Michal Hocko <mhocko@...nel.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Ondrej Kozina <okozina@...hat.com>,
Jerome Marchand <jmarchan@...hat.com>,
Stanislav Kozina <skozina@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, dm-devel@...hat.com
Subject: Re: System freezes after OOM
On Fri, 15 Jul 2016, Mikulas Patocka wrote:
> And what about the oom reaper? It should have freed all victim's pages
> even if the victim is looping in mempool_alloc. Why the oom reaper didn't
> free up memory?
>
Is that possible with mlock or shared memory? Nope. The oom killer does
not have the benefit of selecting a victim that is guaranteed to free the
most memory or reap the most memory; the choice is configurable by the
user.
> > guarantee that elements would be returned in a completely livelocked
> > kernel in 4.7 or earlier kernels, that would not have been the case. I
>
> And what kind of targets do you use in device mapper in the configuration
> that livelocked? Do you use some custom google-developed drivers?
>
> Please describe the whole stack of block I/O devices when this livelock
> happened.
>
> Most device mapper drivers can really make forward progress when they are
> out of memory, so I'm interested what kind of configuration do you have.
>
Kworkers are processing writeback: ext4_writepages() relies on kmem that
is itself reclaiming memory through kmem_getpages(), and those kworkers
wait on the oom victim to exit, so they endlessly loop in the page
allocator themselves. The same happens with __alloc_skb(), so we can
intermittently lose network access to hundreds of machines. No custom
drivers are required for this to happen; the stack trace of the livelock
victim has already been posted, and this can happen to anything in
filemap_fault() that has TIF_MEMDIE set.
> > frankly don't care about your patch reviewing of dm mempool usage when
> > dm_request() livelocked our kernel.
>
> If it livelocked, it is a bug in some underlying block driver, not a bug
> in mempool_alloc.
>
Lol, the interface is quite clear: it can be modified to allow mempool
users to set __GFP_NOMEMALLOC on their mempool_alloc() request if they can
guarantee that elements will be returned to the freelist in all
situations, including system oom situations. We may revert that ourselves
once we run a post-4.7 kernel, if our machines time out, and report it as
necessary.
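
For reference, a minimal sketch of what that opt-in looks like from the
caller's side (kernel-side sketch, not runnable as-is; `pool` and
`element` stand in for whatever the driver actually allocates):

```c
/* Sketch only: a mempool user that can guarantee its elements are
 * always returned to the freelist, even in a system oom situation,
 * could opt out of the memalloc reserves on its allocation attempts.
 * With __GFP_NOMEMALLOC set, mempool_alloc() waits for an element to
 * be returned to the pool rather than dipping into the reserves. */
element = mempool_alloc(pool, GFP_NOIO | __GFP_NOMEMALLOC);
```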