Message-ID: <20140610115254.GA25631@dhcp22.suse.cz>
Date: Tue, 10 Jun 2014 13:52:54 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Marian Marinov <mm@...com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Johannes Weiner <hannes@...xchg.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Tejun Heo <tj@...nel.org>, linux-mm@...ck.org
Subject: Re: [RFC] oom, memcg: handle sysctl oom_kill_allocating_task while
memcg oom happening
[More people to CC]
On Tue 10-06-14 14:35:02, Marian Marinov wrote:
>
> Hello,
Hi,
> A while back, in 2012, there was a request for this functionality:
> "oom, memcg: handle sysctl oom_kill_allocating_task while memcg oom
> happening".
>
> This is the thread: https://lkml.org/lkml/2012/10/16/168
>
> We now run several machines with around 10k processes on each
> machine, using containers.
>
> We regularly see OOMs from within a container, and they cause
> performance degradation.
What kind of performance degradation and which parts of the system are
affected?
The memcg OOM killer currently runs outside of any locks, so the only
bottleneck I can see is the per-cgroup victim selection, which iterates
over all tasks in the group. Is this what is going on here?
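To illustrate, the scan in question looks roughly like the sketch
below, modeled on the 3.x-era css_task_iter interface (signatures vary
by kernel version; pick_memcg_victim() and oom_badness_estimate() are
hypothetical names standing in for the real selection and scoring
code):

static struct task_struct *pick_memcg_victim(struct mem_cgroup *memcg)
{
	struct css_task_iter it;
	struct task_struct *task, *victim = NULL;
	unsigned long max_points = 0;

	/* Walk every task attached to this memcg: O(tasks in group). */
	css_task_iter_start(&memcg->css, &it);
	while ((task = css_task_iter_next(&it))) {
		/* Hypothetical helper standing in for the real scoring. */
		unsigned long points = oom_badness_estimate(task);

		if (points > max_points) {
			max_points = points;
			victim = task;
		}
	}
	css_task_iter_end(&it);

	/* With ~10k tasks per machine, this walk is where time goes. */
	return victim;
}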
> We are running 3.12.20 with the following OOM configuration and memcg
> oom enabled:
>
> vm.oom_dump_tasks = 0
> vm.oom_kill_allocating_task = 1
> vm.panic_on_oom = 0
>
> When an OOM occurs we see very high load averages and the overall
> responsiveness of the machine degrades.
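(As an aside, not part of the original mail: the vm.* settings quoted
above map directly onto files under /proc/sys, so a minimal C sketch
that applies them, equivalent to "sysctl -w", could look like this.)

#include <stdio.h>
#include <stdlib.h>

/* Write a single value to a sysctl file under /proc/sys. */
static void write_sysctl(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(EXIT_FAILURE);
	}
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	/* Same values as the quoted configuration; needs root. */
	write_sysctl("/proc/sys/vm/oom_dump_tasks", "0");
	write_sysctl("/proc/sys/vm/oom_kill_allocating_task", "1");
	write_sysctl("/proc/sys/vm/panic_on_oom", "0");
	return 0;
}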
What is the system waiting for?
> During these OOM states the load of the machine gradually increases
> from 25 up to 120 over an interval of 10 minutes.
>
> Once we manually bring down the memory usage of a container (by
> killing some tasks), the load drops back to 25 within 5 to 7 minutes.
So the OOM killer is not able to find a victim to kill?
> I read the whole thread from 2012, but I do not see the expected
> behavior that is described by the people who commented on the issue.
Why do you think that killing the allocating task would be helpful in
your case?
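For context, this is roughly what the sysctl does in the *global* OOM
path, paraphrased from mm/oom_kill.c of that era (exact arguments vary
by version). Note that the memcg OOM path in 3.12 did not consult it,
which is what the 2012 RFC addressed:

	/*
	 * With vm.oom_kill_allocating_task=1 the global OOM path kills
	 * current instead of scanning all tasks for the best victim.
	 */
	if (sysctl_oom_kill_allocating_task && current->mm &&
	    !oom_unkillable_task(current, NULL, nodemask) &&
	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
		get_task_struct(current);
		oom_kill_process(current, gfp_mask, order, 0, totalpages,
				 NULL, nodemask,
				 "Out of memory (oom_kill_allocating_task)");
		return;
	}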
> In this case, with real-world usage for this patch, would it be
> considered for inclusion?
I would still prefer to fix the real issue, which is not yet clear from
your description.
--
Michal Hocko
SUSE Labs