Date:	Thu, 05 Jun 2014 23:58:49 +0200
From:	Richard Weinberger <richard@....at>
To:	Michal Hocko <mhocko@...e.cz>
CC:	hannes@...xchg.org, bsingharora@...il.com,
	kamezawa.hiroyu@...fujitsu.com, akpm@...ux-foundation.org,
	vdavydov@...allels.com, tj@...nel.org, handai.szj@...bao.com,
	rientjes@...gle.com, oleg@...hat.com, rusty@...tcorp.com.au,
	kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org,
	cgroups@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC][PATCH] oom: Be less verbose if the oom_control event fd
 has listeners

Am 05.06.2014 18:18, schrieb Michal Hocko:
> On Thu 05-06-14 17:55:54, Richard Weinberger wrote:
>> Am 05.06.2014 17:00, schrieb Michal Hocko:
>>> On Thu 05-06-14 16:00:41, Richard Weinberger wrote:
>>>> Don't spam the kernel logs if the oom_control event fd has listeners.
>>>> In this case there is no need to print that many lines, as user space
>>>> will notice anyway that the memory cgroup has reached its limit.
>>>
>>> But how do you debug why it is reaching the limit and why a particular
>>> process has been killed?
>>
>> In my case it's always because a customer's Java application has gone nuts.
>> So I don't really have to debug a lot. ;-)
>> But I can understand your point.
> 
> If you know that handling the memcg-OOM condition is easy then maybe you
> can not only listen for the OOM notifications but also handle the OOM
> condition yourself and kill the offender. This would mean that the kernel
> doesn't try to kill anything and so wouldn't dump anything to the log.

Basically I don't care what customers run in their containers.
But almost every OOM happens because their Java apps consume too much memory.
Mostly because they don't know exactly how much memory they need, or
because of completely broken JVM heap settings.

All my OOM listener does is send a mail along the lines of "Your container ran out of memory, go figure...".
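For reference, such a listener boils down to the cgroup v1 eventfd notification dance: open an eventfd, open the group's memory.oom_control, and register the pair via cgroup.event_control. The sketch below shows that part only (the cgroup path is an assumption for your hierarchy, and the "send a mail" step is left as a return value):

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <sys/eventfd.h>

/* Build the registration line written to cgroup.event_control:
 * "<eventfd> <fd of memory.oom_control>" */
static int make_event_control_line(char *buf, size_t len, int efd, int ofd)
{
    return snprintf(buf, len, "%d %d", efd, ofd);
}

/* Block until the memcg behind 'cgroup_path' (e.g. a hypothetical
 * /sys/fs/cgroup/memory/container0) reports OOM; return the event
 * count, or -1 on error.  A real listener would send the mail after
 * read() returns. */
static long wait_for_memcg_oom(const char *cgroup_path)
{
    char path[512], line[64];
    uint64_t events;
    int efd, ofd, cfd;

    if ((efd = eventfd(0, 0)) < 0)
        return -1;
    snprintf(path, sizeof(path), "%s/memory.oom_control", cgroup_path);
    ofd = open(path, O_RDONLY);
    snprintf(path, sizeof(path), "%s/cgroup.event_control", cgroup_path);
    cfd = open(path, O_WRONLY);
    if (ofd < 0 || cfd < 0)
        return -1;

    /* Register: the kernel signals efd on every OOM in this memcg. */
    make_event_control_line(line, sizeof(line), efd, ofd);
    if (write(cfd, line, strlen(line)) < 0)
        return -1;
    close(cfd);

    if (read(efd, &events, sizeof(events)) != sizeof(events))
        return -1;
    return (long)events;   /* OOM events since registration */
}
```

Needs root (or delegated write access to cgroup.event_control) and a mounted v1 memory hierarchy, obviously.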

>>> If we are printing too much then OK, let's remove those parts which are
>>> not that useful but hiding information which tells us more about the oom
>>> decision doesn't sound right to me.
>>
>> What about adding a sysctl like "vm.oom_verbose"?
>> By default it would be 1.
>> If set to 0, the full OOM information is only printed if nobody listens
>> to the event fd.
> 
> If we have a knob then I guess it should be global and shared by memcg
> as well. I can imagine that somebody might be interested only in the
> tasks dump, while somebody would like to see LRU states and other memory
> counters. So ideally it would be a bitmask of things to output. I do not
> think that a memcg-specific solution is good, though.

I'm not sure such a fine-grained setting is really useful.
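For concreteness, the bitmask knob you propose would presumably just gate the individual sections of the OOM report. A user-space sketch (the flag names and the knob itself are invented for illustration; in the kernel the sections would roughly correspond to the task dump, the memory-counter dump and the backtrace):

```c
#include <stdio.h>

/* Hypothetical bits for a global "what to dump on OOM" sysctl; nothing
 * like this exists today, the names are made up. */
#define OOM_DUMP_TASKS    0x1   /* per-task table (pid, rss, oom score) */
#define OOM_DUMP_MEMINFO  0x2   /* LRU state and memory counters */
#define OOM_DUMP_STACK    0x4   /* backtrace of the allocating task */

/* Print only the report sections the admin asked for; returns how many
 * sections were emitted. */
static int dump_oom_report(unsigned int flags)
{
    int sections = 0;

    if (flags & OOM_DUMP_TASKS) {
        printf("[ pid ]   uid  total_vm      rss name\n");
        sections++;
    }
    if (flags & OOM_DUMP_MEMINFO) {
        printf("active_anon:... inactive_anon:... (memory counters)\n");
        sections++;
    }
    if (flags & OOM_DUMP_STACK) {
        printf("Call Trace: ...\n");
        sections++;
    }
    return sections;
}
```

So an admin who only cares about the task table would set the knob to 0x1 and the rest of the report would be suppressed.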

Thanks,
//richard
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
