Message-ID: <9174f087-3f6f-f0ed-6009-509d4436a47a@i-love.sakura.ne.jp>
Date: Fri, 12 Oct 2018 21:10:40 +0900
From: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To: Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>
Cc: linux-mm@...ck.org, syzkaller-bugs@...glegroups.com, guro@...com,
kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org,
rientjes@...gle.com, yang.s@...baba-inc.com,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH] memcg, oom: throttle dump_header for memcg ooms
without eligible tasks
On 2018/10/12 21:08, Michal Hocko wrote:
>> So not more than 10 dumps in each 5s interval. That looks reasonable
>> to me. By the time it starts dropping data you have more than enough
>> information to go on already.
>
> Yeah. Unless we have a storm coming from many different cgroups in
> parallel. But even then we have the allocation context for each OOM so
> we are not losing everything. Should we ever tune this, it can be done
> later with some explicit examples.
>
>> Acked-by: Johannes Weiner <hannes@...xchg.org>
>
> Thanks! I will post the patch to Andrew early next week.
>
How do you handle environments where one dump takes e.g. 3 seconds?
Counting the delay from the first message of the previous dump to the
first message of the next dump is not safe. Unless we count the delay
from the last message of the previous dump to the first message of the
next one, we cannot guarantee that the system won't lock up due to
printk() flooding.