Message-ID: <CALYGNiPVcbzJxxwcUVfCOKA56xyV+r8B7DjU4aVhSqAXor2w7Q@mail.gmail.com>
Date: Wed, 25 Feb 2015 01:16:49 +0300
From: Konstantin Khlebnikov <koct9i@...il.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
"\\Rafael J. Wysocki\\" <rjw@...ysocki.net>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm, oom: do not fail __GFP_NOFAIL allocation if oom killer is disabled

On Wed, Feb 25, 2015 at 1:09 AM, Konstantin Khlebnikov <koct9i@...il.com> wrote:
> On Tue, Feb 24, 2015 at 10:11 PM, Johannes Weiner <hannes@...xchg.org> wrote:
>> On Tue, Feb 24, 2015 at 07:19:24PM +0100, Michal Hocko wrote:
>>> Tetsuo Handa has pointed out that __GFP_NOFAIL allocations might fail
>>> after OOM killer is disabled if the allocation is performed by a
>>> kernel thread. This behavior was introduced from the very beginning by
>>> 7f33d49a2ed5 (mm, PM/Freezer: Disable OOM killer when tasks are frozen).
>>> This means that the basic contract for the allocation request is broken
>>> and the context requesting such an allocation might blow up unexpectedly.
>>>
>>> There are basically two ways forward.
>>> 1) move oom_killer_disable after kernel threads are frozen. This has a
>>> risk that the OOM victim wouldn't be able to finish because it would
>>> depend on an already frozen kernel thread. This would be really
>>> tricky to debug.
>>> 2) do not fail __GFP_NOFAIL allocations no matter what and accept the
>>> risk that freezable kernel threads will loop and fail the suspend.
>>> Incidental allocations after kernel threads are frozen will at least
>>> dump a warning - if we are lucky and the serial console is still
>>> active, of course...
>>>
>>> This patch implements the latter option because it is safer. We would see
>>> warnings rather than allocation failures for the kernel threads which
>>> would otherwise blow up, and we would have a better chance of identifying
>>> __GFP_NOFAIL users in deeper PM code.
>>>
>>> Signed-off-by: Michal Hocko <mhocko@...e.cz>
>>> ---
>>>
>>> We haven't seen any bug reports
>>>
>>> mm/oom_kill.c | 8 ++++++++
>>> 1 file changed, 8 insertions(+)
>>>
>>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
>>> index 642f38cb175a..ea8b443cd871 100644
>>> --- a/mm/oom_kill.c
>>> +++ b/mm/oom_kill.c
>>> @@ -772,6 +772,10 @@ out:
>>>  		schedule_timeout_killable(1);
>>>  }
>>>
>>> +static DEFINE_RATELIMIT_STATE(oom_disabled_rs,
>>> +		DEFAULT_RATELIMIT_INTERVAL,
>>> +		DEFAULT_RATELIMIT_BURST);
>>> +
>>>  /**
>>>   * out_of_memory - tries to invoke OOM killer.
>>>   * @zonelist: zonelist pointer
>>> @@ -792,6 +796,10 @@ bool out_of_memory(struct zonelist *zonelist, gfp_t gfp_mask,
>>>  	if (!oom_killer_disabled) {
>>>  		__out_of_memory(zonelist, gfp_mask, order, nodemask, force_kill);
>>>  		ret = true;
>>> +	} else if (gfp_mask & __GFP_NOFAIL) {
>>> +		if (__ratelimit(&oom_disabled_rs))
>>> +			WARN(1, "Unable to make forward progress for __GFP_NOFAIL because OOM killer is disabled\n");
>>> +		ret = true;
>>
>> I'm fine with keeping the allocation looping, but is that message
>> helpful? It seems completely useless to the user encountering it. Is
>> it going to help kernel developers when we get a bug report with it?
>>
>> WARN_ON_ONCE()?
>
> maybe panic() ?
>
> If somebody turns off the OOM killer, it seems they are pretty sure they
> have enough memory.
Ah, that's used in the freeze/suspend code. I thought it was some kind of
sysctl for brave sysadmins.
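
(Referring back to the WARN_ON_ONCE() idea above: a minimal sketch of how
that branch could look, assuming the same out_of_memory() hunk as in the
patch; untested and only meant to illustrate the suggestion, not a
proposed replacement.)

	} else if (WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) {
		/*
		 * Keep looping for __GFP_NOFAIL allocations, but warn only
		 * once instead of keeping a dedicated ratelimit state around.
		 * WARN_ON_ONCE() returns its condition, so ret is only set
		 * for __GFP_NOFAIL requests, as in the original hunk.
		 */
		ret = true;
	}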