Message-ID: <20160106131755.GB13900@dhcp22.suse.cz>
Date:	Wed, 6 Jan 2016 14:17:56 +0100
From:	Michal Hocko <mhocko@...nel.org>
To:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:	akpm@...ux-foundation.org, mgorman@...e.de, rientjes@...gle.com,
	torvalds@...ux-foundation.org, oleg@...hat.com, hughd@...gle.com,
	andrea@...nel.org, riel@...hat.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] sysrq: ensure manual invocation of the OOM
 killer under OOM livelock

On Wed 06-01-16 20:49:23, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Tue 05-01-16 17:22:46, Michal Hocko wrote:
> > > On Wed 30-12-15 15:33:47, Tetsuo Handa wrote:
> > [...]
> > > > I wish for a kernel thread that does OOM-kill operation.
> > > > Maybe we can change the OOM reaper kernel thread to do it.
> > > > What do you think?
> > > 
> > > I do not think a separate kernel thread would help much if the
> > > allocations have to keep looping in the allocator. oom_reaper is a
> > > separate kernel thread only due to locking required for the exit_mmap
> > > path.
> > 
> > Let me clarify what I've meant here. What you actually want is to do
> > select_bad_process and oom_kill_process (including oom_reap_vmas) in
> > the kernel thread context, right?
> 
> Right.

It still seems we were not on the same page. I thought you wanted
_all_ OOM killer handling to be done from the kernel thread, while you
only cared about the sysrq+f case. Your patch below sounds like a
reasonable compromise to me. It conflates two different things, but
they are not that different in principle, so I guess this could be
acceptable. Maybe s@..._reaper@...nc_oom_killer@ would be more
appropriate to reflect that fact.

[...]

> While testing above patch, I once hit depletion of memory reserves.
[...]
> Complete log is at http://I-love.SAKURA.ne.jp/tmp/serial-20160106.txt.xz .
> 
> I don't think this depletion was caused by above patch because the last
> invocation was not SysRq-f.

Yes I agree this is not related to the patch.

> I believe we should add a workaround for
> the worst case now. It will be impossible to add one after we have
> made the code more and more difficult to test.
> 
> >                               We would have to handle queuing of the
> > oom requests because multiple oom killers might be active in different
> > allocation domains (cpusets, memcgs) so I am not so sure this would be a
> > great win in the end. But I haven't tried to do it so I might be wrong
> > and it will turn out to be much easier than I expect.
> 
> I could not catch what you want to say.

I was contemplating doing all of the OOM killer handling from within
the kernel thread, as that was my understanding of what you were proposing.

> If you are worrying about failing
> to call oom_reap_vmas() for second victim due to invoking the OOM killer
> again before mm_to_reap is updated from first victim to NULL, we can walk
> the process list.
[...]

Thanks!
-- 
Michal Hocko
SUSE Labs
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
