Message-Id: <201504291448.GDH51070.OOOFMFVHLStQFJ@I-love.SAKURA.ne.jp>
Date:	Wed, 29 Apr 2015 14:48:21 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	rientjes@...gle.com, hannes@...xchg.org
Cc:	akpm@...ux-foundation.org, mhocko@...e.cz, aarcange@...hat.com,
	david@...morbit.com, vbabka@...e.cz, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/9] mm: oom_kill: simplify OOM killer locking

David Rientjes wrote:
> It's not vital and somewhat unrelated to your patch, but if we can't grab 
> the mutex with the trylock in __alloc_pages_may_oom() then I think it 
> would be more correct to do schedule_timeout_killable() rather than 
> uninterruptible.  I just mention it if you happen to go through another 
> revision of the series and want to switch it at the same time.
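
For reference, I read the suggestion as something like the following change on
the mutex_trylock(&oom_lock) failure path in __alloc_pages_may_oom() (just a
sketch against this series, not a tested patch):

	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		/* Sleep killably instead of uninterruptibly while the
		 * oom_lock holder makes progress for us. */
		schedule_timeout_killable(1);
		return NULL;
	}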

It is a difficult choice. A killable sleep is a good thing if

  (1) the OOM victim is the current thread, or
  (2) the OOM victim is waiting for the current thread to release a lock,

but it is a bad thing otherwise. And currently, (2) does not hold, because the
current thread cannot access the memory reserves while it is blocking the OOM
victim. If fatal_signal_pending() threads can access a portion of the memory
reserves (as I said

  I don't like allowing only TIF_MEMDIE threads to get reserve access, for it can
  be a !TIF_MEMDIE thread which really needs memory to safely terminate without
  failing allocations from do_exit(). Rather, why not discontinue TIF_MEMDIE
  handling and allow access to private memory reserves for all
  fatal_signal_pending() threads (i.e. replacing WMARK_OOM with WMARK_KILLED
  in "[patch 09/12] mm: page_alloc: private memory reserves for OOM-killing
  allocations")?

at https://lkml.org/lkml/2015/3/27/378 ), (2) will become true.
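
To make that concrete, I have something like this in mind (only a sketch; the
ALLOC_WMARK_KILLED flag does not exist and is my invention here, on top of the
existing TIF_MEMDIE -> ALLOC_NO_WATERMARKS handling in gfp_to_alloc_flags()):

	/*
	 * Keep today's behaviour of giving TIF_MEMDIE full access to the
	 * reserves, but also let any fatal_signal_pending() thread dip into
	 * a dedicated WMARK_KILLED reserve below WMARK_MIN, so that a thread
	 * which blocks the OOM victim can still make forward progress and
	 * release the lock.
	 */
	if (unlikely(test_thread_flag(TIF_MEMDIE)))
		alloc_flags |= ALLOC_NO_WATERMARKS;
	else if (unlikely(fatal_signal_pending(current)))
		alloc_flags |= ALLOC_WMARK_KILLED;	/* hypothetical */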

Of course, the threads which the OOM victim is waiting for may not have
SIGKILL pending. WMARK_KILLED helps if the lock contention is happening
among threads sharing the same mm struct, but does not help otherwise.

Well, what about introducing WMARK_OOM as a memory reserve which can be
accessed while atomic_read(&oom_victims) > 0? That way, we can choose the
next OOM victim upon reaching WMARK_OOM.
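
Roughly (again only a sketch; WMARK_OOM here would be a per-zone watermark
below WMARK_MIN, checked where get_page_from_freelist() already compares
against the zone watermarks):

	/*
	 * While at least one OOM victim exists, allow allocations to go
	 * below WMARK_MIN down to WMARK_OOM, so that threads blocking the
	 * victim can make progress.  If even WMARK_OOM cannot be met, the
	 * reserve is exhausted and we can select the next OOM victim
	 * instead of waiting forever.
	 */
	mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
	if (atomic_read(&oom_victims) > 0)
		mark = min(mark, zone->watermark[WMARK_OOM]);
	if (!zone_watermark_ok(zone, order, mark, classzone_idx, alloc_flags)) {
		/* ... reclaim; on failure, pick the next OOM victim ... */
	}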
