Date:	Tue, 28 Apr 2015 15:55:35 +0200
From:	Michal Hocko <mhocko@...e.cz>
To:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:	hannes@...xchg.org, akpm@...ux-foundation.org, aarcange@...hat.com,
	david@...morbit.com, rientjes@...gle.com, vbabka@...e.cz,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/9] mm: improve OOM mechanism v2

On Tue 28-04-15 19:34:47, Tetsuo Handa wrote:
[...]
> [PATCH 8/9] makes allocation of __GFP_FS pages extremely slow (5 seconds
> per page) because out_of_memory(), serialized by the oom_lock, sleeps for
> 5 seconds before returning true when the OOM victim got stuck. This throttling
> also slows down !__GFP_FS allocations whenever some thread is doing a __GFP_FS
> allocation, because __alloc_pages_may_oom() is serialized by the oom_lock
> regardless of gfp_mask.

This is indeed unnecessary.

> How long will the OOM victim be blocked when the
> allocating task needs to allocate e.g. 1000 !__GFP_FS pages before allowing
> the OOM victim waiting at mutex_lock(&inode->i_mutex) to continue? At 5
> seconds per page that is over 80 minutes: a stall so long that it is
> effectively a deadlock for users. I think we should not sleep with the
> oom_lock held.

I do not see why sleeping with the oom_lock held would be a problem. It
simply doesn't make much sense to try to trigger the OOM killer while
there are OOM victims still exiting.

> Also, allowing any !fatal_signal_pending() threads doing __GFP_FS allocations
> (e.g. malloc() + memset()) to dip into the reserves will deplete them while the
> OOM victim is blocked by a thread doing a !__GFP_FS allocation, because
> [PATCH 9/9] does not allow !test_thread_flag(TIF_MEMDIE) threads doing
> !__GFP_FS allocations to access the reserves. Of course, updating [PATCH 9/9]
> like
> 
> -+     if (*did_some_progress)
> -+          alloc_flags |= ALLOC_NO_WATERMARKS;
>   out:
> ++     if (*did_some_progress)
> ++          alloc_flags |= ALLOC_NO_WATERMARKS;
>        mutex_unlock(&oom_lock);
> 
> (which would grant "no watermark" access without invoking the OOM killer) is
> obviously wrong. I think we should not allow __GFP_FS allocations to
> access the reserves when the OOM victim is blocked.
> 
> By the way, I came up with an idea (incomplete patch on top of patches up to
> 7/9 is shown below) while trying to avoid sleeping with the oom_lock held.
> This patch is meant for
> 
>   (1) blocking_notifier_call_chain(&oom_notify_list) is called after
>       the OOM killer is disabled in order to increase possibility of
>       memory allocation to succeed.

How do you guarantee that the notifier doesn't wake up any process and
break the oom_disable guarantee?

>   (2) oom_kill_process() can determine when to kill next OOM victim.
> 
>   (3) oom_scan_process_thread() can take TIF_MEMDIE timeout into
>       account when choosing an OOM victim.

You have heard my opinions about this and I do not plan to repeat them
here again.

[...]
-- 
Michal Hocko
SUSE Labs
