Message-ID: <alpine.LSU.2.11.1704051331420.4288@eggly.anvils>
Date:   Wed, 5 Apr 2017 13:59:49 -0700 (PDT)
From:   Hugh Dickins <hughd@...gle.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Is it safe for kthreadd to drain_all_pages?

Hi Mel,

I suspect that it's not safe for kthreadd to drain_all_pages();
but I haven't studied flush_work() etc, so I don't really know what
I'm talking about: I'm hoping that you will jump to a realization.

4.11-rc has been giving me hangs after hours of swapping load.  At
first they looked like memory leaks ("fork: Cannot allocate memory");
but for no good reason I happened to do "cat /proc/sys/vm/stat_refresh"
before looking at /proc/meminfo one time, and the stat_refresh got stuck
in D state, waiting for completion of flush_work like many kworkers,
while kthreadd was waiting for completion of flush_work in drain_all_pages().
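For reference, the pattern I have in mind is roughly the tail end of
drain_all_pages() as I remember it from the 4.11-rc source (a paraphrase
from memory, so the work item and helper names may be slightly off):
queue a drain work item on each cpu with pages in its pcp lists, then
flush_work() each of them.

	/*
	 * Sketch only, not the exact 4.11-rc5 code: queue per-cpu drain
	 * work, then wait for every item with flush_work().  My worry is
	 * the case where the caller is kthreadd itself: if the workqueue
	 * needs a new worker thread to run the drain work, only kthreadd
	 * can create that thread, and then the flush_work() below would
	 * never complete.
	 */
	for_each_cpu(cpu, &cpus_with_pcps) {
		struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);

		INIT_WORK(work, drain_local_pages_wq);
		schedule_work_on(cpu, work);
	}
	for_each_cpu(cpu, &cpus_with_pcps)
		flush_work(per_cpu_ptr(&pcpu_drain, cpu));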

But I only noticed that pattern later: I originally tried to bisect
rc1 before rc2 came out, but underestimated how long to wait before
declaring a stage good - I thought 12 hours, but would now say 2 days.
Too late for bisection now; I suspect your drain_all_pages() changes.

(I've also found order:0 page allocation stalls in /var/log/messages,
148804ms being a nice example: which suggests that these hangs are
perhaps a condition the system can sometimes get out of by itself.
None with the patch.)

Patch below has been running well for 36 hours now:
a bit too early to be sure, but I think it's time to turn to you.


[PATCH] mm: don't let kthreadd drain_all_pages

4.11-rc has been giving me hangs after many hours of swapping load: most
kworkers waiting for completion of a flush_work, and kthreadd waiting for
completion of flush_work in drain_all_pages (while doing copy_process).
I suspect that kthreadd should not be allowed to drain_all_pages().

Signed-off-by: Hugh Dickins <hughd@...gle.com>
---

 mm/page_alloc.c |    2 ++
 1 file changed, 2 insertions(+)

--- 4.11-rc5/mm/page_alloc.c	2017-03-13 09:08:37.743209168 -0700
+++ linux/mm/page_alloc.c	2017-04-04 00:33:44.086867413 -0700
@@ -2376,6 +2376,8 @@ void drain_all_pages(struct zone *zone)
 	/* Workqueues cannot recurse */
 	if (current->flags & PF_WQ_WORKER)
 		return;
+	if (current == kthreadd_task)
+		return;
 
 	/*
 	 * Do not drain if one is already in progress unless it's specific to
