Date:	Tue, 24 Nov 2015 15:44:48 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Michal Hocko <mhocko@...nel.org>
Cc:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
	Tejun Heo <tj@...nel.org>,
	Christoph Lameter <clameter@....com>,
	Arkadiusz Miśkiewicz <arekm@...en.pl>,
	<linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
	Michal Hocko <mhocko@...e.com>, Joonsoo Kim <js1304@...il.com>,
	Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH] mm, vmstat: Allow WQ concurrency to discover memory
 reclaim doesn't make any progress

On Thu, 19 Nov 2015 13:30:53 +0100 Michal Hocko <mhocko@...nel.org> wrote:

> From: Michal Hocko <mhocko@...e.com>
> 
> Tetsuo Handa has reported that the system can basically livelock in an
> OOM condition without triggering the OOM killer. The issue is caused by
> an internal dependency of the direct reclaim on vmstat counter updates
> (via zone_reclaimable), which are performed from workqueue context. If
> all the current workers get assigned to an allocation request, though,
> they will loop inside the allocator trying to reclaim memory, but
> zone_reclaimable can see stalled numbers, so it will consider a zone
> reclaimable even though it has been scanned way too much. The WQ
> concurrency logic will not treat this situation as a congested
> workqueue because it relies on the assumption that a stuck worker would
> have to sleep. This also means that it doesn't try to spawn new workers
> or invoke the rescuer thread if one is assigned to the queue.
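(For reference, the dependency in question looks roughly like this in
mm/vmscan.c of this era, simplified. Both sides of the comparison are
vmstat counters, so when the vmstat fold-back work cannot run, the check
keeps returning a stale answer:

	static bool zone_reclaimable(struct zone *zone)
	{
		/*
		 * NR_PAGES_SCANNED is only folded back from the per-cpu
		 * diffs by the vmstat work item.  If that work never
		 * runs, the zone keeps looking reclaimable forever.
		 */
		return zone_page_state(zone, NR_PAGES_SCANNED) <
			zone_reclaimable_pages(zone) * 6;
	}
)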
> 
> In order to fix this issue we need to do two things. First, we have to
> let the wq concurrency code know that we are in trouble, so we have to
> do a short sleep. To avoid reintroducing the issues handled by commit
> 0e093d99763e ("writeback: do not sleep on the congestion queue if there
> are no congested BDIs or if significant congestion is not being
> encountered in the current zone") we limit the sleep to worker threads,
> which are the ones of interest anyway.
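(Per the patch, the worker-only sleep amounts to roughly the following
in the not-congested path of wait_iff_congested():

	/*
	 * A looping kworker never sleeps, so the WQ concurrency logic
	 * never notices that its pool has stalled.  Force a short sleep
	 * for workers instead of a plain cond_resched().
	 */
	if (current->flags & PF_WQ_WORKER)
		schedule_timeout_uninterruptible(1);
	else
		cond_resched();
)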
> 
> The second thing to do is to create a dedicated workqueue for vmstat and
> mark it WQ_MEM_RECLAIM to note that it participates in memory reclaim
> and to guarantee a rescuer thread for it.
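(Roughly, in mm/vmstat.c. WQ_MEM_RECLAIM guarantees the queue a rescuer
thread, so the counter fold-back can make progress even when every
regular kworker is stuck in the allocator:

	static struct workqueue_struct *vmstat_wq;

	/* at init time */
	vmstat_wq = alloc_workqueue("vmstat",
				    WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);

	/* and the periodic update is queued there rather than on the
	 * system-wide workqueue */
	queue_delayed_work(vmstat_wq, this_cpu_ptr(&vmstat_work),
			   round_jiffies_relative(sysctl_stat_interval));
)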

This vmstat update thing is becoming a problem.  Please see Joonsoo's
"mm/vmstat: retrieve more accurate vmstat value".

Joonsoo, might this patch help with that issue?

> 
> The original issue reported by Tetsuo [1] has seen multiple attempts at
> a fix, the easiest one being [2], which was targeted at the particular
> problem. There was a more general concern that looping inside the
> allocator without ever sleeping breaks the basic assumption of the
> worker concurrency logic, so the fix should be more general. Another
> attempt [3] therefore added a short (1 jiffy) sleep into the page
> allocator. This would, however, introduce sleeping for all callers of
> the page allocator, which is not really needed. This patch tries to be
> a compromise and introduces sleeping only where it matters - for
> kworkers.
> 
> Even though we haven't seen bug reports in the past I would suggest
> backporting this to the stable trees. AFAICS the issue has been present
> since we stopped using congestion_wait in the retry loop, because both
> the WQ concurrency logic and the vmstat workqueue based refresh are
> older than that change.

hm, I'm reluctant.  If the patch fixes something that real people are
really hurting from then yes.  But I suspect this is just one fly-swat
amongst many.


