Date: Fri, 17 Feb 2017 10:37:44 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Hugh Dickins <hughd@...gle.com>, Tim Chen <tim.c.chen@...ux.intel.com>,
	Minchan Kim <minchan@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
	<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: swap_cluster_info lockdep splat

"Huang, Ying" <ying.huang@...el.com> writes:

> Hi, Hugh,
>
> Hugh Dickins <hughd@...gle.com> writes:
>
>> On Thu, 16 Feb 2017, Tim Chen wrote:
>>>
>>> > I do not understand your zest for putting wrappers around every little
>>> > thing, making it all harder to follow than it need be. Here's the patch
>>> > I've been running with (but you have a leak somewhere, and I don't have
>>> > time to search out and fix it: please try sustained swapping and swapoff).
>>> >
>>>
>>> Hugh, trying to duplicate your test case. So you were doing swapping,
>>> then swap off, swap on the swap device and restart swapping?
>>
>> Repeated pair of make -j20 kernel builds in 700M RAM, 1.5G swap on SSD,
>> 8 cpus; one of the builds in tmpfs, the other in ext4 on loop on a tmpfs
>> file; sizes tuned for plenty of swapping but no OOMing (it's an ancient
>> 2.6.24 kernel I build, a modern one needing a lot more space with a lot
>> less in use).
>>
>> How much of that is relevant I don't know: hopefully none of it, it's
>> hard to get the tunings right from scratch. To answer your specific
>> question: yes, I'm not doing concurrent swapoffs in this test showing
>> the leak, just waiting for each of the pair of builds to complete,
>> then tearing down the trees, doing swapoff followed by swapon, and
>> starting a new pair of builds.
>>
>> Sometimes it's the swapoff that fails with ENOMEM, more often it's a
>> fork during a build that fails with ENOMEM: after 6 or 7 hours of load
>> (but timings show it getting slower leading up to that). /proc/meminfo
>> did not give me an immediate clue; Slab didn't look surprising, but
>> I may not have studied it closely enough.
>
> Thanks for your information!
>
> Memory newly allocated in the mm-swap series is allocated via vmalloc;
> could you find anything special for vmalloc in /proc/meminfo?

I found a potential issue in the mm-swap series; could you try the patch
below?

Best Regards,
Huang, Ying

----------------------------------------------------->
From 943494339bd5bc321b8f36f286bc143ac437719b Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@...el.com>
Date: Fri, 17 Feb 2017 10:31:37 +0800
Subject: [PATCH] Debug memory leak

---
 mm/swap_state.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 2126e9ba23b2..473b71e052a8 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -333,7 +333,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * else swap_off will be aborted if we return NULL.
 		 */
 		if (!__swp_swapcount(entry) && swap_slot_cache_enabled)
-			return NULL;
+			break;
 
 		/*
 		 * Get a new page to read into from swap.
-- 
2.11.0
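For readers without the mm-swap tree handy: the reason a bare return NULL
at that point can leak memory is the shape of the retry loop in
__read_swap_cache_async(). A page may already have been allocated on an
earlier pass through the loop, and it is only released on the common exit
path after the loop, so an early return skips that release while break
still reaches it. The sketch below is a simplified userspace analogue of
that pattern, not the kernel code; the function and variable names are
made up for illustration.

#include <stdlib.h>
#include <string.h>

/*
 * Simplified userspace analogue (hypothetical) of the retry loop in
 * __read_swap_cache_async(): a buffer is allocated lazily inside the
 * loop and released only on the common exit path after the loop.
 * An early "return NULL;" from a later iteration would skip that
 * release and leak the buffer; "break" still reaches the cleanup.
 */
static char *lookup_or_populate(int give_up_after_alloc)
{
	char *found = NULL;	/* plays the role of found_page */
	char *scratch = NULL;	/* plays the role of new_page */
	int tries = 0;

	do {
		/* In this sketch the cache lookup always misses. */

		/* The condition the debug patch touches. */
		if (give_up_after_alloc && scratch)
			break;	/* a "return NULL;" here would leak scratch */

		/* Lazily allocate the buffer we would populate. */
		if (!scratch) {
			scratch = malloc(64);
			if (!scratch)
				break;
			strcpy(scratch, "read from swap");
		}
	} while (++tries < 3);

	/* Common exit path: drop the scratch buffer if never consumed. */
	free(scratch);
	return found;
}

int main(void)
{
	/* Neither call leaks: both failure paths reach the cleanup. */
	lookup_or_populate(0);
	lookup_or_populate(1);
	return 0;
}

Run under a leak checker such as valgrind, both calls come back clean
because every failure path funnels through the single free() at the
bottom, which is the behaviour the one-line change above aims to restore
in the kernel function.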