Message-ID: <CAJd=RBDa4LT1gbh6zPx+bzoOtSUeX=puJe6DVC-WyKoF4nw-dg@mail.gmail.com>
Date:	Sun, 25 Dec 2011 17:09:43 +0800
From:	Hillf Danton <dhillf@...il.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	nowhere <nowhere@...kenden.ath.cx>, Michal Hocko <mhocko@...e.cz>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: Kswapd in 3.2.0-rc5 is a CPU hog

On Sat, Dec 24, 2011 at 4:45 AM, Dave Chinner <david@...morbit.com> wrote:
[...]
>
> Ok, it's not a shrink_slab() problem - it's just being called every
> ~100us by kswapd. The pattern is:
>
>        - reclaim 94 (batches of 32,32,30) pages from inactive list
>          of zone 1, node 0, prio 12
>        - call shrink_slab
>                - scan all caches
>                - all shrinkers return 0 saying nothing to shrink
>        - 40us gap
>        - reclaim 10-30 pages from inactive list of zone 2, node 0, prio 12
>        - call shrink_slab
>                - scan all caches
>                - all shrinkers return 0 saying nothing to shrink
>        - 40us gap
>        - isolate 9 pages from LRU zone ?, node ?, none isolated, none freed
>        - isolate 22 pages from LRU zone ?, node ?, none isolated, none freed
>        - call shrink_slab
>                - scan all caches
>                - all shrinkers return 0 saying nothing to shrink
>        - 40us gap
>
> And it just repeats over and over again. After a while, nid=0,zone=1
> drops out of the traces, so reclaim only comes in batches of 10-30
> pages from zone 2 between each shrink_slab() call.
>
> The trace starts at 111209.881s, with 944776 pages on the LRUs. It
> finishes at 111216.1s with kswapd going to sleep on node 0 with
> 930067 pages on the LRU. So roughly 7 seconds to free 15,000 pages
> (call it 2,000 pages/s), which is awfully slow....
>
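
For reference, shrink_slab() in this trace is walking the pre-3.12 shrinker
interface, where a single callback serves both the "how much could you free?"
query and the actual scan. A rough sketch of a shrinker whose query answer is
0, the "nothing to shrink" case above, might look like the following; the
names are illustrative only, not code from this thread:

#include <linux/shrinker.h>

/* One callback handles both roles: nr_to_scan == 0 means "report how
 * many objects you could free", anything else means "free up to that
 * many now and report what is left". */
static int example_shrink(struct shrinker *s, struct shrink_control *sc)
{
	if (!sc->nr_to_scan)
		return 0;	/* count query: no freeable objects cached */

	/* otherwise: try to free up to sc->nr_to_scan objects here */
	return 0;		/* number of objects still cached */
}

static struct shrinker example_shrinker = {
	.shrink	= example_shrink,
	.seeks	= DEFAULT_SEEKS,
};

/* register_shrinker(&example_shrinker) at init time,
 * unregister_shrinker(&example_shrinker) on teardown. */

When every registered shrinker answers the query with 0, each shrink_slab()
pass between the small reclaim batches is pure overhead, which matches the
pattern described above.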
Hi all,

Hopefully the added debug info below is helpful.

Hillf
---

--- a/mm/memcontrol.c	Fri Dec  9 21:57:40 2011
+++ b/mm/memcontrol.c	Sun Dec 25 17:08:14 2011
@@ -1038,7 +1038,11 @@ void mem_cgroup_lru_del_list(struct page
 		memcg = root_mem_cgroup;
 	mz = page_cgroup_zoneinfo(memcg, page);
 	/* huge page split is done under lru_lock. so, we have no races. */
-	MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
+	if (WARN_ON_ONCE(MEM_CGROUP_ZSTAT(mz, lru) <
+				(1 << compound_order(page))))
+		MEM_CGROUP_ZSTAT(mz, lru) = 0;
+	else
+		MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
 }

 void mem_cgroup_lru_del(struct page *page)
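
The hunk above replaces the unconditional decrement with a warn-once-and-clamp:
if the per-zone LRU stat would drop below the (possibly compound) page's worth
of pages, WARN_ON_ONCE fires and the counter is pinned at zero instead of being
allowed to wrap around. A minimal user-space sketch of the same guard pattern,
with illustrative names only:

#include <stdio.h>

static unsigned long zstat;	/* stand-in for MEM_CGROUP_ZSTAT(mz, lru) */

/* nr plays the role of 1 << compound_order(page) */
static void zstat_sub(unsigned long nr)
{
	static int warned;

	if (zstat < nr) {		/* would wrap below zero */
		if (!warned) {
			warned = 1;
			fprintf(stderr, "zstat underflow: %lu < %lu\n",
				zstat, nr);
		}
		zstat = 0;		/* clamp and keep going */
	} else {
		zstat -= nr;
	}
}

This keeps the accounting from wrapping to a huge value while still flagging
the first occurrence, so the trace shows where the imbalance is introduced.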
