Message-ID: <1324808519.29243.8.camel@hakkenden.homenet>
Date: Sun, 25 Dec 2011 14:21:59 +0400
From: "Nikolay S." <nowhere@...kenden.ath.cx>
To: Hillf Danton <dhillf@...il.com>
Cc: Dave Chinner <david@...morbit.com>, Michal Hocko <mhocko@...e.cz>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: Kswapd in 3.2.0-rc5 is a CPU hog
On Sun, 25/12/2011 at 17:09 +0800, Hillf Danton wrote:
> On Sat, Dec 24, 2011 at 4:45 AM, Dave Chinner <david@...morbit.com> wrote:
> [...]
> >
> > Ok, it's not a shrink_slab() problem - it's just being called every
> > ~100us by kswapd. The pattern is:
> >
> > - reclaim 94 (batches of 32,32,30) pages from inactive list
> >   of zone 1, node 0, prio 12
> > - call shrink_slab
> > - scan all caches
> > - all shrinkers return 0 saying nothing to shrink
> > - 40us gap
> > - reclaim 10-30 pages from inactive list of zone 2, node 0, prio 12
> > - call shrink_slab
> > - scan all caches
> > - all shrinkers return 0 saying nothing to shrink
> > - 40us gap
> > - isolate 9 pages from LRU zone ?, node ?, none isolated, none freed
> > - isolate 22 pages from LRU zone ?, node ?, none isolated, none freed
> > - call shrink_slab
> > - scan all caches
> > - all shrinkers return 0 saying nothing to shrink
> > - 40us gap
> >
> > And it just repeats over and over again. After a while, nid=0,zone=1
> > drops out of the traces, so reclaim only comes in batches of 10-30
> > pages from zone 2 between each shrink_slab() call.
> >
> > The trace starts at 111209.881s, with 944776 pages on the LRUs. It
> > finishes at 111216.1 with kswapd going to sleep on node 0 with
> > 930067 pages on the LRU. So 7 seconds to free 15,000 pages (call it
> > 2,000 pages/s) which is awfully slow....
> >
> Hi all,
>
> Hopefully the added debug info is helpful.
>
> Hillf
> ---
>
> --- a/mm/memcontrol.c Fri Dec 9 21:57:40 2011
> +++ b/mm/memcontrol.c Sun Dec 25 17:08:14 2011
> @@ -1038,7 +1038,11 @@ void mem_cgroup_lru_del_list(struct page
> memcg = root_mem_cgroup;
> mz = page_cgroup_zoneinfo(memcg, page);
> /* huge page split is done under lru_lock. so, we have no races. */
> - MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
> + if (WARN_ON_ONCE(MEM_CGROUP_ZSTAT(mz, lru) <
> + (1 << compound_order(page))))
> + MEM_CGROUP_ZSTAT(mz, lru) = 0;
> + else
> + MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
> }
>
> void mem_cgroup_lru_del(struct page *page)
Hello,
Hmm, is this patch against 3.2-rc4? I cannot apply it: there is no
mem_cgroup_lru_del_list(), only mem_cgroup_del_lru_list(). Should I apply
the changes there instead?
Also, -rc7 is out now. Might the problem already be addressed as part of
some ongoing work? Is there any reason to try -rc7 (the problem requires
several days of uptime to become obvious)?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/