Message-ID: <5408CB2E.3080101@sr71.net>
Date: Thu, 04 Sep 2014 13:27:26 -0700
From: Dave Hansen <dave@...1.net>
To: Michal Hocko <mhocko@...e.cz>
CC: Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>,
Dave Hansen <dave.hansen@...el.com>, Tejun Heo <tj@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Vladimir Davydov <vdavydov@...allels.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: regression caused by cgroups optimization in 3.17-rc2
On 09/04/2014 07:27 AM, Michal Hocko wrote:
> Ouch. free_pages_and_swap_cache completely kills the uncharge batching
> because it reduces everything to PAGEVEC_SIZE batches.
>
> I think we really do not need the PAGEVEC_SIZE batching anymore. We are
> already batching at the tlb_gather layer. That one is limited, so I think
> the change below should be safe, but I have to think about this some more.
> There is a risk of prolonged lru_lock wait times, but the number of pages
> is limited to 10k and the heavy work is done outside of the lock. If this
> really is a problem, then we can split the LRU part and the actual
> freeing/uncharging into separate functions in this path.
>
> Could you test with this half-baked patch, please? I didn't get to test
> it myself, unfortunately.
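
For anyone following along: the batching Michal is talking about is the
chunked loop in free_pages_and_swap_cache() in mm/swap_state.c, which as
of 3.17-rc looks roughly like this (an abridged sketch from memory, not
his actual patch, which isn't quoted above):

	void free_pages_and_swap_cache(struct page **pages, int nr)
	{
		struct page **pagep = pages;

		lru_add_drain();
		/*
		 * Work through the array in PAGEVEC_SIZE chunks, so
		 * each release_pages() call -- and with it each memcg
		 * uncharge batch -- sees at most PAGEVEC_SIZE pages.
		 */
		while (nr) {
			int todo = min(nr, PAGEVEC_SIZE);
			int i;

			for (i = 0; i < todo; i++)
				free_swap_cache(pagep[i]);
			release_pages(pagep, todo, false);
			pagep += todo;
			nr -= todo;
		}
	}

The direction of the fix is to drop the chunking and hand the whole
tlb_gather batch to release_pages() in one call, something like:

	lru_add_drain();
	for (i = 0; i < nr; i++)
		free_swap_cache(pages[i]);
	release_pages(pages, nr, false);
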
3.16 settled out at about 11.5M faults/sec before the regression. This
patch gets it back up to about 10.5M, which is good. The top spinlock
contention in the kernel is still from the resource counter code via
mem_cgroup_commit_charge(), though.
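
That contention isn't surprising: as far as I can tell, every charge
walks up the memcg hierarchy and takes a per-counter spinlock at each
level, so all CPUs faulting into the same cgroup serialize on the same
cache lines. A simplified sketch of the 3.17-era kernel/res_counter.c
charge path (abridged from memory, error unwind and the force case
omitted; illustrative rather than verbatim):

	int res_counter_charge(struct res_counter *counter, unsigned long val,
			       struct res_counter **limit_fail_at)
	{
		struct res_counter *c;
		unsigned long flags;
		int ret = 0;

		*limit_fail_at = NULL;
		local_irq_save(flags);
		/*
		 * Walk up the hierarchy; every level takes its own
		 * spinlock for every charge, which is where the
		 * cross-CPU lock bouncing comes from.
		 */
		for (c = counter; c != NULL; c = c->parent) {
			spin_lock(&c->lock);
			ret = res_counter_charge_locked(c, val, false);
			spin_unlock(&c->lock);
			if (ret < 0) {
				*limit_fail_at = c;
				break;
			}
		}
		local_irq_restore(flags);
		return ret;
	}
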
I'm running Johannes' patch now.