Message-Id: <20120912122758.ad15e10f.akpm@linux-foundation.org>
Date: Wed, 12 Sep 2012 12:27:58 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Mel Gorman <mel@....ul.ie>, Minchan Kim <minchan@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Michal Hocko <mhocko@...e.cz>,
Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Matthew Wilcox <willy@...ux.intel.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>, linux-mm@...ck.org,
linux-kernel <linux-kernel@...r.kernel.org>,
Alex Shi <alex.shi@...el.com>,
Fengguang Wu <fengguang.wu@...el.com>
Subject: Re: [PATCH 0/3 v2] mm: Batch page reclamation under shrink_page_list
On Mon, 10 Sep 2012 09:19:20 -0700
Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> This is the second version of the patch series. Thanks to Matthew Wilcox
> for many valuable suggestions on improving the patches.
>
> To do page reclamation in the shrink_page_list function, two locks are
> taken on a page-by-page basis: the tree lock protecting the radix tree
> of the page's mapping, and the mapping->i_mmap_mutex protecting the
> mapped pages. I try to batch the operations on pages sharing the same
> lock to reduce lock contention. The first patch batches the operations
> protected by the tree lock, while the second and third patches batch
> the operations protected by the i_mmap_mutex.
>
> I managed to get a 14% throughput improvement with a workload that puts
> heavy pressure on the page cache by reading many large mmaped files
> simultaneously on an 8-socket Westmere server.
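For context, here is a minimal userspace sketch of the locking pattern
being described above: take a shared lock once per batch of items
instead of once per item. The names (struct mapping, struct item,
reclaim_one_locked) are hypothetical stand-ins; the actual patches batch
work on struct page lists inside shrink_page_list, under the mapping's
tree lock and i_mmap_mutex.

/*
 * Illustration only: batch operations on items that share a lock so the
 * lock is taken once per batch rather than once per item.  These names
 * are made up; the real patches do this for struct page lists in
 * shrink_page_list, for work done under the mapping's tree_lock and
 * under i_mmap_mutex.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

struct mapping {
	pthread_mutex_t lock;	/* stands in for tree_lock / i_mmap_mutex */
};

struct item {
	struct mapping *mapping;
	int data;
};

/* Per-item work that must run with the mapping's lock held. */
static void reclaim_one_locked(struct item *it)
{
	it->data = 0;
}

/* Unbatched: one lock/unlock round trip per item. */
static void reclaim_list(struct item *items, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		pthread_mutex_lock(&items[i].mapping->lock);
		reclaim_one_locked(&items[i]);
		pthread_mutex_unlock(&items[i].mapping->lock);
	}
}

/* Batched: hold the lock while consecutive items share a mapping. */
static void reclaim_list_batched(struct item *items, size_t n)
{
	size_t i = 0;

	while (i < n) {
		struct mapping *m = items[i].mapping;

		pthread_mutex_lock(&m->lock);
		do {
			reclaim_one_locked(&items[i]);
			i++;
		} while (i < n && items[i].mapping == m);
		pthread_mutex_unlock(&m->lock);
	}
}

int main(void)
{
	struct mapping m = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct item items[4] = {
		{ &m, 1 }, { &m, 2 }, { &m, 3 }, { &m, 4 },
	};

	reclaim_list(items, 4);		/* four lock round trips */
	reclaim_list_batched(items, 4);	/* one round trip */
	printf("done\n");
	return 0;
}

Whether the batched variant wins in practice depends on how often
consecutive pages on the list actually share a mapping.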
That sounds good, although more details on the performance changes
would be appreciated - after all, that's the entire point of the
patchset.
And we shouldn't only test for improvements - we should also test for
degradation. What workloads might be harmed by this change? I'd suggest
- a single process which opens N files and reads one page from each
one, then repeats. So there are no contiguous LRU pages which share
the same ->mapping. Get some page reclaim happening, measure the
impact. (A rough sketch of such a workload follows after this list.)
- The batching means that we now do multiple passes over pageframes
where we used to do things in a single pass. Walking all those new
page lists will be expensive if they are lengthy enough to cause L1
cache evictions.
What would be a test for this? A simple, single-threaded walk
through a file, I guess?
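A rough sketch of the first suggested workload, i.e. one process cycling
over N files and reading a single page from each per pass. NFILES,
ITERATIONS and the "testfile-%d" names are placeholders, and the file
set would have to be large relative to RAM so reclaim actually runs:

/*
 * Rough sketch of the "no shared ->mapping" workload: one process opens
 * N files and reads one page from each per pass, so adjacent pages on
 * the LRU come from different mappings.  The files are assumed to be
 * pre-created and, in total, big enough to overflow RAM and keep
 * reclaim busy.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define NFILES		1024		/* placeholder */
#define ITERATIONS	100000L		/* placeholder */
#define PAGE_SZ		4096

int main(void)
{
	static int fds[NFILES];
	char buf[PAGE_SZ];
	char name[64];

	for (int i = 0; i < NFILES; i++) {
		snprintf(name, sizeof(name), "testfile-%d", i);
		fds[i] = open(name, O_RDONLY);
		if (fds[i] < 0) {
			perror(name);
			return 1;
		}
	}

	for (long iter = 0; iter < ITERATIONS; iter++) {
		for (int i = 0; i < NFILES; i++) {
			/* One page from each file; the offset advances
			 * each pass so new page cache keeps being
			 * pulled in and reclaim has work to do. */
			off_t off = (off_t)(iter % 1024) * PAGE_SZ;
			if (pread(fds[i], buf, sizeof(buf), off) < 0)
				perror("pread");
		}
	}

	for (int i = 0; i < NFILES; i++)
		close(fds[i]);
	return 0;
}

The second test is essentially the same loop with NFILES set to 1 and a
sequentially advancing offset, so consecutive LRU pages do share a
mapping and the batched lists get long.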
Mel's review comments were useful, thanks.