Message-ID: <1345251990.13492.233.camel@schen9-DESK>
Date: Fri, 17 Aug 2012 18:06:30 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Mel Gorman <mel@....ul.ie>,
Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Matthew Wilcox <willy@...ux.intel.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>, linux-mm@...ck.org,
linux-kernel <linux-kernel@...r.kernel.org>,
Alex Shi <alex.shi@...el.com>
Subject: [RFC PATCH 0/2] mm: Batch page reclamation under shrink_page_list

To do page reclamation in the shrink_page_list function, two locks are
taken on a page-by-page basis.  One is the tree lock protecting the
radix tree of the page mapping, and the other is the
mapping->i_mmap_mutex protecting the reverse mapping of file mapped
pages.  I tried to batch the operations on pages sharing the same lock
to reduce lock contention.  The first patch batches the operations
under the tree lock, while the second batches the checking of file
page references under the i_mmap_mutex.
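
Roughly, the tree lock batching amounts to taking mapping->tree_lock
once for a group of pages that share the same mapping instead of once
per page.  A simplified sketch of that idea (the helper name below is
made up for illustration, and refcount freezing and swap-backed pages
are ignored; the real patch has to handle both):

#include <linux/pagemap.h>      /* __delete_from_page_cache, tree_lock */
#include <linux/list.h>

/*
 * Sketch only: drop a batch of pages that belong to the same mapping
 * from the page cache under a single hold of mapping->tree_lock.
 */
static void remove_mapping_batch(struct address_space *mapping,
                                 struct list_head *pages)
{
        struct page *page, *next;

        spin_lock_irq(&mapping->tree_lock);             /* one acquisition */
        list_for_each_entry_safe(page, next, pages, lru)
                __delete_from_page_cache(page);         /* no per-page lock */
        spin_unlock_irq(&mapping->tree_lock);
}
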
I measured a 14% throughput improvement with a workload that puts
heavy pressure on the page cache by reading many large mmapped files
simultaneously on an 8-socket Westmere server.

There are some ugly hacks in the patches to pass information about
whether the i_mmap_mutex is locked. Any suggestions on a better
approach and reviews of the patches are appreciated.
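
To give a flavor of what passing that information could look like (the
name and signature below are illustrative only, not what the patches
actually do), the file rmap walk is told whether the caller already
holds the mutex:

#include <linux/rmap.h>
#include <linux/fs.h>           /* struct address_space, i_mmap_mutex */

/*
 * Sketch only: a flag tells the file rmap walk that the caller
 * already holds mapping->i_mmap_mutex, so it must not be taken
 * (or released) again here.
 */
static int page_referenced_file_sketch(struct page *page,
                                       struct mem_cgroup *memcg,
                                       unsigned long *vm_flags,
                                       bool mutex_held)
{
        struct address_space *mapping = page->mapping;
        int referenced = 0;

        if (!mutex_held)
                mutex_lock(&mapping->i_mmap_mutex);

        /* ... walk the vmas hanging off mapping->i_mmap as before ... */

        if (!mutex_held)
                mutex_unlock(&mapping->i_mmap_mutex);
        return referenced;
}
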
Tim
Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---
Diffstat
 include/linux/rmap.h |   6 +-
 mm/memory-failure.c  |   2 +-
 mm/migrate.c         |   4 +-
 mm/rmap.c            |  28 ++++++----
 mm/vmscan.c          | 139 +++++++++++++++++++++++++++++++++++++++++++++-----
 5 files changed, 147 insertions(+), 32 deletions(-)