Message-Id: <20130531183855.44DDF928@viggo.jf.intel.com>
Date: Fri, 31 May 2013 11:38:55 -0700
From: Dave Hansen <dave@...1.net>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
mgorman@...e.de, tim.c.chen@...ux.intel.com,
Dave Hansen <dave@...1.net>
Subject: [v4][PATCH 0/6] mm: vmscan: Batch page reclamation under shrink_page_list

These are an update of Tim Chen's earlier work:

http://lkml.kernel.org/r/1347293960.9977.70.camel@schen9-DESK

I broke the patches up a bit more, and tried to incorporate
changes based on feedback from Mel and Andrew.

Changes for v4:
 * generated on top of linux-next-20130530, plus Mel's vmscan
   fixes:
   http://lkml.kernel.org/r/1369659778-6772-2-git-send-email-mgorman@suse.de
 * added proper vmscan/swap: prefixes to the subjects

Changes for v3:
 * Add batch draining before congestion_wait()
 * minor merge conflicts with Mel's vmscan work

Changes for v2:
 * use the page_mapping() accessor instead of direct access
   to page->mapping (could cause crashes when running into
   swap cache pages)
 * group the batch function's introduction patch with
   its first use
 * rename a few functions as suggested by Mel
 * Ran some single-threaded tests to look for regressions
   caused by the batching.  If there is overhead, it is only
   in the worst-case scenarios, and then only in hundredths of
   a percent of CPU time.

If you're curious how effective the batching is, I have a quick
and dirty patch to keep some stats:

https://www.sr71.net/~dave/intel/rmb-stats-only.patch
--

To do page reclamation in the shrink_page_list() function, two locks
are taken on a page-by-page basis.  One is the tree_lock protecting
the radix tree of the page mapping, and the other is the
mapping->i_mmap_mutex protecting the mapped pages.  This set deals
only with mapping->tree_lock.
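
To make the idea concrete, here is a rough user-space C sketch of
the batching (not code from the series; struct mapping, struct page,
remove_batch() and BATCH_SIZE are made-up stand-ins for the kernel
structures): pages that share a mapping are queued up and removed
under a single tree_lock acquisition instead of one lock/unlock
round-trip per page.

/*
 * User-space sketch only: models batching removals under one lock
 * instead of locking per page.  All names here are illustrative.
 */
#include <pthread.h>
#include <stdio.h>

#define BATCH_SIZE 16			/* illustrative batch size */

struct mapping {
	pthread_mutex_t tree_lock;	/* stand-in for mapping->tree_lock */
	int nr_pages;
};

struct page {
	struct mapping *mapping;
};

/* Per-page scheme: one lock/unlock round-trip for every page. */
static void remove_page(struct page *page)
{
	pthread_mutex_lock(&page->mapping->tree_lock);
	page->mapping->nr_pages--;
	pthread_mutex_unlock(&page->mapping->tree_lock);
}

/*
 * Batched scheme: the caller only queues pages that share a mapping,
 * so one lock acquisition covers the whole batch.
 */
static void remove_batch(struct page **batch, int nr)
{
	struct mapping *mapping = batch[0]->mapping;
	int i;

	pthread_mutex_lock(&mapping->tree_lock);
	for (i = 0; i < nr; i++)
		mapping->nr_pages--;
	pthread_mutex_unlock(&mapping->tree_lock);
}

int main(void)
{
	struct page pages[2 * BATCH_SIZE];
	struct page *batch[BATCH_SIZE];
	struct mapping m;
	int i, nr = 0;

	pthread_mutex_init(&m.tree_lock, NULL);
	m.nr_pages = 2 * BATCH_SIZE;
	for (i = 0; i < 2 * BATCH_SIZE; i++)
		pages[i].mapping = &m;

	/* First half: the per-page pattern. */
	for (i = 0; i < BATCH_SIZE; i++)
		remove_page(&pages[i]);

	/* Second half: queue pages, drain when the batch fills up. */
	for (; i < 2 * BATCH_SIZE; i++) {
		batch[nr++] = &pages[i];
		if (nr == BATCH_SIZE) {
			remove_batch(batch, nr);
			nr = 0;
		}
	}
	if (nr)
		remove_batch(batch, nr);

	printf("pages left in mapping: %d\n", m.nr_pages);
	return 0;
}

In the series itself the batch is also drained before
congestion_wait(), per the v3 changes above, so pages are not left
sitting in the queue while reclaim sleeps.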

Tim measured a 14% throughput improvement with a workload putting
heavy pressure on the page cache by reading many large mmapped files
simultaneously on an 8-socket Westmere server.

I've been testing these by running large parallel kernel compiles
on systems that are under memory pressure.  During development,
I caught quite a few races on smaller setups, and it has been
quite stable on a large (160 logical CPU / 1TB) system.