Message-ID: <57AA061B.2050002@intel.com>
Date: Tue, 9 Aug 2016 09:34:35 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: "Huang, Ying" <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: tim.c.chen@...el.com, andi.kleen@...el.com, aaron.lu@...el.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Hugh Dickins <hughd@...gle.com>, Shaohua Li <shli@...nel.org>,
Minchan Kim <minchan@...nel.org>,
Rik van Riel <riel@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Tejun Heo <tj@...nel.org>,
Wu Fengguang <fengguang.wu@...el.com>
Subject: Re: [RFC] mm: Don't use radix tree writeback tags for pages in swap cache

On 08/09/2016 09:17 AM, Huang, Ying wrote:
> File pages use a set of radix tree tags (DIRTY, TOWRITE, WRITEBACK) to
> accelerate finding the pages with a specific tag in the radix tree when
> writing back an inode. But for anonymous pages in swap cache there is
> no inode-based writeback, so there is no need to find pages with
> writeback tags in the radix tree, and no need to touch the radix tree
> writeback tags for pages in swap cache at all.
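
(For context, the lookup those tags accelerate is the tagged walk that
inode writeback does, roughly as in write_cache_pages() -- this is an
illustrative, trimmed-down sketch, not the real code:)

/* Illustrative only; see write_cache_pages() in mm/page-writeback.c. */
#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>

static void writeback_dirty_pages(struct address_space *mapping)
{
	struct pagevec pvec;
	pgoff_t index = 0;
	unsigned i, nr;

	pagevec_init(&pvec, 0);
	for (;;) {
		/*
		 * Tagged gang lookup: jumps straight to the DIRTY pages
		 * instead of scanning every slot in the radix tree.
		 */
		nr = pagevec_lookup_tag(&pvec, mapping, &index,
					PAGECACHE_TAG_DIRTY, PAGEVEC_SIZE);
		if (!nr)
			break;
		for (i = 0; i < nr; i++)
			; /* lock pvec.pages[i] and call ->writepage() on it */
		pagevec_release(&pvec);
	}
}

Swap cache pages never go through a tagged walk like this, which is the
point of the patch.
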
Seems simple enough. Do we do any of this unnecessary work for the
other radix tree tags? If so, maybe we should just fix this once and
for all. Could we, for instance, WARN_ONCE() in radix_tree_tag_set() if
it sees a swap mapping get handed in there?
In any case, I think the new !PageSwapCache(page) check either needs
commenting, or a common helper for the two sites that you can comment.
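
Something like this, perhaps (completely untested sketch, all names made
up; the WARN_ONCE() is shown at the page cache call site rather than
inside radix_tree_tag_set() itself, because the generic function only
sees a bare radix_tree_root and has no idea whether it belongs to a swap
mapping):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/radix-tree.h>

/*
 * Anonymous pages in swap cache have no inode-based writeback, so the
 * DIRTY/TOWRITE/WRITEBACK radix tree tags are never consulted for them;
 * updating the tags only adds contention on the swap cache radix tree
 * lock.
 */
static inline bool page_uses_writeback_tags(struct page *page)
{
	return !PageSwapCache(page);
}

/* A commented helper the tagging call sites could share (set side only). */
static void page_cache_tag_set(struct address_space *mapping,
			       struct page *page, unsigned int tag)
{
	/* Nothing should be tagging swap cache pages any more. */
	if (WARN_ONCE(!page_uses_writeback_tags(page),
		      "radix tree tag %u set on a swap cache page\n", tag))
		return;

	radix_tree_tag_set(&mapping->page_tree, page_index(page), tag);
}
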
> With this patch, swap-out bandwidth improved by 22.3% in the
> vm-scalability swap-w-seq test case with 8 processes on a Xeon E5 v3
> system, because of reduced contention on the swap cache radix tree
> lock. To test sequential swap-out, the test case uses 8 processes that
> sequentially allocate and write to anonymous pages until RAM and part
> of the swap device are used up.
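
So if I'm reading that right, each of the 8 processes is doing roughly
this (illustrative sketch only, not the actual vm-scalability code;
sizes are made up):

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	/* Set per machine: roughly RAM size plus the part of the swap
	 * device the test is meant to fill. */
	const size_t total_bytes = 64UL << 30;	/* e.g. 64GB, made up */
	const size_t chunk = 128UL << 20;	/* 128MB per allocation */
	const size_t page_size = (size_t)sysconf(_SC_PAGESIZE);
	size_t allocated, off;
	char *buf;

	for (allocated = 0; allocated < total_bytes; allocated += chunk) {
		buf = malloc(chunk);
		if (!buf)
			break;
		/* Dirty one byte per page so every page eventually has
		 * to be written out to the swap device. */
		for (off = 0; off < chunk; off += page_size)
			buf[off] = 1;
	}

	pause();	/* keep the memory resident */
	return 0;
}
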
What was the swap device here, btw? What is the actual bandwidth
increase you are seeing? Is it 1MB/s -> 1.223MB/s? :)