Message-ID: <20180116212614.gudglzw7kwzd3get@suse.de>
Date: Tue, 16 Jan 2018 21:26:14 +0000
From: Mel Gorman <mgorman@...e.de>
To: Henry Willard <henry.willard@...cle.com>
Cc: akpm@...ux-foundation.org, kstewart@...uxfoundation.org,
zi.yan@...rutgers.edu, pombredanne@...b.com, aarcange@...hat.com,
gregkh@...uxfoundation.org, aneesh.kumar@...ux.vnet.ibm.com,
kirill.shutemov@...ux.intel.com, jglisse@...hat.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: numa: Do not trap faults on shared data section
pages.
On Tue, Jan 16, 2018 at 11:28:44AM -0800, Henry Willard wrote:
> Workloads consisting of a large number of processes running the same program
> with a large shared data section may suffer from excessive numa balancing
> page migration of the pages in the shared data section. This shows up as
> high I/O wait time and degraded performance on machines with higher socket
> or node counts.
>
> This patch skips shared copy-on-write pages in change_pte_range() for the
> numa balancing case.
>
> Signed-off-by: Henry Willard <henry.willard@...cle.com>
> Reviewed-by: Håkon Bugge <haakon.bugge@...cle.com>
> Reviewed-by: Steve Sistare <steven.sistare@...cle.com>
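
If I'm reading it correctly, the check amounts to something like the
following in the prot_numa block of change_pte_range() (a sketch only,
not the actual diff; I'm assuming the test uses is_cow_mapping() and
page_mapcount()):

        if (prot_numa) {
                struct page *page;

                page = vm_normal_page(vma, addr, oldpte);
                if (!page || PageKsm(page))
                        continue;

                /*
                 * Sketch: also skip shared copy-on-write pages so NUMA
                 * hinting faults are not trapped on them.
                 */
                if (is_cow_mapping(vma->vm_flags) &&
                    page_mapcount(page) != 1)
                        continue;

                /* existing prot_numa handling continues below */
        }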

Merge the leader (the cover letter) and this mail together. It would have
been nice to see data on other realistic workloads as well.

My main source of discomfort is that the effect is permanent: two
processes that are perfectly isolated but happen to share a COW mapping
will never migrate the data. A potential improvement that would get the
reported bandwidth up in the test program is to skip the rest of the VMA
when page_mapcount != 1 in a COW mapping, as it is reasonable to assume
that the remaining pages in the VMA are also affected and the scan is
wasteful.
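
Roughly what I have in mind, building on the hunk above (untested sketch;
the early exit would also need to be propagated up through
change_pmd_range()/change_protection_range() so the scan really skips
ahead to vma->vm_end rather than just to the end of the current PMD):

                /*
                 * Sketch: if one COW page turns out to be shared, assume
                 * the rest of the VMA is shared as well and stop scanning
                 * it instead of testing every PTE.
                 */
                if (is_cow_mapping(vma->vm_flags) &&
                    page_mapcount(page) != 1)
                        break;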

There are counter-examples to this, but I suspect that the full VMA being
shared is the common case. Whether you do that or not:

Acked-by: Mel Gorman <mgorman@...e.de>
--
Mel Gorman
SUSE Labs