Open Source and information security mailing list archives
 
Date:   Tue, 16 Jan 2018 16:45:22 -0800
From:   Henry Willard <henry.willard@...cle.com>
To:     Mel Gorman <mgorman@...e.de>
Cc:     akpm@...ux-foundation.org, kstewart@...uxfoundation.org,
        zi.yan@...rutgers.edu, pombredanne@...b.com, aarcange@...hat.com,
        gregkh@...uxfoundation.org, aneesh.kumar@...ux.vnet.ibm.com,
        kirill.shutemov@...ux.intel.com, jglisse@...hat.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: numa: Do not trap faults on shared data section
 pages.



> On Jan 16, 2018, at 1:26 PM, Mel Gorman <mgorman@...e.de> wrote:
> 
> On Tue, Jan 16, 2018 at 11:28:44AM -0800, Henry Willard wrote:
>> Workloads consisting of a large number of processes running the same program
>> with a large shared data section may suffer from excessive numa balancing
>> page migration of the pages in the shared data section. This shows up as
>> high I/O wait time and degraded performance on machines with higher socket
>> or node counts.
>> 
>> This patch skips shared copy-on-write pages in change_pte_range() for the
>> numa balancing case.
>> 
>> Signed-off-by: Henry Willard <henry.willard@...cle.com>
>> Reviewed-by: Håkon Bugge <haakon.bugge@...cle.com>
>> Reviewed-by: Steve Sistare <steven.sistare@...cle.com>
> 
> Merge the leader and this mail together. It would have been nice to see
> data on other realistic workloads as well.
> 
> My main source of discomfort is the fact that this is permanent as two
> processes perfectly isolated but with a suitably shared COW mapping
> will never migrate the data. A potential improvement to get the reported
> bandwidth up in the test program would be to skip the rest of the VMA if
> page_mapcount != 1 in a COW mapping as it would be reasonable to assume
> the remaining pages in the VMA are also affected and the scan is wasteful.
> There are counter-examples to this but I suspect that the full VMA being
> shared is the common case. Whether you do that or not:
> 
> Acked-by: Mel Gorman <mgorman@...e.de>

Thanks. The real customer cases where this was observed involved large (1TB or more) eight-socket machines running very active RDBMS workloads. These customers saw high iowait times and a loss in performance when numa balancing was enabled; with it disabled, there was no reported iowait time. The extent of the performance loss varied with activity and was never quantified. The little test program is a distillation of what was observed. In the real workload, a large part of the VMA is shared, but not all of it, so this seemed the simplest and most reliable patch.

Henry

> 
> -- 
> Mel Gorman
> SUSE Labs
