Date:   Tue, 16 Jan 2018 11:28:44 -0800
From:   Henry Willard <henry.willard@...cle.com>
To:     akpm@...ux-foundation.org
Cc:     mgorman@...e.de, kstewart@...uxfoundation.org,
        zi.yan@...rutgers.edu, pombredanne@...b.com, aarcange@...hat.com,
        gregkh@...uxfoundation.org, aneesh.kumar@...ux.vnet.ibm.com,
        kirill.shutemov@...ux.intel.com, jglisse@...hat.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH] mm: numa: Do not trap faults on shared data section pages.

Workloads consisting of a large number of processes running the same program
with a large shared data section may suffer from excessive NUMA balancing
page migration of the pages in the shared data section. This shows up as
high I/O wait time and degraded performance on machines with higher socket
or node counts.

This patch skips shared copy-on-write pages in change_pte_range() for the
NUMA balancing case.
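
For context, the new test combines two conditions: the VMA must be a private
(copy-on-write) mapping, and the page must currently be mapped by more than
one task. The stand-alone sketch below mirrors that predicate in plain C so
it can be compiled and run outside the kernel; the flag values and the
simplified mapcount argument are illustrative placeholders, not the kernel's
actual vm_flags bits or page_mapcount() implementation.

#include <stdbool.h>
#include <stdio.h>

/* Placeholder flag bits, for illustration only (not the kernel's values). */
#define VM_SHARED   0x1UL
#define VM_MAYWRITE 0x2UL

/*
 * Mirrors is_cow_mapping(): a mapping is treated as copy-on-write when it
 * may be written (VM_MAYWRITE) but is not a shared mapping (VM_SHARED).
 */
static bool is_cow_mapping(unsigned long vm_flags)
{
	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
}

/*
 * Simplified form of the check added to change_pte_range(): skip the page
 * (leave its PTE unchanged, so no NUMA hinting fault is armed) when it
 * belongs to a COW mapping and is still mapped by more than one task.
 */
static bool skip_numa_protnone(unsigned long vm_flags, int mapcount)
{
	return is_cow_mapping(vm_flags) && mapcount != 1;
}

int main(void)
{
	/* Data section still shared (pre-COW) by many forked processes. */
	printf("COW page, mapcount 8: skip=%d\n",
	       skip_numa_protnone(VM_MAYWRITE, 8));
	/* Same page once only a single task maps it. */
	printf("COW page, mapcount 1: skip=%d\n",
	       skip_numa_protnone(VM_MAYWRITE, 1));
	return 0;
}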

Signed-off-by: Henry Willard <henry.willard@...cle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@...cle.com>
Reviewed-by: Steve Sistare <steven.sistare@...cle.com>
---
 mm/mprotect.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index ec39f730a0bf..fbbb3ab70818 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -84,6 +84,11 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				if (!page || PageKsm(page))
 					continue;
 
+				/* Also skip shared copy-on-write pages */
+				if (is_cow_mapping(vma->vm_flags) &&
+				    page_mapcount(page) != 1)
+					continue;
+
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
 					continue;
-- 
1.8.3.1
