Message-ID: <1476540769-31893-1-git-send-email-zhouxianrong@huawei.com>
Date:   Sat, 15 Oct 2016 22:12:48 +0800
From:   <zhouxianrong@...wei.com>
To:     <linux-mm@...ck.org>
CC:     <linux-kernel@...r.kernel.org>, <akpm@...ux-foundation.org>,
        <rientjes@...gle.com>, <hannes@...xchg.org>,
        <chris@...is-wilson.co.uk>, <vdavydov.dev@...il.com>,
        <mgorman@...hsingularity.net>, <joe@...ches.com>,
        <shawn.lin@...k-chips.com>, <iamjoonsoo.kim@....com>,
        <kuleshovmail@...il.com>, <zhouxianrong@...wei.com>,
        <zhouxiyu@...wei.com>, <zhangshiming5@...wei.com>,
        <won.ho.park@...wei.com>, <tuxiaobing@...wei.com>
Subject: [PATCH vmalloc] reduce purge_lock range and hold time of

From: z00281421 <z00281421@...esmail.huawei.com>

I think there is no need to keep the __free_vmap_area() loop inside
purge_lock. __free_vmap_area() does not have to be atomic with respect to
the TLB flush; it only has to run after the flush. Likewise, the whole
__free_vmap_area() loop does not need to be one atomic section. Dropping
the locks this way improves real-time behaviour, because the loop count
can sometimes be large and the loop can take a significant amount of time.
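
To illustrate just the locking pattern (this is not kernel code): below is
a minimal userspace sketch, with hypothetical names and a pthread spinlock
standing in for vmap_area_lock, that walks a long list and drops the lock
every BATCH iterations so other contenders are not blocked for the whole
walk.

/*
 * Hypothetical userspace sketch of lock batching: release and retake the
 * lock periodically to bound the worst-case hold time of one walk.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

static pthread_spinlock_t list_lock;	/* plays the role of vmap_area_lock */

static void free_list_batched(struct node *head)
{
	enum { BATCH = 256 };		/* roughly the 8-bit counter in the patch */
	unsigned int batch = BATCH;

	pthread_spin_lock(&list_lock);
	while (head) {
		struct node *next = head->next;

		/* per-node teardown that must run under the lock */
		free(head);
		head = next;

		if (--batch == 0) {
			/* Drop and immediately retake the lock so waiters
			 * get a chance to run before the walk continues. */
			pthread_spin_unlock(&list_lock);
			batch = BATCH;
			pthread_spin_lock(&list_lock);
		}
	}
	pthread_spin_unlock(&list_lock);
}

int main(void)
{
	struct node *head = NULL;
	int i;

	pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);

	/* build a list long enough that the lock is dropped a few times */
	for (i = 0; i < 1000; i++) {
		struct node *n = malloc(sizeof(*n));
		n->next = head;
		head = n;
	}
	free_list_batched(head);
	pthread_spin_destroy(&list_lock);
	return 0;
}

In kernel context an alternative to an unconditional unlock/relock might be
cond_resched_lock(), which only drops the lock when a reschedule is
actually pending; the sketch above just shows the simple batching variant
used in this patch.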

Signed-off-by: z00281421 <z00281421@...esmail.huawei.com>
---
 mm/vmalloc.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 91f44e7..9d9154d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -661,13 +661,23 @@ static void __purge_vmap_area_lazy(unsigned long *start, unsigned long *end,
 	if (nr || force_flush)
 		flush_tlb_kernel_range(*start, *end);
 
+	spin_unlock(&purge_lock);
+
 	if (nr) {
+		/* The batch should not be too small: when free vmalloc
+		 * space is scarce, freeing has to make progress ahead of
+		 * new allocations. */
+		unsigned char batch = -1;
 		spin_lock(&vmap_area_lock);
-		llist_for_each_entry_safe(va, n_va, valist, purge_list)
+		llist_for_each_entry_safe(va, n_va, valist, purge_list) {
 			__free_vmap_area(va);
+			if (!batch--) {
+				spin_unlock(&vmap_area_lock);
+				spin_lock(&vmap_area_lock);
+			}
+		}
 		spin_unlock(&vmap_area_lock);
 	}
-	spin_unlock(&purge_lock);
 }
 
 /*
-- 
1.7.9.5
