Message-Id: <1500542656-23332-1-git-send-email-zhaoyang.huang@spreadtrum.com>
Date:   Thu, 20 Jul 2017 17:24:16 +0800
From:   Zhaoyang Huang <huangzhaoyang@...il.com>
To:     zhaoyang.huang@...eadtrum.com,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.com>, Ingo Molnar <mingo@...nel.org>,
        zijun_hu <zijun_hu@....com>, Vlastimil Babka <vbabka@...e.cz>,
        Thomas Garnier <thgarnie@...gle.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, zijun_hu@...o.com
Subject: [PATCH v4] mm/vmalloc: terminate the search once a suitable node is found

There is no need to find the very beginning of the free area within
alloc_vmap_area; a suitable hole can be detected by checking each node
during the rbtree walk.

free_vmap_cache miss:
      vmap_area_root
          /      \
     tmp_next     U
        /  (T1)
      tmp
       /
     ...   (T2)
      /
    first

vmap_area_list->first->......->tmp->tmp_next->...->vmap_area_list
                  |-----(T3)----|

On a free_vmap_cache miss, the total time spent finding a suitable hole is
T = T1 + T2 + T3; this commit reduces it to T1.

In fact, 'vmalloc' always starts searching from a fixed address
(VMALLOC_START), which causes 'first' to end up close to the beginning of
the list (vmap_area_list) and makes T3 large.

This commit especially helps with a large and almost full vmalloc area.
It does NOT affect the existing fast path (free_vmap_cache): the new check
only takes effect on a free_vmap_cache miss, and the cache is reestablished
afterwards.
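
To make the new check concrete, here is a minimal userspace sketch of the
hole test (the 'struct area' type and the hole_fits() helper are simplified
stand-ins for illustration, not the kernel's struct vmap_area API):

/* round 'x' up to the next multiple of 'a'; 'a' must be a power of two */
#include <stdbool.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

struct area {
	unsigned long va_start;	/* start of this allocated area */
	unsigned long va_end;	/* end of this allocated area */
};

/*
 * Return true when the gap between 'tmp' and its successor on the
 * address-ordered list can hold 'size' bytes at 'align' alignment,
 * so the rbtree walk may stop at 'tmp' instead of descending to 'first'.
 */
static bool hole_fits(const struct area *tmp, const struct area *tmp_next,
		      unsigned long size, unsigned long align)
{
	return ALIGN(tmp->va_end, align) + size < tmp_next->va_start;
}

With this test, a hit anywhere along path T1 in the diagram above ends the
search immediately.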

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...eadtrum.com>
---
 mm/vmalloc.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8698c1c..f58f445 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -471,9 +471,20 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 		while (n) {
 			struct vmap_area *tmp;
+			struct vmap_area *tmp_next;
 			tmp = rb_entry(n, struct vmap_area, rb_node);
+			tmp_next = list_next_entry(tmp, list);
 			if (tmp->va_end >= addr) {
 				first = tmp;
+				if (ALIGN(tmp->va_end, align) + size
+						< tmp_next->va_start) {
+					/*
+					 * free_vmap_cache missed; don't
+					 * update cached_hole_size here,
+					 * as __free_vmap_area does that
+					 */
+					goto found;
+				}
 				if (tmp->va_start <= addr)
 					break;
 				n = n->rb_left;
-- 
1.9.1
