Message-ID: <20170717070024.GC7397@dhcp22.suse.cz>
Date:   Mon, 17 Jul 2017 09:00:25 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Zhaoyang Huang <huangzhaoyang@...il.com>
Cc:     zhaoyang.huang@...eadtrum.com,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ingo Molnar <mingo@...nel.org>, zijun_hu <zijun_hu@....com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Thomas Garnier <thgarnie@...gle.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/vmalloc: terminate searching since one node found

On Sun 16-07-17 15:28:27, Zhaoyang Huang wrote:
> There is no need to find the very beginning of the area within
> alloc_vmap_area; this can be decided by checking each node during the
> traversal.

Please describe _why_ the patch is needed. I suspect this is an
optimization, but for which workloads does it matter, and by how much?

> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...eadtrum.com>
> Signed-off-by: Zhaoyang Huang <huangzhaoyang@...il.com>

No need to add your s-o-b twice. Just use the same one as the From
(the author of the patch).
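
For example (purely illustrative, reusing the addresses already shown in
the headers above, kept obfuscated as in this archive), a patch authored
and sent as

  From: Zhaoyang Huang <huangzhaoyang@...il.com>

needs only the single matching trailer

  Signed-off-by: Zhaoyang Huang <huangzhaoyang@...il.com>

in its changelog.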

> ---
>  mm/vmalloc.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 34a1c3e..f833e07 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -459,9 +459,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  
>  		while (n) {
>  			struct vmap_area *tmp;
> +			struct vmap_area *tmp_next;
>  			tmp = rb_entry(n, struct vmap_area, rb_node);
> +			tmp_next = list_next_entry(tmp, list);
>  			if (tmp->va_end >= addr) {
>  				first = tmp;
> +				if (ALIGN(tmp->va_end, align) + size
> +						< tmp_next->va_start) {
> +					addr = ALIGN(tmp->va_end, align);
> +					goto found;
> +				}
>  				if (tmp->va_start <= addr)
>  					break;
>  				n = n->rb_left;
> -- 
> 1.9.1
> 
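For reference, here is a minimal userspace sketch (not kernel code; the
range layout, names and numbers are made up for illustration) of the
early-exit idea the hunk above implements: while scanning address-sorted
busy ranges, stop as soon as the hole after the current range is large
enough for the aligned request, instead of continuing the search for the
first candidate.

/*
 * Userspace model of the early-exit check: scan the holes between
 * consecutive busy ranges and return the first one that can hold an
 * aligned allocation of the requested size.
 */
#include <stdio.h>
#include <stdint.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

struct range { uint64_t start, end; };	/* [start, end) is busy */

/* Busy ranges, sorted by address (stand-in for the vmap_area list). */
static const struct range busy[] = {
	{ 0x1000, 0x2000 },
	{ 0x3000, 0x8000 },
	{ 0x9000, 0xa000 },
};

/*
 * Return the start of the first hole at or above 'from' that can hold
 * 'size' bytes aligned to 'align', or 0 if no such hole exists.
 */
static uint64_t find_hole(uint64_t from, uint64_t size, uint64_t align)
{
	size_t i, n = sizeof(busy) / sizeof(busy[0]);

	for (i = 0; i + 1 < n; i++) {
		uint64_t candidate = ALIGN_UP(busy[i].end, align);

		if (candidate < from)
			candidate = ALIGN_UP(from, align);
		/* Early exit as soon as a large-enough hole is found. */
		if (candidate + size <= busy[i + 1].start)
			return candidate;
	}
	return 0;
}

int main(void)
{
	uint64_t addr = find_hole(0x1000, 0x800, 0x100);

	printf("hole at 0x%llx\n", (unsigned long long)addr);	/* 0x2000 */
	return 0;
}

The quoted hunk performs the analogous check with
ALIGN(tmp->va_end, align) + size against tmp_next->va_start while walking
the rbtree.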

-- 
Michal Hocko
SUSE Labs
