Date:	Tue, 18 Mar 2008 15:42:09 +0000
From:	Mel Gorman <mel@....ul.ie>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	linux-kernel@...r.kernel.org, pj@....com, linux-mm@...ck.org,
	nickpiggin@...oo.com.au
Subject: Re: [PATCH] [7/18] Abstract out the NUMA node round robin code into a separate function

On (17/03/08 02:58), Andi Kleen didst pronounce:
> Need this as a separate function for a future patch.
> 
> No behaviour change.
> 
> Signed-off-by: Andi Kleen <ak@...e.de>

Maybe if you moved this beside patch 1, the two could be reviewed and
tested in isolation as a fairly reasonable cleanup that does not alter
functionality? Not a big deal.

> 
> ---
>  mm/hugetlb.c |   37 ++++++++++++++++++++++---------------
>  1 file changed, 22 insertions(+), 15 deletions(-)
> 
> Index: linux/mm/hugetlb.c
> ===================================================================
> --- linux.orig/mm/hugetlb.c
> +++ linux/mm/hugetlb.c
> @@ -219,6 +219,27 @@ static struct page *alloc_fresh_huge_pag
>  	return page;
>  }
>  
> +/*
> + * Use a helper variable to find the next node and then
> + * copy it back to hugetlb_next_nid afterwards:
> + * otherwise there's a window in which a racer might
> + * pass invalid nid MAX_NUMNODES to alloc_pages_node.
> + * But we don't need to use a spin_lock here: it really
> + * doesn't matter if occasionally a racer chooses the
> + * same nid as we do.  Move nid forward in the mask even
> + * if we just successfully allocated a hugepage so that
> + * the next caller gets hugepages on the next node.
> + */
> +static int huge_next_node(struct hstate *h)
> +{
> +	int next_nid;
> +	next_nid = next_node(h->hugetlb_next_nid, node_online_map);
> +	if (next_nid == MAX_NUMNODES)
> +		next_nid = first_node(node_online_map);
> +	h->hugetlb_next_nid = next_nid;
> +	return next_nid;
> +}
> +
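
(As an aside, the window the comment describes is easier to see if you
imagine updating the shared field in place -- illustrative only, this
is not what the patch does:

	/* BAD: a racer reading h->hugetlb_next_nid between these two
	 * statements can observe MAX_NUMNODES and pass it straight to
	 * alloc_pages_node() */
	h->hugetlb_next_nid = next_node(h->hugetlb_next_nid, node_online_map);
	if (h->hugetlb_next_nid == MAX_NUMNODES)
		h->hugetlb_next_nid = first_node(node_online_map);

Going through the next_nid local first means the shared field only ever
holds a valid nid, so no spin_lock is needed for correctness.)
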
>  static int alloc_fresh_huge_page(struct hstate *h)
>  {
>  	struct page *page;
> @@ -232,21 +253,7 @@ static int alloc_fresh_huge_page(struct 
>  		page = alloc_fresh_huge_page_node(h, h->hugetlb_next_nid);
>  		if (page)
>  			ret = 1;
> -		/*
> -		 * Use a helper variable to find the next node and then
> -		 * copy it back to hugetlb_next_nid afterwards:
> -		 * otherwise there's a window in which a racer might
> -		 * pass invalid nid MAX_NUMNODES to alloc_pages_node.
> -		 * But we don't need to use a spin_lock here: it really
> -		 * doesn't matter if occasionally a racer chooses the
> -		 * same nid as we do.  Move nid forward in the mask even
> -		 * if we just successfully allocated a hugepage so that
> -		 * the next caller gets hugepages on the next node.
> -		 */
> -		next_nid = next_node(h->hugetlb_next_nid, node_online_map);
> -		if (next_nid == MAX_NUMNODES)
> -			next_nid = first_node(node_online_map);
> -		h->hugetlb_next_nid = next_nid;
> +		next_nid = huge_next_node(h);

Hmm, I'm not seeing where next_nid gets declared locally here; its
declaration should have been removed by an earlier patch. Maybe it gets
reintroduced later, but if you do reshuffle the patchset so that the
cleanups can be merged on their own, a missing declaration will show up
in a compile test.
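
If it helps, since the loop condition below re-reads h->hugetlb_next_nid
anyway, something like this would sidestep the local altogether (untested
sketch, not a requirement):

	do {
		page = alloc_fresh_huge_page_node(h, h->hugetlb_next_nid);
		if (page)
			ret = 1;
		/* advance the round-robin nid; return value unused here */
		huge_next_node(h);
	} while (!page && h->hugetlb_next_nid != start_nid);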

>  	} while (!page && h->hugetlb_next_nid != start_nid);
>  
>  	return ret;
> 

Other than the possible gotcha with the next_nid local declaration, the
move seems fine.

Acked-by: Mel Gorman <mel@....ul.ie>

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
