Message-ID: <20100906133914.GL8384@csn.ul.ie>
Date:	Mon, 6 Sep 2010 14:39:15 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Michal Hocko <mhocko@...e.cz>, fengguang.wu@...el.com,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	andi.kleen@...el.com, Dave Hansen <dave@...ux.vnet.ibm.com>,
	stable@...nel.org
Subject: Re: [BUGFIX][PATCH 1/3] memory hotplug: fix next block calculation
	in is_removable

On Mon, Sep 06, 2010 at 02:42:28PM +0900, KAMEZAWA Hiroyuki wrote:
> 
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> 
> next_active_pageblock() is for finding the next _used_ pageblock. It skips
> several blocks when it finds a chunk of free pages larger than a
> pageblock. But it has 2 bugs.
> 
>   1. We have no lock, so page_order(page) - pageblock_order can be negative.
>   2. The pageblocks_stride += calculation is wrong; it should skip
>      (1 << page_order(page)) pages.
> 
> CC: stable@...nel.org
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> ---
>  mm/memory_hotplug.c |   16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> Index: kametest/mm/memory_hotplug.c
> ===================================================================
> --- kametest.orig/mm/memory_hotplug.c
> +++ kametest/mm/memory_hotplug.c
> @@ -584,19 +584,19 @@ static inline int pageblock_free(struct 
>  /* Return the start of the next active pageblock after a given page */
>  static struct page *next_active_pageblock(struct page *page)
>  {
> -	int pageblocks_stride;
> -
>  	/* Ensure the starting page is pageblock-aligned */
>  	BUG_ON(page_to_pfn(page) & (pageblock_nr_pages - 1));
>  
> -	/* Move forward by at least 1 * pageblock_nr_pages */
> -	pageblocks_stride = 1;
> -
>  	/* If the entire pageblock is free, move to the end of free page */
> -	if (pageblock_free(page))
> -		pageblocks_stride += page_order(page) - pageblock_order;
> +	if (pageblock_free(page)) {
> +		int order;
> +		/* be careful. we don't have locks, page_order can be changed.*/
> +		order = page_order(page);
> +		if (order > pageblock_order)
> +			return page + (1 << order);
> +	}

As you note in your changelog, page_order() is unsafe because we do not have
the zone lock, but you don't check whether order is somewhere between
pageblock_order and MAX_ORDER_NR_PAGES. How is this safer?
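
Just to illustrate the kind of range check I mean, a rough sketch (this is
not the posted patch, and the use of MAX_ORDER as the upper bound is my
assumption of what such a check could look like) would be something like:

	/* If the entire pageblock is free, move to the end of the free area */
	if (pageblock_free(page)) {
		int order;
		/*
		 * No zone lock is held, so page_order() may change or be
		 * stale under us; only trust a value within the valid
		 * buddy range before skipping ahead.
		 */
		order = page_order(page);
		if (order > pageblock_order && order < MAX_ORDER)
			return page + (1 << order);
	}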

>  
> -	return page + (pageblocks_stride * pageblock_nr_pages);
> +	return page + pageblock_nr_pages;
>  }
>  
>  /* Checks if this range of memory is likely to be hot-removable. */
> 
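
For what it's worth, the stride arithmetic in the old code really does
under-skip once page_order() exceeds pageblock_order by two or more. With
made-up numbers (pageblock_order of 9, i.e. 512 pages per pageblock, and a
free buddy block of order 11): the old code computes a stride of
1 + (11 - 9) = 3 and advances 3 * 512 = 1536 pages, landing in the middle
of the 1 << 11 = 2048 page free block, whereas returning page + (1 << order)
advances past the whole 2048 pages.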

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
