Message-ID: <20100907093044.GR8384@csn.ul.ie>
Date: Tue, 7 Sep 2010 10:30:44 +0100
From: Mel Gorman <mel@....ul.ie>
To: Hiroyuki Kamezawa <kamezawa.hiroyuki@...il.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.cz>, fengguang.wu@...el.com,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
andi.kleen@...el.com, Dave Hansen <dave@...ux.vnet.ibm.com>,
stable@...nel.org
Subject: Re: [BUGFIX][PATCH 1/3] memory hotplug: fix next block calculation
in is_removable
On Tue, Sep 07, 2010 at 02:15:01AM +0900, Hiroyuki Kamezawa wrote:
> 2010/9/6 Mel Gorman <mel@....ul.ie>:
> > On Mon, Sep 06, 2010 at 02:42:28PM +0900, KAMEZAWA Hiroyuki wrote:
> >>
> >> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> >>
> >> next_active_pageblock() is for finding the next _used_ pageblock. It skips
> >> several blocks when it finds a chunk of free pages larger than a
> >> pageblock. But it has 2 bugs.
> >>
> >> 1. We have no lock, so page_order(page) - pageblock_order can be negative.
> >> 2. The pageblocks_stride += is wrong; it should skip 1 << page_order(page) pages.
> >>
> >> CC: stable@...nel.org
> >> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> >> ---
> >> mm/memory_hotplug.c | 16 ++++++++--------
> >> 1 file changed, 8 insertions(+), 8 deletions(-)
> >>
> >> Index: kametest/mm/memory_hotplug.c
> >> ===================================================================
> >> --- kametest.orig/mm/memory_hotplug.c
> >> +++ kametest/mm/memory_hotplug.c
> >> @@ -584,19 +584,19 @@ static inline int pageblock_free(struct
> >> /* Return the start of the next active pageblock after a given page */
> >> static struct page *next_active_pageblock(struct page *page)
> >> {
> >> - int pageblocks_stride;
> >> -
> >> /* Ensure the starting page is pageblock-aligned */
> >> BUG_ON(page_to_pfn(page) & (pageblock_nr_pages - 1));
> >>
> >> - /* Move forward by at least 1 * pageblock_nr_pages */
> >> - pageblocks_stride = 1;
> >> -
> >> /* If the entire pageblock is free, move to the end of free page */
> >> - if (pageblock_free(page))
> >> - pageblocks_stride += page_order(page) - pageblock_order;
> >> + if (pageblock_free(page)) {
> >> + int order;
> >> + /* be careful. we don't have locks, page_order can be changed.*/
> >> + order = page_order(page);
> >> + if (order > pageblock_order)
> >> + return page + (1 << order);
> >> + }
> >
> > As you note in your changelog, page_order() is unsafe because we do not have
> > the zone lock but you don't check if order is somewhere between pageblock_order
> > and MAX_ORDER_NR_PAGES. How is this safer?
> >
> Ah, I missed that.
>
> if ((pageblock_order <= order) && (order < MAX_ORDER))
> return page + (1 << order);
> ok ?
>
Seems ok. There will still be the occasional bogus use of order, but it
should be harmless.
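
For completeness, a sketch of what next_active_pageblock() would look like
with that range check folded in (assuming the fallthrough simply advances by
pageblock_nr_pages, which the quoted hunk does not show):

static struct page *next_active_pageblock(struct page *page)
{
	/* Ensure the starting page is pageblock-aligned */
	BUG_ON(page_to_pfn(page) & (pageblock_nr_pages - 1));

	/* If the entire pageblock is free, skip the whole free chunk */
	if (pageblock_free(page)) {
		int order;
		/*
		 * page_order() is read without zone->lock, so the value
		 * may be stale or garbage; only trust it when it falls
		 * in the valid range.
		 */
		order = page_order(page);
		if (order >= pageblock_order && order < MAX_ORDER)
			return page + (1 << order);
	}

	/* Otherwise advance by exactly one pageblock */
	return page + pageblock_nr_pages;
}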
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab