Message-ID: <520C3DD2.8010905@huawei.com>
Date: Thu, 15 Aug 2013 10:32:50 +0800
From: Xishi Qiu <qiuxishi@...wei.com>
To: Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>, <riel@...hat.com>,
<aquini@...hat.com>, <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Xishi Qiu <qiuxishi@...wei.com>
Subject: Re: [PATCH] mm: skip the page buddy block instead of one page
On 2013/8/15 2:00, Mel Gorman wrote:
>>> Even if the page is still page buddy, there is no guarantee that it's
>>> the same page order as the first read. It could currently be
>>> merging with adjacent buddies, for example. There is also a really
>>> small race that a page was freed, allocated with some number stuffed
>>> into page->private and freed again before the second PageBuddy check.
>>> It's a bit of a hand grenade. How much of a performance benefit is there
>>
>> 1. The worst case is just skipping pageblock_nr_pages
>
> No, the worst case is that page_order returns a number that is
> completely garbage and low_pfn goes off the end of the zone
>
>> 2. The race window is really small
>> 3. Higher-order allocation users always have a graceful fallback.
>>
Hi Minchan,
I think in this case we may read a wrong value from page_order(page):
1. The page is in the buddy system:
> if (PageBuddy(page)) {
2. Someone allocates the page and sets page->private to another value:
> int nr_pages = (1 << page_order(page)) - 1;
3. Someone frees the page:
> if (PageBuddy(page)) {
4. So we skip the wrong number of pages:
> nr_pages = min(nr_pages, MAX_ORDER_NR_PAGES - 1);
> low_pfn += nr_pages;
> continue;
> }
> }
>
> It's still race-prone meaning that it really should be backed by some
> performance data justifying it.
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/