Message-Id: <20090423160610.a093ddf0.akpm@linux-foundation.org>
Date: Thu, 23 Apr 2009 16:06:10 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mel Gorman <mel@....ul.ie>
Cc: mel@....ul.ie, linux-mm@...ck.org, kosaki.motohiro@...fujitsu.com,
cl@...ux-foundation.org, npiggin@...e.de,
linux-kernel@...r.kernel.org, ming.m.lin@...el.com,
yanmin_zhang@...ux.intel.com, peterz@...radead.org,
penberg@...helsinki.fi
Subject: Re: [PATCH 19/22] Update NR_FREE_PAGES only as necessary
On Wed, 22 Apr 2009 14:53:24 +0100
Mel Gorman <mel@....ul.ie> wrote:
> When pages are being freed to the buddy allocator, the zone
> NR_FREE_PAGES counter must be updated. In the case of bulk per-cpu page
> freeing, it's updated once per page. This retouches cache lines more
> than necessary. Update the counters once per per-cpu bulk free.
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -460,7 +460,6 @@ static inline void __free_one_page(struct page *page,
> int migratetype)
> {
> unsigned long page_idx;
> - int order_size = 1 << order;
>
> if (unlikely(PageCompound(page)))
> if (unlikely(destroy_compound_page(page, order)))
> @@ -470,10 +469,9 @@ static inline void __free_one_page(struct page *page,
>
> page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
>
> - VM_BUG_ON(page_idx & (order_size - 1));
> + VM_BUG_ON(page_idx & ((1 << order) - 1));
> VM_BUG_ON(bad_range(zone, page));
>
<head spins>
Is this all a slow and obscure way of doing
VM_BUG_ON(order > MAX_ORDER);
?
If not, what _is_ it asserting?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/