Date: Thu, 08 Sep 2011 11:05:52 -0700
From: Dave Hansen <dave@...ux.vnet.ibm.com>
To: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-media@...r.kernel.org, linux-mm@...ck.org,
linaro-mm-sig@...ts.linaro.org,
Michal Nazarewicz <mina86@...a86.com>,
Kyungmin Park <kyungmin.park@...sung.com>,
Russell King <linux@....linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Ankita Garg <ankita@...ibm.com>,
Daniel Walker <dwalker@...eaurora.org>,
Mel Gorman <mel@....ul.ie>, Arnd Bergmann <arnd@...db.de>,
Jesse Barker <jesse.barker@...aro.org>,
Jonathan Corbet <corbet@....net>,
Shariq Hasnain <shariq.hasnain@...aro.org>,
Chunsang Jeong <chunsang.jeong@...aro.org>
Subject: Re: [PATCH 2/8] mm: alloc_contig_freed_pages() added
On Fri, 2011-08-19 at 16:27 +0200, Marek Szyprowski wrote:
> +unsigned long alloc_contig_freed_pages(unsigned long start, unsigned long end,
> +					gfp_t flag)
> +{
> +	unsigned long pfn = start, count;
> +	struct page *page;
> +	struct zone *zone;
> +	int order;
> +
> +	VM_BUG_ON(!pfn_valid(start));
> +	zone = page_zone(pfn_to_page(start));
This implies that the start->end range is entirely contained within a
single zone.  What enforces that?  If some higher layer enforces it, I
think we probably need at least a VM_BUG_ON() in here and a comment
about who does the enforcing.
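Something along these lines (untested, and it only checks the
endpoints, but it would at least document the assumption):

	VM_BUG_ON(!pfn_valid(end - 1));
	VM_BUG_ON(page_zone(pfn_to_page(end - 1)) != zone);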
> +	spin_lock_irq(&zone->lock);
> +
> +	page = pfn_to_page(pfn);
> +	for (;;) {
> +		VM_BUG_ON(page_count(page) || !PageBuddy(page));
> +		list_del(&page->lru);
> +		order = page_order(page);
> +		zone->free_area[order].nr_free--;
> +		rmv_page_order(page);
> +		__mod_zone_page_state(zone, NR_FREE_PAGES, -(1UL << order));
> +		pfn += 1 << order;
> +		if (pfn >= end)
> +			break;
> +		VM_BUG_ON(!pfn_valid(pfn));
> +		page += 1 << order;
> +	}
This 'struct page *'++ arithmetic is OK, but only for small, aligned
areas.  For at least some of the sparsemem modes (non-VMEMMAP), you
could walk off the end of the section_mem_map[] when you cross a
MAX_ORDER boundary.  I'd feel a bit more comfortable if pfn_to_page()
were done each time, or at least whenever you cross a section
boundary.
This may not apply to what ARM is doing today, but it shouldn't be too
difficult to fix up, or to document what's going on.
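For instance (untested), doing a fresh lookup each time the pfn
advances avoids the pointer arithmetic entirely:

		pfn += 1 << order;
		if (pfn >= end)
			break;
		VM_BUG_ON(!pfn_valid(pfn));
		page = pfn_to_page(pfn);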
> +	spin_unlock_irq(&zone->lock);
> +
> +	/* After this, pages in the range can be freed one by one */
> +	page = pfn_to_page(start);
> +	for (count = pfn - start; count; --count, ++page)
> +		prep_new_page(page, 0, flag);
> +
> +	return pfn;
> +}
> +
> +void free_contig_pages(struct page *page, int nr_pages)
> +{
> +	for (; nr_pages; --nr_pages, ++page)
> +		__free_page(page);
> +}
The same concern about 'struct page' pointer math applies here.
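Walking by pfn instead would look something like this (untested
sketch; the changed signature is just for illustration):

void free_contig_pages(unsigned long pfn, unsigned long nr_pages)
{
	while (nr_pages--)
		__free_page(pfn_to_page(pfn++));
}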
-- Dave