Message-Id: <1236357616.3882.66.camel@pc1117.cambridge.arm.com>
Date: Fri, 06 Mar 2009 16:40:16 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Dave Hansen <dave@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
jan sonnek <ha2nny@...il.com>, linux-kernel@...r.kernel.org,
viro@...iv.linux.org.uk, Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andy Whitcroft <apw@...dowen.org>
Subject: Re: Regression - locking (all from 2.6.28)
On Wed, 2009-03-04 at 16:54 -0800, Dave Hansen wrote:
> On Tue, 2009-03-03 at 15:01 +0000, Catalin Marinas wrote:
> > > +	/* mem_map scanning */
> > > +	for_each_online_node(i) {
> > > +		struct page *page, *end;
> > > +
> > > +		page = NODE_MEM_MAP(i);
> > > +		end = page + NODE_DATA(i)->node_spanned_pages;
> > > +
> > > +		scan_block(page, end, NULL);
> > > +	}
[...]
> One completely unoptimized thing you can do which will scan a 'struct
> page' at a time is this:
>
> for_each_online_node(i) {
> 	unsigned long pfn;
> 	for (pfn = node_start_pfn(i); pfn < node_end_pfn(i); pfn++) {
> 		struct page *page;
> 		if (!pfn_valid(pfn))
> 			continue;
> 		page = pfn_to_page(pfn);
> 		scan_block(page, page+1, NULL);
> 	}
> }
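As an aside on the pfn_valid() check in the loop above: node_spanned_pages
can include holes with no struct page behind them, so the walk has to skip
such pfns rather than pass them to pfn_to_page() blindly. How pfn_valid()
decides this depends on the memory model; with a flat mem_map it is roughly
just a bounds check, something like the sketch below (illustrative only, not
any particular architecture's real definition):

	/* illustrative flat-mem_map style check, not an actual definition */
	static inline int pfn_valid_sketch(unsigned long pfn)
	{
		return pfn >= ARCH_PFN_OFFSET &&
		       (pfn - ARCH_PFN_OFFSET) < max_mapnr;
	}

Sparse configurations instead look the pfn up in the section map, but the
effect on the loop is the same: only pfns that really have a struct page
behind them get scanned.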
It seems that node_start_pfn() isn't available on all architectures, so I
ended up with something like the code below:
+	/* struct page scanning for each node */
+	for_each_online_node(i) {
+		pg_data_t *pgdat = NODE_DATA(i);
+		unsigned long start_pfn = pgdat->node_start_pfn;
+		unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
+		unsigned long pfn;
+
+		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+			struct page *page;
+
+			/* skip holes in the node's spanned range */
+			if (!pfn_valid(pfn))
+				continue;
+			page = pfn_to_page(pfn);
+			/* only scan pages that are in use */
+			if (page_count(page) == 0)
+				continue;
+			scan_block(page, page + 1, NULL);
+		}
+	}
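Where node_start_pfn()/node_end_pfn() are provided, they appear to be thin
wrappers around exactly these pg_data_t fields, roughly along the lines of
the sketch below (the precise definitions vary per architecture and memory
model, so take this as an approximation rather than a quote of any
particular header):

	/* rough sketch - the real per-arch definitions differ in detail */
	#define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
	#define node_end_pfn(nid)	(NODE_DATA(nid)->node_start_pfn + \
					 NODE_DATA(nid)->node_spanned_pages)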
Are pgdat->node_start_pfn and pgdat->node_spanned_pages always valid?
Thanks.
--
Catalin