Message-Id: <1240943671.938.575.camel@calx>
Date: Tue, 28 Apr 2009 13:34:31 -0500
From: Matt Mackall <mpm@...enic.com>
To: Tony Luck <tony.luck@...il.com>
Cc: Wu Fengguang <fengguang.wu@...el.com>, Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <rostedt@...dmis.org>,
Frédéric Weisbecker <fweisbec@...il.com>,
Larry Woodman <lwoodman@...hat.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Pekka Enberg <penberg@...helsinki.fi>,
Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Andi Kleen <andi@...stfloor.org>,
Alexey Dobriyan <adobriyan@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH 5/5] proc: export more page flags in /proc/kpageflags
On Tue, 2009-04-28 at 11:11 -0700, Tony Luck wrote:
> On Tue, Apr 28, 2009 at 1:33 AM, Wu Fengguang <fengguang.wu@...el.com> wrote:
> > 1) FAST
> >
> > It takes merely 0.2s to scan 4GB pages:
> >
> > ./page-types 0.02s user 0.20s system 99% cpu 0.216 total
>
> OK on a tiny system ... but sounds painful on a big
> server. 0.2s for 4G scales up to 3 minutes 25 seconds
> on a 4TB system (4TB systems were being sold two
> years ago ... so by now the high end will have moved
> up to 8TB or perhaps 16TB).
>
> Would the resulting output be anything but noise on
> a big system (a *lot* of pages can change state in
> 3 minutes)?
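[For scale: page-types gets that number by streaming /proc/kpageflags,
which exports one 64-bit flags word per page frame, so a 4GB machine
with 4KB pages is 1M entries, i.e. only ~8MB of reads. 4TB is 1024x
that, hence 1024 * 0.2s ~= 205s, the 3m25s quoted above. A minimal
single-threaded sketch of such a scan follows -- not the actual
page-types source, and the nonzero-counting is just a stand-in for its
per-flag bucketing:

/* build: cc -O2 kpf-scan.c; needs root to read /proc/kpageflags */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/kpageflags", O_RDONLY);
	if (fd < 0) { perror("open"); return 1; }

	uint64_t buf[8192];		/* 64KB of flag words per read */
	uint64_t pages = 0, flagged = 0;
	ssize_t got;

	while ((got = read(fd, buf, sizeof(buf))) > 0) {
		for (ssize_t i = 0; i < got / 8; i++)
			if (buf[i])	/* stand-in for per-flag counters */
				flagged++;
		pages += got / 8;
	}
	printf("%llu pages scanned, %llu with flags set\n",
	       (unsigned long long)pages, (unsigned long long)flagged);
	close(fd);
	return 0;
}
]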
Bah. The rate of change is proportional to #cpus, not #pages. Assuming
you've got 1024 processors, you could run the scan in parallel in .2
seconds still.
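A minimal sketch of that parallel scan, assuming pthreads and
per-thread pread() ranges over /proc/kpageflags -- the thread count
and chunk size here are illustrative choices, not anything page-types
does today:

/* build: cc -O2 -pthread kpf-pscan.c <total-pages> */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS 8		/* scale toward #cpus on a big box */
#define CHUNK	 (64 * 1024)	/* flag words (pages) per pread */

struct range { uint64_t first, count; };

static void *scan(void *arg)
{
	struct range *r = arg;
	/* private fd per thread; pread offsets never race anyway */
	int fd = open("/proc/kpageflags", O_RDONLY);
	if (fd < 0) { perror("open"); return NULL; }

	uint64_t *buf = malloc(CHUNK * sizeof(uint64_t));
	uint64_t done = 0, flagged = 0;

	while (done < r->count) {
		uint64_t n = r->count - done;
		if (n > CHUNK)
			n = CHUNK;
		off_t off = (off_t)(r->first + done) * sizeof(uint64_t);
		ssize_t got = pread(fd, buf, n * sizeof(uint64_t), off);
		if (got <= 0)
			break;
		for (ssize_t i = 0; i < got / 8; i++)
			if (buf[i])	/* stand-in for flag bucketing */
				flagged++;
		done += got / 8;
	}
	printf("pfn %llu +%llu: %llu pages with flags set\n",
	       (unsigned long long)r->first, (unsigned long long)r->count,
	       (unsigned long long)flagged);
	free(buf);
	close(fd);
	return NULL;
}

int main(int argc, char **argv)
{
	/* total page frames to scan, e.g. derived from /proc/meminfo */
	uint64_t total = argc > 1 ? strtoull(argv[1], NULL, 0) : 1 << 20;
	pthread_t tid[NTHREADS];
	struct range r[NTHREADS];

	for (int i = 0; i < NTHREADS; i++) {
		r[i].first = total * i / NTHREADS;
		r[i].count = total * (i + 1) / NTHREADS - r[i].first;
		pthread_create(&tid[i], NULL, scan, &r[i]);
	}
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Each thread owns a disjoint pfn range, so no locking is needed; the
scan is I/O-bound on the procfs read path, which is why it should
scale roughly with the number of cpus doing the reading.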
It won't be an atomic snapshot, obviously. But stopping the whole
machine on a system that size is probably not what you want anyway.
--
http://selenic.com : development and support for Mercurial and Linux