Message-ID: <20130815193122.GA19536@thunk.org>
Date: Thu, 15 Aug 2013 15:31:22 -0400
From: Theodore Ts'o <tytso@....edu>
To: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: linux-fsdevel@...r.kernel.org, xfs@....sgi.com,
linux-ext4@...r.kernel.org, Jan Kara <jack@...e.cz>,
LKML <linux-kernel@...r.kernel.org>, david@...morbit.com,
Tim Chen <tim.c.chen@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Andy Lutomirski <luto@...capital.net>
Subject: Re: page fault scalability (ext3, ext4, xfs)
On Thu, Aug 15, 2013 at 10:45:09AM -0700, Dave Hansen wrote:
>
> I _believe_ this is because the block allocation is occurring during the
> warmup, even in those numbers I posted previously. will-it-scale forks
> things off early and the tests spend most of their time in those while
> loops. Each "page fault handled" (the y-axis) is a trip through the
> while loop, *not* a call to testcase().
Ah, OK. Sorry, I misinterpreted what was going on.
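Just to make sure I now have the right picture, the shape of the loop
being measured is roughly the following (only a sketch, not the actual
will-it-scale source; MEMSIZE and the temp file setup are placeholders):

#include <assert.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEMSIZE (128UL << 20)           /* placeholder size */

void testcase(unsigned long long *iterations)
{
        char tmpfile[] = "/tmp/willitscale.XXXXXX";
        long pgsize = sysconf(_SC_PAGESIZE);
        int fd = mkstemp(tmpfile);

        assert(fd >= 0);
        assert(ftruncate(fd, MEMSIZE) == 0);
        unlink(tmpfile);

        /* the setup above runs once, during warmup */
        while (1) {
                char *c = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
                assert(c != MAP_FAILED);

                /* each store to a clean page takes a write fault and
                 * goes through the filesystem's ->page_mkwrite */
                for (unsigned long i = 0; i < MEMSIZE; i += pgsize)
                        c[i] = 0;

                /* unmapping the dirty MAP_SHARED region leaves the pages
                 * queued for writeback, which then runs concurrently with
                 * the faults of the next trip */
                munmap(c, MEMSIZE);

                (*iterations)++;        /* one y-axis sample per trip */
        }
}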
So basically, what we have going on in the test is (a) we're bumping
i_version and/or mtime, and (b) the munmap() implies an msync(), so
writeback is happening in the background concurrently with the write
page faults, and we may be (actually, almost certainly) seeing some
interference between the writeback and the page_mkwrite operations.
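The interference pattern itself can be reproduced from userspace with
something like the sketch below (the file name, mapping size, and loop
counts are made up): one thread keeps dirtying a MAP_SHARED mapping while
a second thread keeps kicking off writeback, so writeback keeps cleaning
and write-protecting the pages, and every new store has to go back through
the filesystem's ->page_mkwrite:

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE (128UL << 20)          /* arbitrary */

static char *map;
static int fd;
static volatile int stop;

static void *writeback_thread(void *arg)
{
        (void)arg;
        /* keep pushing dirty pages out; cleaning a page for I/O also
         * write-protects its PTEs, so the next store to it faults again */
        while (!stop)
                sync_file_range(fd, 0, MAP_SIZE, SYNC_FILE_RANGE_WRITE);
        return NULL;
}

int main(void)
{
        long pgsize = sysconf(_SC_PAGESIZE);
        pthread_t wb;

        fd = open("/tmp/pf-interference.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0 || ftruncate(fd, MAP_SIZE)) {
                perror("setup");
                return 1;
        }
        map = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        pthread_create(&wb, NULL, writeback_thread, NULL);

        /* every store to a page that writeback has just cleaned takes a
         * write fault and goes through ->page_mkwrite, where the filesystem
         * can bump mtime/i_version and (for ext4) start a journal handle */
        for (int pass = 0; pass < 100; pass++)
                for (unsigned long off = 0; off < MAP_SIZE; off += pgsize)
                        map[off] = (char)pass;

        stop = 1;
        pthread_join(wb, NULL);
        munmap(map, MAP_SIZE);
        close(fd);
        return 0;
}

(Compile with -pthread; sync_file_range() is Linux-specific.)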
That implies that if you redid the test using a ramdisk, which would
significantly speed up the writeback and reduce the overhead caused by
the journal transactions for the metadata updates, the results might
very well be different.
Cheers,
- Ted