Message-ID: <20091015105332.GB3127@wotan.suse.de>
Date: Thu, 15 Oct 2009 12:53:32 +0200
From: Nick Piggin <npiggin@...e.de>
To: Anton Blanchard <anton@...ba.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel@...r.kernel.org,
Ravikiran G Thirumalai <kiran@...lex86.org>,
Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jens Axboe <axboe@...nel.dk>
Subject: Re: Latest vfs scalability patch
On Thu, Oct 15, 2009 at 09:08:54PM +1100, Anton Blanchard wrote:
>
> Hi Nick,
>
> > Several people have been interested in testing my vfs patches, so rather
> > than resend patches I have uploaded a rollup against Linus's current
> > head.
> >
> > ftp://ftp.kernel.org/pub/linux/kernel/people/npiggin/patches/fs-scale/
> >
> > I have tested ext2, ext3, autofs4 and nfs, as well as in-memory filesystems,
> > and they seem OK (although this doesn't mean there are no bugs!). Otherwise,
> > if your filesystem compiles, then there is a reasonable chance of it working,
> > or ask me and I can try updating it for the new locking.
> >
> > I would be interested in seeing any numbers people might come up with,
> > including single-threaded performance.
>
> Thanks for doing a rollup patch; it made it easy to test. I gave it a spin on
> a 64-core (128-thread) POWER5+ box. I started simple by looking at open/close
> performance, eg:
I wonder what other good performance tests you could add to your test
framework? creat/unlink is another easy one, and for each case the useful
variants are threads running in their own cwd versus a common cwd.
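
Roughly the sort of thing I mean (just an illustrative sketch, not code from
the patch set; the thread count, iteration count and the --private-cwd switch
are made up, and since the cwd is shared by all threads of a process, the
own-cwd variant is approximated by giving each thread its own subdirectory):

/* creat/unlink microbenchmark sketch: each thread repeatedly creates and
 * unlinks a file, either in the common cwd or in its own directory.
 * Build with: gcc -O2 -pthread -o creat-unlink creat-unlink.c */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define NTHREADS	8		/* illustrative defaults */
#define ITERATIONS	100000

static int private_dirs;		/* 0 = common dir, 1 = per-thread dir */

static void *worker(void *arg)
{
	long id = (long)arg;
	char dir[64], path[128];
	int i;

	if (private_dirs) {
		/* give each thread its own parent directory, so there is
		 * no contention on a single parent inode/dentry */
		snprintf(dir, sizeof(dir), "thread-%ld", id);
		mkdir(dir, 0755);
		snprintf(path, sizeof(path), "%s/tmpfile", dir);
	} else {
		/* distinct names, but all in the shared cwd */
		snprintf(path, sizeof(path), "tmpfile-%ld", id);
	}

	for (i = 0; i < ITERATIONS; i++) {
		int fd = open(path, O_CREAT | O_WRONLY, 0644);
		if (fd < 0) {
			perror("open");
			break;
		}
		close(fd);
		unlink(path);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t threads[NTHREADS];
	long i;

	private_dirs = argc > 1 && !strcmp(argv[1], "--private-cwd");

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, worker, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}

Timing it under time(1) in both modes should show whether contention on the
shared parent directory is the bottleneck.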
BTW, for these cases it would be nice if your tests could run on ramfs,
because that isolates just the vfs. Perhaps also include other filesystems
as you get time, but I think ramfs is the most useful one for us to start
with.
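
For reference, getting such a ramfs mount is just a mount(2) call (again only
a sketch; the /mnt/vfs-test path is arbitrary and it needs CAP_SYS_ADMIN):

/* ramfs setup sketch: create a mount point and mount ramfs on it so the
 * benchmark exercises only the vfs, with no backing store involved. */
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>

int main(void)
{
	mkdir("/mnt/vfs-test", 0755);
	if (mount("none", "/mnt/vfs-test", "ramfs", 0, NULL)) {
		perror("mount ramfs");
		return 1;
	}
	/* chdir() the benchmark into /mnt/vfs-test before running, and
	 * umount("/mnt/vfs-test") when done */
	return 0;
}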