Message-ID: <20101116030242.GI22876@dastard>
Date: Tue, 16 Nov 2010 14:02:43 +1100
From: Dave Chinner <david@...morbit.com>
To: Nick Piggin <npiggin@...nel.dk>
Cc: Nick Piggin <npiggin@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Al Viro <viro@...iv.linux.org.uk>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [patch 1/6] fs: icache RCU free inodes
On Mon, Nov 15, 2010 at 03:21:00PM +1100, Nick Piggin wrote:
> On Mon, Nov 15, 2010 at 12:00:27PM +1100, Dave Chinner wrote:
> > On Fri, Nov 12, 2010 at 12:24:21PM +1100, Nick Piggin wrote:
> > > On Wed, Nov 10, 2010 at 9:05 AM, Nick Piggin <npiggin@...nel.dk> wrote:
> > > > On Tue, Nov 09, 2010 at 09:08:17AM -0800, Linus Torvalds wrote:
> > > >> On Tue, Nov 9, 2010 at 8:21 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> > > >> >
> > > >> > You can see problems using this fancy thing:
> > > >> >
> > > >> > - Need to use slab ctor() to not overwrite some sensitive fields of
> > > >> > reused inodes.
> > > >> > (spinlock, next pointer)
> > > >>
> > > >> Yes, the downside of using SLAB_DESTROY_BY_RCU is that you really
> > > >> cannot initialize some fields in the allocation path, because they may
> > > >> end up being still used while allocating a new (well, re-used) entry.
> > > >>
> > > >> However, I think that in the long run we pretty much _have_ to do that
> > > >> anyway, because the "free each inode separately with RCU" is a real
> > > >> overhead (Nick reports 10-20% cost). So it just makes my skin crawl to
> > > >> go that way.
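
(A minimal sketch of the constructor pattern being described here, using a
hypothetical foo_inode type; none of these names come from the patches in
this thread. Fields that concurrent RCU readers may still touch are set up
once in the slab constructor, while per-allocation fields can still be
initialised in the normal allocation path.)

	/*
	 * Hypothetical illustration of the SLAB_DESTROY_BY_RCU pattern;
	 * the foo_inode structure and function names are made up.
	 */
	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/list.h>
	#include <linux/init.h>

	struct foo_inode {
		spinlock_t		lock;	/* may still be used by RCU readers */
		struct hlist_node	hash;	/* likewise: only touch in the ctor */
		unsigned long		ino;	/* safe to set at allocation time */
	};

	static struct kmem_cache *foo_inode_cachep;

	/* Runs once per slab object, not on every allocation/reuse. */
	static void foo_inode_ctor(void *obj)
	{
		struct foo_inode *fi = obj;

		spin_lock_init(&fi->lock);
		INIT_HLIST_NODE(&fi->hash);
	}

	static int __init foo_inode_cache_init(void)
	{
		foo_inode_cachep = kmem_cache_create("foo_inode",
						sizeof(struct foo_inode), 0,
						SLAB_DESTROY_BY_RCU,
						foo_inode_ctor);
		return foo_inode_cachep ? 0 : -ENOMEM;
	}

(With SLAB_DESTROY_BY_RCU it is the slab page, not the individual object,
that is RCU-freed, so an object can be reused while a reader still holds a
pointer to it; that is why the ctor, and not the allocation path, has to own
those fields.)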
> > > >
> > > > This is a creat/unlink loop on a tmpfs filesystem. Any real filesystem
> > > > is going to be *much* heavier in creat/unlink (so that 10-20% cost would
> > > > look more like a few %), and any real workload is going to have a
> > > > much less intensive pattern.
> > >
> > > So to get some more precise numbers: on a new kernel, on a Nehalem-class
> > > CPU, a creat/unlink busy loop on ramfs (the worst possible case for inode
> > > RCU) costs 12% more time with inode RCU.
> > >
> > > If we go to ext4 over a ramdisk, it's 4.2% slower. Btrfs is 4.3% slower,
> > > and XFS is about 4.9% slower.
> >
> > That is actually significant, because current XFS performance using
> > delayed logging for pure metadata operations is not that far off
> > ramdisk results. Indeed, the simple test:
> >
> > 	while (i++ < 1000 * 1000) {
> > 		int fd = open("foo", O_CREAT|O_RDWR, 0777);
> > 		unlink("foo");
> > 		close(fd);
> > 	}
> >
> > Running 8 instances of the above on XFS, each in its own directory,
> > on a single SATA drive with delayed logging enabled, using my current
> > working XFS tree (which includes the SLAB_DESTROY_BY_RCU inode cache
> > and XFS inode cache changes, and numerous other XFS scalability
> > enhancements), currently gives ~250k files/s. It took ~33s for the 8
> > loops above to complete in parallel, and the run was 100% CPU bound...
>
> David,
>
> This is 30K inodes per second per CPU, versus the nearly 800K per
> second number that I measured the 12% slowdown with. That's about 25x
> slower.
Hi Nick, the ramfs (800k/12%) numbers are not the context I was
responding to - you're comparing apples to oranges. I was responding to
the "XFS [on a ramdisk] is about 4.9% slower" result.
> How you
> are trying to FUD this as doing anything but confirming my hypothesis, I
> don't know, and honestly I don't want to know, so don't try to tell me.
Hardly FUD. I thought it important to point out that your
filesystem-on-ramdisk numbers are not theoretical at all - we can
achieve the same level of performance on a single SATA drive for
this workload on XFS. Therefore, the 5% difference in performance
you've measured on a ramdisk will definitely be visible in the real
world and we need to consider it in that context, not as a
"theoretical concern".
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com