Date:	Wed, 17 Nov 2010 12:12:54 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Nick Piggin <npiggin@...nel.dk>
Cc:	Nick Piggin <npiggin@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [patch 1/6] fs: icache RCU free inodes

On Tue, Nov 16, 2010 at 02:49:06PM +1100, Nick Piggin wrote:
> On Tue, Nov 16, 2010 at 02:02:43PM +1100, Dave Chinner wrote:
> > On Mon, Nov 15, 2010 at 03:21:00PM +1100, Nick Piggin wrote:
> > > This is 30K inodes per second per CPU, versus the nearly 800K per second
> > > number that I measured the 12% slowdown with. About 25x slower.
> > 
> > Hi Nick, the ramfs (800k/12%) numbers are not the context I was
> > responding to - you're comparing apples to oranges. I was responding to
> > the "XFS [on a ramdisk] is about 4.9% slower" result.
> 
> Well, XFS on ramdisk was (85k/4.9%).

How many threads? On a 2.26GHz Nehalem-class Xeon CPU, I'm seeing:

threads		files/s
 1		 45k
 2		 70k
 4		130k
 8		230k

Scalability is mainly limited by the dcache_lock. I'm not sure
what your 85k number relates to in the above chart. Is it a single
thread number, or something else? If it is a single thread, can you
run your numbers again with a thread per CPU?
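
To make sure we're measuring comparable things, the sort of per-thread
create/unlink loop I have in mind is roughly the sketch below. This is
illustrative only, not the actual harness; the /mnt/scratch paths, file
counts and timing are made-up placeholders, and it assumes the
/mnt/scratch/<thread-id>/ directories already exist.

/* Build with: cc -O2 -pthread create_bench.c */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define NFILES	100000		/* files created per thread (arbitrary) */

static void *worker(void *arg)
{
	long id = (long)arg;
	char path[64];
	int i, fd;

	for (i = 0; i < NFILES; i++) {
		snprintf(path, sizeof(path), "/mnt/scratch/%ld/f%d", id, i);
		fd = open(path, O_CREAT | O_WRONLY, 0644);
		if (fd < 0) {
			perror("open");
			break;
		}
		close(fd);
		unlink(path);	/* create + unlink => inode alloc/free churn */
	}
	return NULL;
}

int main(int argc, char **argv)
{
	int nthreads = argc > 1 ? atoi(argv[1]) : 1;
	pthread_t *tids = calloc(nthreads, sizeof(*tids));
	struct timespec t0, t1;
	double secs;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < nthreads; i++)
		pthread_create(&tids[i], NULL, worker, (void *)i);
	for (i = 0; i < nthreads; i++)
		pthread_join(tids[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%d threads: %.0f files/s\n", nthreads,
	       nthreads * (double)NFILES / secs);
	free(tids);
	return 0;
}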

> At a lower number, like 30k, I would
> expect that to be around 1-2%, perhaps. And in the context of a
> real workload that is not 100% CPU bound on creating and destroying a
> single inode, I expect that to be well under 1%.

I don't think we are comparing apples to apples. I cannot see how you
can get mainline XFS to sustain 85k files/s/cpu across any number of
CPUs, so let's make sure we are comparing the same thing...
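
(For anyone following along: the overhead being measured is the cost of
deferring the inode's slab free to an RCU callback. A rough sketch of
that shape is below -- it's the generic call_rcu() pattern, not the
actual patch; foo_inode_cachep, FOO_I() and the i_rcu rcu_head field
are placeholder names.)

static void foo_i_callback(struct rcu_head *head)
{
	struct inode *inode = container_of(head, struct inode, i_rcu);

	kmem_cache_free(foo_inode_cachep, FOO_I(inode));
}

static void foo_destroy_inode(struct inode *inode)
{
	/* was: kmem_cache_free(foo_inode_cachep, FOO_I(inode)); */
	call_rcu(&inode->i_rcu, foo_i_callback);
}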

> Like I said, I never disputed a potential regression, but I have looked
> for workloads that have a detectable regression and have not found any.
> And I have extrapolated microbenchmark numbers to show that it's not
> going to be a _big_ problem even in a worst case scenario.

How did you extrapolate the numbers?
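If it was straight linear scaling of the per-inode overhead, i.e. roughly

	12% x (30k / 800k) ~= 0.45%

then that matches the "well under 1%" figure, but it also assumes the
per-inode cost of the RCU free stays constant at the lower rate.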

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
