Message-ID: <20100701035657.GU24712@dastard>
Date:	Thu, 1 Jul 2010 13:56:57 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	John Stultz <johnstul@...ibm.com>,
	Frank Mayhar <fmayhar@...gle.com>
Subject: Re: [patch 00/52] vfs scalability patches updated

On Wed, Jun 30, 2010 at 10:40:49PM +1000, Nick Piggin wrote:
> On Wed, Jun 30, 2010 at 09:30:54PM +1000, Dave Chinner wrote:
> > On Thu, Jun 24, 2010 at 01:02:12PM +1000, npiggin@...e.de wrote:
> > > Performance:
> > > Last time I was testing on a 32-node Altix which could be considered as not a
> > > sweet-spot for Linux performance target (ie. improvements there may not justify
> > > complexity). So recently I've been testing with a tightly interconnected
> > > 4-socket Nehalem (4s/32c/64t). Linux needs to perform well on this size of
> > > system.
> > 
> > Sure, but I have to question how much of this is actually necessary?
> > A lot of it looks like scalability for scalabilities sake, not
> > because there is a demonstrated need...
> 
> People are complaining about vfs scalability already (at least Intel,
> Google, IBM, and networking people). By the time people start shouting,
> it's too late because it will take years to get the patches merged. I'm
> not counting -rt people who have a bad time with global vfs locks.

I'm not denying that we need to do work here - I'm questioning the
"change everything at once" approach this patch set takes. You've
started from the assumption that everything the dcache_lock and
inode_lock protect is a problem and gone on from there.

However, if we move some things out from under the dcache_lock, then
the pressure on the lock goes down and the remaining operations may
not hinder scalability. That's what I'm trying to understand, and
why I'm suggesting that you need to break this down into smaller,
more easily verifiable, benchmarked patch sets. As it stands, I have
no way of verifying whether any of these patches are necessary, and
I need to understand that as part of reviewing them...
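
As a concrete example of the kind of per-lock data that would make
that comparison possible: on a kernel built with CONFIG_LOCK_STAT,
the contention numbers in /proc/lock_stat can be sampled before and
after each benchmark run. A minimal sketch (it only filters the raw
lines mentioning a given lock name rather than parsing the exact
format; the default lock name is just an illustration):

#!/usr/bin/env python
# Minimal sketch: print the /proc/lock_stat lines that mention a given
# lock (e.g. dcache_lock), so contention before/after a group of changes
# can be compared.  Requires a kernel built with CONFIG_LOCK_STAT.
import sys

def lock_stat_lines(lock_name):
    # Return the raw /proc/lock_stat lines containing lock_name.
    with open("/proc/lock_stat") as f:
        return [line.rstrip() for line in f if lock_name in line]

if __name__ == "__main__":
    name = sys.argv[1] if len(sys.argv) > 1 else "dcache_lock"
    for line in lock_stat_lines(name):
        print(line)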

> > > *** 64 parallel git diff on 64 kernel trees fully cached (avg of 5 runs):
> > >                 vanilla         vfs
> > > real            0m4.911s        0m0.183s
> > > user            0m1.920s        0m1.610s
> > > sys             4m58.670s       0m5.770s
> > > After vfs patches, 26x increase in throughput, however parallelism is limited
> > > by test spawning and exit phases. sys time improvement shows closer to 50x
> > > improvement. vanilla is bottlenecked on dcache_lock.
> > 
> > So if we cherry pick patches out of the series, what is the bare
> > minimum set needed to obtain a result in this ballpark? Same for the
> > other tests?
> 
> Well it's very hard to just scale up bits and pieces because the
> dcache_lock is currently basically global (except for d_flags and
> some cases of d_count manipulations).
> 
> Start chipping away at bits and pieces of it as people hit bottlenecks
> and I think it will end in a bigger mess than we have now.

I'm not suggesting that we should do this randomly. A more
structured approach that demonstrates the improvement as groups of
changes are made will help us evaluate the changes more effectively.
It may be that we need every single change in the patch series, but
there is no way we can verify that with the information that has
been provided.
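
For what it's worth, a rough sketch of the shape of the parallel git
diff test quoted above is all that's needed to generate those numbers
per group of changes. The tree locations (/trees/linux-0 onwards) and
concurrency level below are assumptions for illustration, not the
harness that was actually used; the 64 trees and 5-run average come
from the description above:

#!/usr/bin/env python
# Rough sketch of the "64 parallel git diff on 64 kernel trees" test:
# spawn one 'git diff' per tree, wait for all of them, and report the
# average wall-clock time over 5 runs.  Tree paths are illustrative.
import subprocess
import time

NTREES = 64
TREES = ["/trees/linux-%d" % i for i in range(NTREES)]

def run_once():
    start = time.time()
    procs = [subprocess.Popen(["git", "diff"], cwd=tree,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
             for tree in TREES]
    for p in procs:
        p.wait()
    return time.time() - start

if __name__ == "__main__":
    runs = [run_once() for _ in range(5)]
    print("avg wall time over 5 runs: %.3fs" % (sum(runs) / len(runs)))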

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com