Date:	Wed, 7 Oct 2009 12:10:52 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-fsdevel@...r.kernel.org,
	Ravikiran G Thirumalai <kiran@...lex86.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [rfc][patch] store-free path walking

On Wed, Oct 07, 2009 at 11:56:57AM +0200, Jens Axboe wrote:
> On Wed, Oct 07 2009, Nick Piggin wrote:
> > Anyway, this is the basics working for now, microbenchmark shows
> > same-cwd lookups scale linearly now too. We can probably slowly
> > tackle more cases if they come up as being important, simply by
> > auditing filesystems etc.
> 
>                                 throughput
> ------------------------------------------------
> 2.6.32-rc3-git          |      561.218 MB/sec
> 2.6.32-rc3-git+patch    |      627.022 MB/sec
> 2.6.32-rc3-git+patch+inc|      969.761 MB/sec
> 
> So better, quite a bit too. Latencies are not listed here, but they are
> also a lot better. Perf top still shows ~95% spinlock time. I did a
> shorter run (the above are full 600 second runs) of 60s with profiling
> and the full 64 clients, this time using -a as well (which generated
> 9.4GB of trace data!). The top is now:
> 
> _spin_lock (92%)
>         path_get (39%)
>                 d_path (59%)
>                 path_init (26%)
>                 path_walk (13%)
>         dput (37%)
>                 path_put (86%)
>                 link_path_walk (13%)
>         __d_path (23%)
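
(For reference: a call-graph profile like the one above would be
gathered with something along these lines; the exact invocation is
a guess, Jens didn't post his command line:

	perf record -a -g sleep 60
	perf report

with plain "perf top" giving the live per-symbol view quoted earlier.)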

path_init, path_walk, and link_path_walk are all the non-lockless
variants, so the RCU walk is dropping out in some cases. A good
chunk of the path_put time will be coming from those locked lookups
too. It could be improved by expanding the set of cases we do the
lockless walk for (or by allowing a lockless walk to turn into a
locked walk part-way through, rather than restarting the whole
thing, which is probably a very good idea anyway).
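
Roughly, that part-way fallback could look like the sketch below.
This is not from the posted patch; nd_drop_rcu, the per-dentry
d_seq count, and the LOOKUP_RCU flag are all made-up names here.
The idea: instead of returning -EAGAIN and redoing the whole
lookup, revalidate the current position, take real references, and
carry on as an ordinary locked walk:

	/*
	 * Hypothetical sketch only.  Assumes the lockless walk runs
	 * under rcu_read_lock() and sampled a per-dentry seqcount
	 * (d_seq) into @seq when it reached this dentry.
	 */
	static int nd_drop_rcu(struct nameidata *nd, unsigned seq)
	{
		struct dentry *dentry = nd->path.dentry;

		spin_lock(&dentry->d_lock);
		if (read_seqcount_retry(&dentry->d_seq, seq)) {
			/* Raced with rename/unlink: must restart. */
			spin_unlock(&dentry->d_lock);
			return -EAGAIN;
		}
		dget(dentry);			/* pin our position */
		spin_unlock(&dentry->d_lock);

		mntget(nd->path.mnt);
		rcu_read_unlock();
		nd->flags &= ~LOOKUP_RCU;	/* continue locked */
		return 0;
	}

That way, when the RCU walk hits something it can't handle
locklessly, it keeps the progress already made instead of throwing
it away and redoing the whole lookup with locks.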

d_path and __d_path are... I think dbench is doing something stupid
there. Even those could possibly be optimised to avoid d_lock as
well. But after looking at an strace of dbench, I'd rather take
profiles from real workloads before adding complexity (even a real
samba serving a netbench workload would be preferable to dbench, I
think).

But it's always nice to see numbers and results. A nearly 2x
increase isn't too bad, even if it is still horribly choked.


> And finally, this:
> 
> > +	if (!nd->dentry->d_inode) {
> > +		spin_unlock(&nd->path.dentry->d_lock);
> > +		return -EAGAIN;
> > +	}
> 
> doesn't compile, it wants to be
> 
>         if (!nd->path.dentry->d_inode) {

Ah thanks, forgot to refresh.

