Message-ID: <20150514112304.GT15721@dastard>
Date:	Thu, 14 May 2015 21:23:04 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Al Viro <viro@...iv.linux.org.uk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Christoph Hellwig <hch@...radead.org>,
	Neil Brown <neilb@...e.de>
Subject: Re: [RFC][PATCHSET v3] non-recursive pathname resolution & RCU
 symlinks

On Wed, May 13, 2015 at 08:52:59PM -0700, Linus Torvalds wrote:
> On Wed, May 13, 2015 at 8:30 PM, Al Viro <viro@...iv.linux.org.uk> wrote:
> >
> > Maybe...  I'd like to see the profiles, TBH - especially getxattr() and
> > access() frequency on various loads.  Sure, make(1) and cc(1) really care
> > about stat() very much, but I wouldn't be surprised if something like
> > httpd or samba would be hitting getxattr() a lot...
> 
> So I haven't seen samba profiles in ages, but iirc we have more
> serious problems than trying to speed up basic filename lookup.
> 
> At least long long ago, inode semaphore contention was a big deal,
> largely due to readdir().

It still is - it's the prime reason people still need to create
hashed directory structures so that they can get concurrency in
directory operations.  IMO, the lack of concurrency in directory
operations is a more important problem to solve than readdir speed;
in large filesystems readdir and lookup are IO-bound operations, so
everything serialises on the IO because it's done with the i_mutex
held....
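
(A minimal user-space sketch of the kind of hashed directory fan-out
being described here: creates are spread across a set of shard
subdirectories picked by a hash of the name, so concurrent creates
contend on many per-directory locks rather than on a single i_mutex.
The shard count, hash and layout below are illustrative only, not
anything from this thread.)

/*
 * Illustrative only: spread creates across NSHARDS subdirectories
 * ("00" .. "ff") chosen by a hash of the file name, so concurrent
 * creates serialise on many per-directory locks instead of one
 * i_mutex.  The shard directories are assumed to exist already.
 */
#include <fcntl.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define NSHARDS	256

static uint32_t name_hash(const char *name)	/* FNV-1a; any hash will do */
{
	uint32_t h = 2166136261u;

	while (*name)
		h = (h ^ (unsigned char)*name++) * 16777619u;
	return h;
}

static int create_in_shard(int base_fd, const char *name)
{
	char path[PATH_MAX];

	snprintf(path, sizeof(path), "%02x/%s",
		 name_hash(name) % NSHARDS, name);
	return openat(base_fd, path, O_CREAT | O_EXCL | O_WRONLY, 0644);
}

With a flat layout every one of those creates would serialise on the
one parent directory; with the fan-out they only contend when two
names happen to hash to the same shard.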

> And readdir() itself, for that matter - we have no good vfs-level
> readdir caching, so it all ends up serialized on the inode
> semaphore, and it all goes all the way into the filesystem to get
> the readdir data.  And at least for ext4, readdir()
> is slow anyway, because it doesn't use the page cache, it uses
> that good old buffer cache, because of how ext4 does metadata
> journaling etc.

IIRC, ext4 readdir is not slow because of the use of the buffer
cache; it's slow because of the way it hashes dirents across blocks
on disk.  i.e. it has locality issues, not a caching problem.
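
(Not a fix proposed in the thread, but the usual user-space
mitigation for that hash-order/locality mismatch is to gather the
dirents first, sort them by inode number, and only then issue the
per-entry stat/lookup calls, so the follow-up IO walks the inode
table roughly in order instead of in hash order.  A rough sketch,
with illustrative names:)

/*
 * Rough sketch: stat a directory's entries in inode-number order
 * rather than in the (hashed) order readdir() returns them, to get
 * back some locality when the dirents are stored in hash order on
 * disk.
 */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

struct ent {
	ino_t	ino;
	char	name[256];
};

static int by_ino(const void *a, const void *b)
{
	const struct ent *x = a, *y = b;

	return (x->ino > y->ino) - (x->ino < y->ino);
}

static void scan_in_inode_order(const char *dirpath)
{
	DIR *d = opendir(dirpath);
	struct dirent *de;
	struct ent *ents = NULL, *tmp;
	size_t n = 0, cap = 0;
	struct stat st;

	if (!d)
		return;

	while ((de = readdir(d)) != NULL) {
		if (n == cap) {
			cap = cap ? cap * 2 : 1024;
			tmp = realloc(ents, cap * sizeof(*ents));
			if (!tmp)
				goto out;
			ents = tmp;
		}
		ents[n].ino = de->d_ino;
		snprintf(ents[n].name, sizeof(ents[n].name), "%s", de->d_name);
		n++;
	}

	qsort(ents, n, sizeof(*ents), by_ino);

	/* per-entry lookups now hit the inode table mostly in order */
	for (size_t i = 0; i < n; i++)
		fstatat(dirfd(d), ents[i].name, &st, AT_SYMLINK_NOFOLLOW);
out:
	closedir(d);
	free(ents);
}

Tools that walk very large htree directories commonly do something
like this to avoid seeking back and forth while stat()ing in
readdir order.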

> Having readdir() caching at the VFS layer would likely be a really
> good thing, but it's hard. It *might* be worth looking at the nfs4
> code to see if we could possibly move some of that code into the vfs
> layer, but the answer is likely "no", or at least "that's incredibly
> painful".

Maybe I'm missing something - what operation would be sped up by
caching readdir data? Are you trying to optimise the ->lookup that
tends to follow readdir by caching individual dirents? Or something
else?
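
(Purely to make the question concrete - nothing below is proposed in
the thread: one reading of "caching individual dirents" is a
per-directory name-to-inode cache populated by a readdir pass and
consulted by later lookups of those names, roughly like this
user-space toy:)

/*
 * Toy illustration, user space only: remember the name -> inode
 * mapping seen during one readdir() pass so later lookups of those
 * names can be answered from the cache instead of going back to the
 * directory.
 */
#include <dirent.h>
#include <search.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

static void fill_dirent_cache(const char *dirpath)
{
	DIR *d = opendir(dirpath);
	struct dirent *de;
	ENTRY e;

	if (!d)
		return;
	hcreate(1 << 20);			/* assumed capacity */
	while ((de = readdir(d)) != NULL) {
		e.key = strdup(de->d_name);
		e.data = (void *)(uintptr_t)de->d_ino;
		hsearch(e, ENTER);
	}
	closedir(d);
}

static ino_t cached_lookup(const char *name)	/* 0 == not cached */
{
	ENTRY q = { .key = (char *)name };
	ENTRY *r = hsearch(q, FIND);

	return r ? (ino_t)(uintptr_t)r->data : 0;
}

Whether something like that at the VFS level, keyed off readdir, is
what's being suggested is exactly what the question above is asking.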

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
