Date:	Fri, 8 Oct 2010 09:48:21 -0400
From:	Christoph Hellwig <hch@...radead.org>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Christoph Hellwig <hch@...radead.org>,
	Al Viro <viro@...IV.linux.org.uk>,
	Dave Chinner <david@...morbit.com>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 15/18] fs: introduce a per-cpu last_ino allocator

On Fri, Oct 08, 2010 at 12:20:19PM +0200, Eric Dumazet wrote:
> If iunique() were scalable, sockets could use it, so that we could
> have a hard guarantee that two sockets on a machine never share an
> inum.
> 
> A reasonable compromise here is to use a simple and scalable
> allocator, and take the risk that two sockets share the same inum.
> 
> While it might break some applications playing fstat() games on
> sockets, the current scheme is vastly faster.
> 
> I have worked with machines with millions of sockets open
> concurrently; iunique() was not an option, and the applications
> didn't care about possible inum clashes.

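A minimal sketch of the batched per-cpu scheme being discussed
(illustrative, not the exact patch; the batch size and names are
placeholders):

#include <linux/percpu.h>
#include <linux/atomic.h>

#define LAST_INO_BATCH	1024

static DEFINE_PER_CPU(unsigned int, last_ino);

/*
 * Hand out inode numbers from a per-cpu cursor that is refilled
 * from a shared atomic counter in batches, so the shared cacheline
 * is touched only once per LAST_INO_BATCH allocations.  Duplicates
 * are possible only after the 32-bit space wraps, which is the
 * accepted risk.
 */
unsigned int get_next_ino(void)
{
	unsigned int *p = &get_cpu_var(last_ino);
	unsigned int res = *p;

	if (unlikely((res & (LAST_INO_BATCH - 1)) == 0)) {
		static atomic_t shared_last_ino;

		res = atomic_add_return(LAST_INO_BATCH, &shared_last_ino)
			- LAST_INO_BATCH;
	}

	res++;
	if (unlikely(!res))	/* never hand out inode number 0 */
		res++;
	*p = res;
	put_cpu_var(last_ino);
	return res;
}
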
The current version of iunique() is indeed rather suboptimal, as is
the pure counter approach.  I think the right way to deal with this
is to use an idr allocator.  That means the filesystem needs to
explicitly free the inode number once the inode is gone, but that
just makes the usage clearer.  Together with the lazy assignment
scheme for synthetic filesystems, that should give us both speed and
correctness.

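An untested sketch of how the idr-based scheme could look, written
against the later idr_alloc() interface; alloc_ino() and free_ino()
are made-up helper names:

#include <linux/idr.h>
#include <linux/spinlock.h>
#include <linux/fs.h>

static DEFINE_IDR(ino_idr);
static DEFINE_SPINLOCK(ino_lock);

/* Allocate the lowest unused inode number, starting from 1. */
int alloc_ino(struct inode *inode)
{
	int id;

	idr_preload(GFP_KERNEL);
	spin_lock(&ino_lock);
	id = idr_alloc(&ino_idr, inode, 1, 0, GFP_NOWAIT);
	spin_unlock(&ino_lock);
	idr_preload_end();
	if (id < 0)
		return id;
	inode->i_ino = id;
	return 0;
}

/* The filesystem calls this once the inode is finally gone. */
void free_ino(struct inode *inode)
{
	spin_lock(&ino_lock);
	idr_remove(&ino_idr, inode->i_ino);
	spin_unlock(&ino_lock);
}

With lazy assignment, a synthetic filesystem would call alloc_ino()
only when the inode number is first needed, for example from its
->getattr method, so inodes that are never stat()ed pay nothing.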
