Message-Id: <20070604122857.1399e3fc.dada1@cosmosbay.com>
Date:	Mon, 4 Jun 2007 12:28:57 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Davide Libenzi <davidel@...ilserver.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ulrich Drepper <drepper@...hat.com>
Subject: Re: [patch 1/2] ufd v1 - unsequential O(1) fdmap core

On Mon, 4 Jun 2007 01:34:49 -0700
Andrew Morton <akpm@...ux-foundation.org> wrote:

> On Mon, 4 Jun 2007 10:09:41 +0200 Ingo Molnar <mingo@...e.hu> wrote:
> 
> > 
> > * Ingo Molnar <mingo@...e.hu> wrote:
> > 
> > > i think this sums it up:
> > > 
> > >  http://www.uwsg.iu.edu/hypermail/linux/kernel/0705.3/2490.html
> > 
> > i mean this mail started it:
> > 
> >   http://linux.derkeiler.com/Mailing-Lists/Kernel/2007-05/msg13070.html
> > 
> > > and some more, with a benchmark as well:
> > > 
> > >  http://linux.derkeiler.com/Mailing-Lists/Kernel/2007-05/msg13685.html
> > 
> 
> Yeah, I remember all that but I don't think it provides a suitable
> description of what all this code is there for - what problem it is
> solving and how it solves it.
> 
> If we just want some pseudo-private fd space for glibc to use then I'd have
> thought that the existing code could be tweaked to do that: top-down
> allocation, start at some high offset, etc.  But apparently there's more
> to it than this.

Goals:
1) libc wants 'private fds'
2) Latencies of get_unused_fd() for huge processes (more than 100,000 file handles)

Point 1) can use a top-down allocation, or a 'last unused' index.
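
For point 1), here is a minimal user-space sketch (not from any patch; bitmap_alloc_top_down is a made-up name) of what top-down allocation over a plain bitmap could look like. A real implementation would scan word-at-a-time or keep a 'last unused' hint instead of walking bit by bit:

#include <stdint.h>
#include <stdio.h>

/* 'map' must hold (nbits + 63) / 64 words; returns the highest free slot or -1. */
static int bitmap_alloc_top_down(uint64_t *map, unsigned int nbits)
{
	for (int fd = (int)nbits - 1; fd >= 0; fd--) {
		if (!(map[fd / 64] & (1ULL << (fd % 64)))) {
			map[fd / 64] |= 1ULL << (fd % 64);
			return fd;
		}
	}
	return -1;
}

int main(void)
{
	uint64_t map[16] = { 0 };	/* room for 1024 fds */

	printf("%d\n", bitmap_alloc_top_down(map, 1024));	/* 1023 */
	printf("%d\n", bitmap_alloc_top_down(map, 1024));	/* 1022 */
	return 0;
}

Private fds handed out this way stay away from the low numbers that POSIX-style openers expect.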


Point 2) Instead of introducing a *complex* layer, couldn't we improve the existing one?

If the main problem we want to solve is the potentially slow bitmap search,
we could logically divide the open_fds bitmap into pages (4096 * 8 = 32768 bits per page on the i386/x86_64 arches).

We would have to add a new field to 'struct fdtable': a pointer to an array of u32 counters, one per page, counting the number of 'one' bits in that page. This array is tiny: only about 128 bytes for 1,000,000 file handles (1,000,000 / 32,768 ≈ 31 counters of 4 bytes each).

get_unused_fd() could then use this array to select an appropriate page (one known to have at least one zero bit), then do a find_next_zero_bit() restricted to at most PAGE_SIZE bytes. The worst-case latency would be similar to the VM's when clearing a page. If applications use the point 1) hint (asking the kernel for any fd rather than the lowest one), the typical latency would be close to zero.
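
Here is a minimal user-space model of that idea, not kernel code; all names (struct fdmap, chunk_used, BITS_PER_CHUNK) are illustrative, and a GCC/Clang builtin stands in for find_next_zero_bit():

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_CHUNK	32768u			/* one 4 KiB page of bitmap */
#define WORDS_PER_CHUNK	(BITS_PER_CHUNK / 64)

struct fdmap {
	unsigned int max_fds;	/* capacity of the map, in fds */
	uint64_t *open_fds;	/* the usual open-fd bitmap */
	uint32_t *chunk_used;	/* number of 'one' bits in each chunk */
};

static int fdmap_init(struct fdmap *m, unsigned int max_fds)
{
	unsigned int chunks = (max_fds + BITS_PER_CHUNK - 1) / BITS_PER_CHUNK;

	m->max_fds = max_fds;
	m->open_fds = calloc((size_t)chunks * WORDS_PER_CHUNK, sizeof(uint64_t));
	m->chunk_used = calloc(chunks, sizeof(uint32_t));
	return (m->open_fds && m->chunk_used) ? 0 : -1;
}

/* Lowest-fd allocation that only scans chunks known to have a free slot. */
static int fdmap_get_unused_fd(struct fdmap *m)
{
	unsigned int chunks = (m->max_fds + BITS_PER_CHUNK - 1) / BITS_PER_CHUNK;

	for (unsigned int c = 0; c < chunks; c++) {
		if (m->chunk_used[c] == BITS_PER_CHUNK)
			continue;	/* chunk is full: skip 4 KiB of bitmap */

		for (unsigned int w = 0; w < WORDS_PER_CHUNK; w++) {
			size_t idx = (size_t)c * WORDS_PER_CHUNK + w;
			uint64_t word = m->open_fds[idx];

			if (word == UINT64_MAX)
				continue;
			unsigned int bit = (unsigned int)__builtin_ctzll(~word);
			unsigned int fd = c * BITS_PER_CHUNK + w * 64 + bit;

			if (fd >= m->max_fds)
				return -1;	/* only the tail of the last chunk is free */
			m->open_fds[idx] |= 1ULL << bit;
			m->chunk_used[c]++;
			return (int)fd;
		}
	}
	return -1;	/* table is full */
}

static void fdmap_put_fd(struct fdmap *m, unsigned int fd)
{
	m->open_fds[fd / 64] &= ~(1ULL << (fd % 64));
	m->chunk_used[fd / BITS_PER_CHUNK]--;
}

int main(void)
{
	struct fdmap m;

	if (fdmap_init(&m, 1000000) < 0)
		return 1;
	printf("first:  %d\n", fdmap_get_unused_fd(&m));	/* 0 */
	printf("second: %d\n", fdmap_get_unused_fd(&m));	/* 1 */
	fdmap_put_fd(&m, 0);
	printf("reused: %d\n", fdmap_get_unused_fd(&m));	/* 0 again */
	return 0;
}

The point is the early 'continue' on full chunks: a nearly full table costs one counter read per 32768 fds instead of scanning 4 KiB of bitmap.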

Old applications (still asking for the POSIX "give me the lowest fd" crap) would benefit from this too, without recompilation or glibc changes.

