Message-ID: <20070424001934.GC1663@thunk.org>
Date: Mon, 23 Apr 2007 20:19:34 -0400
From: Theodore Tso <tytso@....edu>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Eric Hopper <hopper@...ifarious.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: Question about Reiser4
On Mon, Apr 23, 2007 at 04:53:03PM -0700, H. Peter Anvin wrote:
> Theodore Tso wrote:
> >
> >One of the big problems of using a filesystem as a DB is the system
> >call overheads. If you use huge numbers of tiny files, then each
> >attempt to read an atom of information from the DB takes three system
> >calls --- an open(), read(), and close(), with all of the overheads in
> >terms of dentry and inode cache.
> >
>
> Now, to be fair, there are probably a number of cases where
> open/lseek/readv/close and open/lseek/writev/close would be worth doing
> as a single system call. The big problem as far as I can see involves
> EINTR handling; such a system call has serious restartability implications.
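
For concreteness, the combined call being described amounts to
something like the helper below done in a single trip into the kernel
(the name and signature are only illustrative); the interrupted-read
case is where the restartability question bites, since a partially
completed call has no file descriptor to hand back for userspace to
resume from.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Illustrative only: a userspace emulation of a combined
 * open+pread+close operation.  Done as a real system call it would
 * cost one kernel entry instead of three; the catch is that if the
 * read is interrupted partway through, there is no fd for userspace
 * to restart from.
 */
static ssize_t readfile(const char *path, void *buf, size_t count,
			off_t offset)
{
	int fd = open(path, O_RDONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = pread(fd, buf, count, offset);
	if (n < 0) {
		int err = errno;	/* don't let close() clobber errno */

		close(fd);
		errno = err;
		return -1;
	}
	close(fd);
	return n;
}
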
Sure, but Hans wants to change /etc/inetd.conf into /etc/inetd.conf.d,
where you have: /etc/inetd.conf.d/telnet/port,
/etc/inetd.conf.d/telnet/protocol, /etc/inetd.conf.d/telnet/wait,
/etc/inetd.conf.d/telnet/userid, /etc/inetd.conf.d/telnet/daemon,
etc. for each individual line in /etc/inetd.conf. (And where each
file might contain only 2-4 characters: i.e., "23", "tcp",
"root", etc.)
So it's not enough just to collapse open/pread/close into a single
system call; in order to gain back the performance squandered by all
of these itsy-bitsy tiny little files, you want to collapse the
open/pread/close for many of these little files into a single system
call, hence Hans's insistence on sys_reiser4(); otherwise his scheme
doesn't work all that well.
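
Purely as an illustration of the shape such a batched interface might
take (this is not the actual sys_reiser4() API, just a sketch of the
batching idea), the userspace emulation below shows the
vector-of-reads pattern; the point of a real batched system call is
that the loop would run inside the kernel, so the per-call overhead
gets paid once per batch instead of three times per file.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Hypothetical batched-read interface: hand over a vector of small
 * files to read in one go.  The emulation below still does 3*N
 * syscalls from userspace; a kernel implementation would not.
 */
struct file_read_req {
	const char *path;	/* file to read */
	void *buf;		/* where to put the data */
	size_t count;		/* buffer size */
	ssize_t result;		/* bytes read, or -errno */
};

static void readfiles(struct file_read_req *reqs, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		int fd = open(reqs[i].path, O_RDONLY);

		if (fd < 0) {
			reqs[i].result = -errno;
			continue;
		}
		reqs[i].result = pread(fd, reqs[i].buf,
				       reqs[i].count, 0);
		if (reqs[i].result < 0)
			reqs[i].result = -errno;
		close(fd);
	}
}
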
- Ted