Message-ID: <e7ca40f70706040739n72ba34e5v61555bd2cb8eddd1@mail.gmail.com>
Date: Mon, 4 Jun 2007 10:39:20 -0400
From: "Aaron Wiebe" <epiphani@...il.com>
To: linux-kernel@...r.kernel.org
Subject: Re: slow open() calls and o_nonblock
Sorry for the unthreaded responses - I wasn't cc'd here, so I'm
replying to these based on the mailing-list archives.
Al Viro wrote:
> BTW, why close these suckers all the time? It's not that kernel would
> be unable to hold thousands of open descriptors for your process...
> Hash descriptors by pathname and be done with that; don't bother with
> close unless you decide that you've got too many of them (e.g. when you
> get a hash conflict).
A valid point - I currently keep a pool of 4000 descriptors open and
cycle them out based on inactivity. I hadn't seriously considered
just keeping them all open, because I wasn't sure how well things
would go with 100,000 files open. Would my backend storage keep up?
Would the kernel mind maintaining 100,000 open files over NFS?
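For my own clarity, here's roughly how I read that suggestion - a
minimal sketch with placeholder names, an arbitrary table size, and
ignoring the case where the open flags differ between calls:

/* Cache one open descriptor per hash slot, keyed by pathname; on a
 * conflict the old descriptor is closed and replaced. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define FD_SLOTS 131072                 /* placeholder table size */

struct fd_slot {
        char *path;                     /* strdup'd pathname, NULL if empty */
        int   fd;
};

static struct fd_slot slots[FD_SLOTS];

static unsigned long hash_path(const char *p)
{
        unsigned long h = 5381;         /* djb2 string hash */
        while (*p)
                h = h * 33 + (unsigned char)*p++;
        return h % FD_SLOTS;
}

/* Return a cached descriptor for 'path', opening on a miss and
 * evicting whatever previously occupied the slot. */
int cached_open(const char *path, int flags, mode_t mode)
{
        struct fd_slot *s = &slots[hash_path(path)];

        if (s->path && strcmp(s->path, path) == 0)
                return s->fd;           /* hit: reuse the open descriptor */

        if (s->path) {                  /* conflict: close the old one */
                close(s->fd);
                free(s->path);
        }
        s->fd = open(path, flags, mode);
        if (s->fd < 0) {
                s->path = NULL;
                return -1;
        }
        s->path = strdup(path);
        return s->fd;
}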
The majority of the files would simply be idle - I would be keeping
file handles open for no reason. Pooling lets me substantially cut
the number of opens I need, but I'm hesitant to grow the pool to a
much larger size. Can anyone shed light on issues that may come up
with a massive pool, such as 128k descriptors?
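One issue I know I'd hit straight away is the per-process descriptor
limit. Something along these lines (a rough sketch - the 131072
figure is just my 128k example, and raising the hard limit needs
privilege) would have to run before the pool could grow that large:

#include <stdio.h>
#include <sys/resource.h>

/* Raise RLIMIT_NOFILE to 'wanted' if the current soft limit is lower. */
static int ensure_fd_limit(rlim_t wanted)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
                return -1;
        if (rl.rlim_cur >= wanted)
                return 0;               /* already high enough */

        rl.rlim_cur = wanted;
        if (rl.rlim_max < wanted)
                rl.rlim_max = wanted;   /* raising the hard limit needs root */

        return setrlimit(RLIMIT_NOFILE, &rl);
}

int main(void)
{
        if (ensure_fd_limit(131072) != 0)
                perror("setrlimit(RLIMIT_NOFILE)");
        return 0;
}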
-Aaron