Message-ID: <728201270609081637n1ac7fd8med69757e224e098a@mail.gmail.com>
Date: Fri, 8 Sep 2006 18:37:19 -0500
From: "Ram Gupta" <ram.gupta5@...il.com>
To: "linux mailing-list" <linux-kernel@...r.kernel.org>
Subject: suggestion about rlimit for number of open file descriptors
Hi,
I came across a problem with the limit on the number of open file
descriptors. The scenario is as follows.
I have a controlling process which launches various other
applications, including third-party ones, and enforces resource
limits on them, including the number of open file descriptors.
This process forks, reads the resource limits from a configuration
file, applies them with setrlimit, and then execs the corresponding
application. The controlling process itself has a number of file
descriptors open. If the open file descriptor limit for the child
application is lower than the number of descriptors inherited from
the parent process, the child application's file cannot be opened
and the exec fails.
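
For concreteness, here is a minimal sketch of the scenario (the
descriptor counts, the limit value and the application name are all
made up, and error handling is trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        /* The controlling process itself holds many descriptors open. */
        for (int i = 0; i < 100; i++)
                open("/dev/null", O_RDONLY);

        if (fork() == 0) {
                /* Child: apply the limit read from the configuration
                 * file; here it is smaller than the number of inherited
                 * descriptors. */
                struct rlimit rl = { .rlim_cur = 64, .rlim_max = 64 };
                setrlimit(RLIMIT_NOFILE, &rl);

                /* The inherited descriptors already occupy slots above
                 * the new limit, so any further open() in the exec'd
                 * application fails with EMFILE and it cannot open its
                 * files. */
                execlp("third-party-app", "third-party-app", (char *)NULL);
                perror("exec");
                _exit(1);
        }
        wait(NULL);
        return 0;
}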
I searched for a solution to this problem but could not find an
existing way to solve it. I have thought of a couple of ways to do
it. One idea is to add a new clone flag which makes the child not
inherit the open file descriptors, and to use clone with that flag
instead of fork. The other approach is to add a new system call
which the parent process can call before calling setrlimit.
I am looking for your opinion on which is the better way, or
whether there is some other, better approach. Further, if this is a
generic enough issue, I can submit a patch.
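
To make the intent of either option concrete, below is a rough sketch
of the effect the new clone flag or the new system call would have,
written out by hand with today's interfaces: the descriptors inherited
from the controlling process are dropped in the child before the
configured limit is applied, so the limit only constrains the
application itself. The helper names and values are made up for
illustration only.

#include <sys/resource.h>
#include <unistd.h>

/* What the proposed clone flag / system call would do on the child's
 * behalf: get rid of every inherited descriptor except stdin, stdout
 * and stderr. */
static void drop_inherited_fds(void)
{
        long fd, maxfd = sysconf(_SC_OPEN_MAX);

        for (fd = 3; fd < maxfd; fd++)
                close(fd);
}

static void launch_with_limit(const char *app, char *const argv[],
                              rlim_t nofile)
{
        if (fork() == 0) {
                struct rlimit rl = { .rlim_cur = nofile, .rlim_max = nofile };

                /* This stands in for the proposed clone flag or system
                 * call, invoked before setrlimit as described above. */
                drop_inherited_fds();

                setrlimit(RLIMIT_NOFILE, &rl);
                execvp(app, argv);
                _exit(1);
        }
}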
Thanks
Ram Gupta