Message-ID: <71a0d6ff0910110508t11da5f62x34a7a8886c087a0b@mail.gmail.com>
Date: Sun, 11 Oct 2009 15:08:47 +0300
From: Alexander Shishkin <alexander.shishckin@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-fsdevel@...r.kernel.org, viro@...iv.linux.org.uk,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [RESEND] RFC: List per-process file descriptor
consumption when hitting file-max
2009/7/30 Andrew Morton <akpm@...ux-foundation.org>:
> If there's some reason why the problem is particularly severe and
> particularly hard to resolve by other means then sure, perhaps explicit
> kernel support is justified. But is that the case with this specific
> userspace bug?
Well, this can be figured out by userspace by traversing procfs and
counting the entries under fd/ for each process, but doing so itself
requires additional file descriptors, and since we are at the point
where the limit has already been hit, it may not work. There is, of
course, a good chance that the process that tried to open the
one-too-many descriptor will crash upon failing to do so (and thus
free a bunch of descriptors), but that only creates more confusion:
most of the time, the application that crashes when file-max is
reached is not the one that ate them all.
So, all in all, in certain cases there's no other way to figure out
who was leaking descriptors.
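For illustration, the procfs traversal described above might look
something like this sketch (function name and output format are mine,
not from any existing tool). Note that listing each fd/ directory
itself consumes a descriptor, which is exactly why this approach can
fail once file-max is reached:

```python
import os

def fd_counts():
    """Count open file descriptors per process by walking /proc/[pid]/fd."""
    counts = {}
    for name in os.listdir("/proc"):
        if not name.isdigit():
            continue
        try:
            # listdir() on fd/ opens a descriptor of its own -- this is
            # the step that may fail when the system-wide limit is hit.
            counts[int(name)] = len(os.listdir("/proc/%s/fd" % name))
        except OSError:
            # Process exited, permission denied, or we are out of fds.
            pass
    return counts

if __name__ == "__main__":
    # Print the ten biggest descriptor consumers.
    for pid, n in sorted(fd_counts().items(), key=lambda kv: -kv[1])[:10]:
        print(pid, n)
```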
Regards,
--
Alex