Message-ID: <20090527073136.GT846@one.firstfloor.org>
Date: Wed, 27 May 2009 09:31:36 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andi Kleen <andi@...stfloor.org>, paul@...-scientist.net,
linux-kernel@...r.kernel.org
Subject: Re: [2.6.27.24] Kernel coredump to a pipe is failing
On Tue, May 26, 2009 at 05:29:35PM -0700, Andrew Morton wrote:
> On Wed, 27 May 2009 02:11:04 +0200 Andi Kleen <andi@...stfloor.org> wrote:
>
> > > I dunno. Is this true of all linux filesystems in all cases? Maybe.
> >
> > Assuming one of them is not would you rather want to fix that file system
> > or 10 zillion user programs (including the kernel core dumper) that
> > get it wrong? @)
> >
>
> I think that removing one bug is better than adding one.
>
> Many filesystems will return a short write if they hit a memory
> allocation failure, for example. pipe_write() sure will. Retrying
> is appropriate in such a case.
Sorry, but are you really suggesting that every program in the world that
calls write() anywhere should wrap each call in a retry loop? That seems
like really bad API design to me, requiring such contortions around a
fundamental system call just to work around kernel deficiencies.

I can just imagine the programmers putting nasty comments about the
Linux kernel on top of those loops, and those comments would be fully
deserved.
And the same applies to in-kernel users really.
The memory allocation case sounds more like a bug in these filesystems
and in pipe. For example, the network stack sleeps waiting for memory;
perhaps these filesystems should do the same.
Or it should just always return -ENOMEM. Typically, when the system is
badly out of memory you're going to lose anyway, because a lot of
things start failing.
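To make the two options concrete, a kernel-style sketch (illustrative
only: the function is made up, this is not the real pipe_write() or any
filesystem's write path):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Toy write-path fragment: either wait for memory or fail the whole
 * call, but never hand the caller a short byte count to loop on. */
static ssize_t toy_write_chunk(struct page **pagep)
{
        struct page *page;

        /* GFP_KERNEL may sleep and run reclaim: the "wait for memory"
         * behaviour, like the network stack does for socket buffers. */
        page = alloc_page(GFP_KERNEL);
        if (!page)
                return -ENOMEM; /* genuinely out of memory */

        *pagep = page;
        return PAGE_SIZE;
}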
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.