Message-ID: <20070424145525.GA18918@infradead.org>
Date: Tue, 24 Apr 2007 15:55:25 +0100
From: Christoph Hellwig <hch@...radead.org>
To: David Chinner <dgc@....com>
Cc: Christoph Hellwig <hch@...radead.org>, gnb@....com,
xfs-oss <xfs@....sgi.com>, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: review: don't block non-blocking writes when frozen
On Tue, Apr 24, 2007 at 09:13:37AM +1000, David Chinner wrote:
> So, given the catch-22 you've just presented us can we revisit the
> nfsd non-blocking I/O issue again? This affects anyone using DM
> snapshots on their NFS servers and has nothing to do with HSMs
> or DMAPI...
>
> FWIW, you can still do non-blocking userspace I/O to a file, so this
> XFS patch is still valid for mainline (that's how I tested it).
I've been investigating the situation a little bit more, and here's
what's going on:
- SuS allows for O_NONBLOCK on regular files as per
http://www.opengroup.org/onlinepubs/007908799/xsh/write.html
- actually implementing O_NONBLOCK semantics for regular files
breaks userspace when poll/select claims files are ready to
read/write but they aren't, see http://lkml.org/lkml/2004/10/17/17
So we can't really expose O_NONBLOCK semantics on regular files to
userspace, and the common code needs to make sure this doesn't happen.
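Roughly what I have in mind for the common code bit - just a sketch, the
helper name is made up and where exactly it lives is up to the series:

#include <linux/fs.h>

/*
 * Sketch only, not existing kernel API:  decide whether a write may
 * block.  Regular files always block so poll/select semantics stay
 * sane; only non-regular files (and in-kernel callers like nfsd)
 * honour O_NONBLOCK.
 */
static inline int write_may_block(struct file *file)
{
	if (S_ISREG(file->f_dentry->d_inode->i_mode))
		return 1;	/* never non-blocking for regular files */
	return !(file->f_flags & O_NONBLOCK);
}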
EJUKEBOX on snapshots does make sense, though.  Can you please send
a full patch series for nfsd, the common code and the xfs write path
so that this actually gets used and behaviour is consistent for all
(or at least most) filesystems?
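Just to make sure we mean the same thing, roughly (pure sketch using
the helper above - the function name is invented, and whether nfsd
wants -EAGAIN or some other error mapped to NFSERR_JUKEBOX is exactly
what the series needs to nail down):

/*
 * Sketch of the filesystem side:  if the caller asked for non-blocking
 * behaviour and the filesystem is frozen (e.g. for a DM snapshot),
 * fail the write instead of sleeping until the fs is thawed, and let
 * nfsd turn that error into a jukebox reply to the client.
 */
ssize_t sketch_file_write(struct file *file, const char __user *buf,
			  size_t len, loff_t *ppos)
{
	struct super_block *sb = file->f_dentry->d_inode->i_sb;

	if (!write_may_block(file) && sb->s_frozen != SB_UNFROZEN)
		return -EAGAIN;		/* nfsd: reply NFSERR_JUKEBOX */

	/* ... the normal, possibly blocking, write path ... */
	return len;
}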
Also, now that the patch goes to mainline, please kill the ugly
FILP_DELAY_FLAG and just check the flags directly.  And it should
probably only check O_NONBLOCK.  The only architecture where O_NDELAY
differs from O_NONBLOCK is sparc, and it already translates the value
for us.
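I.e. instead of hiding it behind FILP_DELAY_FLAG just open-code the
check (sketch):

	/* check the open flag directly instead of FILP_DELAY_FLAG */
	if (file->f_flags & O_NONBLOCK)
		return -EAGAIN;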