Message-ID: <20150824234611.GV3902@dastard>
Date: Tue, 25 Aug 2015 09:46:11 +1000
From: Dave Chinner <david@...morbit.com>
To: Jeff Moyer <jmoyer@...hat.com>
Cc: Brian Norris <computersforpeace@...il.com>,
Artem Bityutskiy <dedekind1@...il.com>,
Richard Weinberger <richard@....at>,
Dongsheng Yang <yangds.fnst@...fujitsu.com>,
linux-mtd@...ts.infradead.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] ubifs: Allow O_DIRECT
On Mon, Aug 24, 2015 at 01:19:24PM -0400, Jeff Moyer wrote:
> Brian Norris <computersforpeace@...il.com> writes:
>
> > On Mon, Aug 24, 2015 at 10:13:25AM +0300, Artem Bityutskiy wrote:
> >> Now, some user-space fails when direct I/O is not supported.
> >
> > I think the whole argument rested on what it means when "some user space
> > fails"; apparently that "user space" is just a test suite (which
> > can/should be fixed).
>
> Even if it wasn't a test suite it should still fail. Either the fs
> supports O_DIRECT or it doesn't. Right now, the only way an application
> can figure this out is to try an open and see if it fails. Don't break
> that.
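The probe Jeff is talking about boils down to something like the
following; a minimal userspace sketch, with the path and error
handling purely illustrative:

/* Try open(O_DIRECT) and see if it fails; EINVAL means the
 * filesystem does not support O_DIRECT.  Path is illustrative. */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/test/file", O_RDWR | O_CREAT | O_DIRECT, 0644);

	if (fd < 0) {
		if (errno == EINVAL)
			printf("filesystem does not support O_DIRECT\n");
		else
			perror("open");
		return 1;
	}

	/* O_DIRECT open succeeded; IO on fd is direct, subject to the
	 * filesystem's implementation. */
	close(fd);
	return 0;
}
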
Who cares how a filesystem implements O_DIRECT as long as it does
not corrupt data? ext3 fell back to buffered IO in many situations,
yet the only complaints about that were about performance. IOWs,
it's long been true that if the user cares about O_DIRECT
*performance* then they have to be careful about their choice of
filesystem.
But if it's only 5 lines of code per filesystem to support O_DIRECT
*correctly* via buffered IO, then exactly why should userspace have
to jump through hoops to explicitly handle open(O_DIRECT) failure?
Especially when you consider that all they can do is fall back to
buffered IO themselves....
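Those few lines would look roughly like the stub below. This is a
sketch only, not the actual ubifs patch from this thread, and it
assumes a ~4.2-era ->direct_IO signature plus a filesystem that uses
the generic file read/write helpers: a ->direct_IO that transfers
zero bytes makes those helpers finish the request via buffered IO,
and merely having the method is what stops open(O_DIRECT) from
returning -EINVAL.

#include <linux/fs.h>
#include <linux/uio.h>

/* Sketch of a "fall back to buffered IO" stub; name and signature
 * are assumptions for a ~v4.2 kernel, not the posted ubifs patch. */
static ssize_t ubifs_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
			       loff_t offset)
{
	/*
	 * Transfer nothing: the generic read/write paths treat a short
	 * (here, zero-byte) direct IO as "do the rest via buffered IO".
	 */
	return 0;
}

/* Plus one line in the filesystem's existing address_space_operations
 * initialiser (e.g. in fs/ubifs/file.c):
 *
 *	.direct_IO	= ubifs_direct_IO,
 */
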
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com