Message-Id: <C48DD624-6EDF-4EED-B474-8BEA021F00F0@alex.org.uk>
Date: Thu, 15 Sep 2016 13:39:11 +0100
From: Alex Bligh <alex@...x.org.uk>
To: Christoph Hellwig <hch@...radead.org>
Cc: Alex Bligh <alex@...x.org.uk>, Wouter Verhelst <w@...r.be>,
"nbd-general@...ts.sourceforge.net"
<nbd-general@...ts.sourceforge.net>, linux-block@...r.kernel.org,
Josef Bacik <jbacik@...com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
mpa@...gutronix.de, kernel-team@...com
Subject: Re: [Nbd] [RESEND][PATCH 0/5] nbd improvements
> On 15 Sep 2016, at 13:36, Christoph Hellwig <hch@...radead.org> wrote:
>
> On Thu, Sep 15, 2016 at 01:33:20PM +0100, Alex Bligh wrote:
>> At an implementation level that is going to be a little difficult
>> for some NBD servers, e.g. ones that fork() a different process per
>> connection. There is in general no IPC to speak of between server
>> instances. Such servers would thus be unsafe with more than one
>> connection if FLUSH is in use.
>>
>> I believe such servers include the reference server where there is
>> process per connection (albeit possibly with several threads).
>>
>> Even single process servers (including mine - gonbdserver) would
>> require logic to pair up multiple connections to the same
>> device.
>
> Why? If you only send the completion after your I/O syscall returned
> you are fine if fsync comes from a different process, no matter
> whether you're using direct or buffered I/O underneath.
That's probably right in the case of file-based backends running on a
Linux OS (see the sketch below). But gonbdserver, for instance,
supports Ceph-based backends, where each connection might be talking
to a completely separate Ceph node, and there may be no cache
consistency between connections.
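
For the file-backed case, here is roughly why that works. The sketch
below is illustrative only: it is not the reference server's actual
code, and the handler names are made up. Once write(2) has returned,
the data is in the page cache, which is shared across processes, so
fsync(2) on any descriptor open on the same file, even from a
different process, flushes it:

  #include <unistd.h>

  /* Connection A, possibly its own fork()ed process: send the
   * NBD_CMD_WRITE completion only after pwrite(2) has returned,
   * i.e. once the data is in the shared page cache. */
  static int handle_write(int fd, const void *buf, size_t len, off_t off)
  {
          if (pwrite(fd, buf, len, off) != (ssize_t)len)
                  return -1;      /* reply with an error instead */
          return 0;               /* safe to send the completion */
  }

  /* Connection B, a different process with its own fd on the same
   * backing file: NBD_CMD_FLUSH maps to fsync(2), which flushes
   * all dirty data for the file, including pages dirtied through
   * connection A's fd.  Any write completed before the flush reply
   * is therefore durable, with no IPC between the processes. */
  static int handle_flush(int fd)
  {
          return fsync(fd);       /* 0 on success, -1 on error */
  }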
--
Alex Bligh