Message-ID: <93b5c669-fbcf-e972-fc4d-3b08f1c76299@fb.com>
Date:   Thu, 15 Sep 2016 09:57:25 -0400
From:   Josef Bacik <jbacik@...com>
To:     Wouter Verhelst <w@...r.be>, Alex Bligh <alex@...x.org.uk>
CC:     Christoph Hellwig <hch@...radead.org>,
        "nbd-general@...ts.sourceforge.net" 
        <nbd-general@...ts.sourceforge.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        <linux-block@...r.kernel.org>, <mpa@...gutronix.de>,
        <kernel-team@...com>
Subject: Re: [Nbd] [RESEND][PATCH 0/5] nbd improvements

On 09/15/2016 09:17 AM, Wouter Verhelst wrote:
> On Thu, Sep 15, 2016 at 01:44:29PM +0100, Alex Bligh wrote:
>>
>>> On 15 Sep 2016, at 13:41, Christoph Hellwig <hch@...radead.org> wrote:
>>>
>>> On Thu, Sep 15, 2016 at 01:39:11PM +0100, Alex Bligh wrote:
>>>> That's probably right in the case of file-based backends running on a
>>>> Linux OS. But gonbdserver, for instance, supports Ceph-based backends,
>>>> where each connection might be talking to a completely separate Ceph
>>>> node, and there may be no cache consistency between connections.
>>>
>>> Yes, if you don't have a cache coherent backend you are generally
>>> screwed with a multiqueue protocol.
>>
>> I wonder if the ability to support multiqueue should be visible
>> in the negotiation stage. That would allow the client to refuse
>> to select multiqueue where it isn't safe.
>
> The server can always refuse to allow multiple connections.
>
> I was thinking of changing the spec as follows:
>
> diff --git a/doc/proto.md b/doc/proto.md
> index 217f57e..cb099e2 100644
> --- a/doc/proto.md
> +++ b/doc/proto.md
> @@ -308,6 +308,23 @@ specification, the
>  [kernel documentation](https://www.kernel.org/doc/Documentation/block/writeback_cache_control.txt)
>  may be useful.
>
> +For performance reasons, clients MAY open multiple connections to the
> +same server. To support such clients, servers SHOULD ensure that at
> +least one of the following conditions holds:
> +
> +* Flush commands are processed for ALL connections. That is, when an
> +  `NBD_CMD_WRITE` is processed on one connection, and then an
> +  `NBD_CMD_FLUSH` is processed on another connection, the data of the
> +  `NBD_CMD_WRITE` on the first connection MUST reach permanent storage
> +  before the reply of the `NBD_CMD_FLUSH` is sent.
> +* The server allows `NBD_CMD_WRITE` and `NBD_CMD_FLUSH` on at most one
> +  connection.
> +* Multiple connections are not allowed.
> +
> +In addition, clients using multiple connections SHOULD NOT send
> +`NBD_CMD_FLUSH` until every `NBD_CMD_WRITE` they intend the flush to
> +cover has been replied to.
> +
>  #### Request message
>
>  The request message, sent by the client, looks as follows:
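For illustration, here is a minimal C sketch (hypothetical names, not from the
thread) of one way a server could satisfy the first condition above: if every
connection writes through file descriptors referring to the same backing file,
then fsync() on any descriptor flushes data written via all of them, so a
flush received on one connection covers writes made on the others.

/* Sketch: all connections share one backing fd, so fsync() covers
 * writes from every connection (condition 1 above). */
#include <sys/types.h>
#include <unistd.h>

static int backing_fd;          /* opened once, shared by all workers */

/* called from any connection's worker on NBD_CMD_WRITE */
static ssize_t handle_write(const void *buf, size_t len, off_t off)
{
        return pwrite(backing_fd, buf, len, off);
}

/* called from any connection's worker on NBD_CMD_FLUSH */
static int handle_flush(void)
{
        /* flushes all dirty data for the file, regardless of which
         * connection issued the writes */
        return fsync(backing_fd);
}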
>
> The latter bit (on the client side) is because even if your backend has
> no cache coherency issues, TCP does not guarantee ordering between
> multiple connections. I don't know if the above is in line with what
> blk-mq does, but consider the following scenario:
>
> - A client sends two writes to the server, followed (immediately) by a
>   flush, where at least the second write and the flush are not sent over
>   the same connection.
> - The first write is a small one, and it is handled almost immediately.
> - The second write takes a little longer, so the flush is handled
>   earlier than the second write.
> - The network packet containing the flush reply gets lost for whatever
>   reason, so the client doesn't get it, and we fall into TCP
>   retransmits.
> - The second write finishes, and its reply header does not get lost.
> - After the second write reply reaches the client, the TCP retransmits
>   for the flush reply are handled.
>
> In the above scenario, the flush reply arrives on the client side after
> a write reply which it did not cover; so the client will (incorrectly)
> assume that the write has reached permanent storage when in fact it may
> not have done so yet.
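To make the client-side rule above concrete, here is a hypothetical sketch
(invented names; this is not the kernel's blk-mq or nbd code): the client
counts in-flight writes across all connections and defers the flush until
every write it must cover has been acknowledged, so reply reordering between
connections cannot mislead it.

#include <pthread.h>

struct nbd_client {
        pthread_mutex_t lock;
        pthread_cond_t  all_acked;
        unsigned int    writes_in_flight;   /* across all connections */
};

static void write_sent(struct nbd_client *c)
{
        pthread_mutex_lock(&c->lock);
        c->writes_in_flight++;
        pthread_mutex_unlock(&c->lock);
}

static void write_reply_received(struct nbd_client *c)
{
        pthread_mutex_lock(&c->lock);
        if (--c->writes_in_flight == 0)
                pthread_cond_broadcast(&c->all_acked);
        pthread_mutex_unlock(&c->lock);
}

static void flush_barrier(struct nbd_client *c)
{
        pthread_mutex_lock(&c->lock);
        while (c->writes_in_flight > 0)
                pthread_cond_wait(&c->all_acked, &c->lock);
        pthread_mutex_unlock(&c->lock);
        /* only now is it safe to send NBD_CMD_FLUSH on any connection */
}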

This isn't an NBD problem, this is an application problem.  The application must
wait for the completion of all writes it cares about _before_ issuing a flush.
This is the same for normal storage as it is for NBD.  It is not NBD's
responsibility to maintain coherency between multiple requests across
connections; its job is simply to act on and respond to requests.
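As an illustration of that same rule on ordinary storage, here is a sketch
using POSIX AIO (link with -lrt on glibc), where writes complete
asynchronously much like NBD requests: the application must observe the
write's completion before calling fsync(), or the flush is not guaranteed to
cover it.

#include <aio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

static int persist(int fd, const void *buf, size_t len, off_t off)
{
        struct aiocb cb;
        const struct aiocb *list[1] = { &cb };

        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = (void *)buf;
        cb.aio_nbytes = len;
        cb.aio_offset = off;

        if (aio_write(&cb) != 0)
                return -1;
        /* wait for the write's completion: the analogue of the NBD reply */
        while (aio_error(&cb) == EINPROGRESS)
                aio_suspend(list, 1, NULL);
        if (aio_return(&cb) != (ssize_t)len)
                return -1;
        /* only now is the write known to be covered by the flush */
        return fsync(fd);
}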

I think changing the specification to spell out these requirements for multiple
connections is a good thing; it keeps NBD servers from doing weird things like
routing different connections to the same export to different backing stores
without some sort of synchronization.  It should definitely be stated
explicitly somewhere that NBD does not provide any ordering guarantees across
connections, and that ordering is the application's responsibility.  Thanks,

Josef
