Message-ID: <20160615070022.GA3787@grep.be>
Date: Wed, 15 Jun 2016 09:00:22 +0200
From: Wouter Verhelst <w@...r.be>
To: Markus Pargmann <mpa@...gutronix.de>
Cc: Pranay Srivastava <pranjas@...il.com>,
nbd-general@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [Nbd] [PATCH v2 4/5]nbd: make nbd device wait for its users.
On Wed, Jun 15, 2016 at 08:30:45AM +0200, Markus Pargmann wrote:
> Thanks for the explanations. I think my understanding was off by one ;)..
> I didn't realize that the DO_IT thread from the userspace has the block
> device open as well.
Obviously; otherwise it couldn't issue an ioctl() on it :-)
> I thought a bit about this: does it make sense to delay the essential
> cleanup steps until all open file handles have really been closed? That
> way the block device is still there even if the DO_IT thread exits, and
> everything is only cleaned up once the last file handle goes away. Maybe
> this makes the code simpler and we can use krefs directly without any
> strange constructs.
> What do you think?
>
> This would also allow the client to setup a new socket as long as it
> does not close the nbd file handle.
That sounds like the behaviour I described earlier about possible retries
from userspace...
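
To make sure we're talking about the same thing, here is a rough sketch of
what I understand that to look like on the kernel side (illustrative only,
names are made up, this is not the current nbd.c):

#include <linux/kref.h>
#include <linux/blkdev.h>

struct nbd_device_example {
	struct kref refs;	/* one reference per open file handle */
	struct socket *sock;	/* may be replaced while the device stays open */
	/* ... */
};

static void nbd_example_last_release(struct kref *ref)
{
	struct nbd_device_example *nbd =
		container_of(ref, struct nbd_device_example, refs);

	/* The last opener is gone; only now tear everything down:
	 * shut down the socket, clear the queue, reset the size, ... */
}

static int nbd_example_open(struct block_device *bdev, fmode_t mode)
{
	struct nbd_device_example *nbd = bdev->bd_disk->private_data;

	kref_get(&nbd->refs);
	return 0;
}

static void nbd_example_release(struct gendisk *disk, fmode_t mode)
{
	struct nbd_device_example *nbd = disk->private_data;

	/* Cleanup happens only when this drops the last reference, i.e.
	 * once every open file handle (including the one held by the
	 * DO_IT thread) has been closed. */
	kref_put(&nbd->refs, nbd_example_last_release);
}

In other words, the DO_IT thread exiting would no longer trigger the
teardown by itself.
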
> Could this behavior be potentially problematic for any client
> implementation?
I don't think it could, but I'm not sure I understand all the details.
What would happen if:
- nbd is connected from pid X, pid Y does NBD_DISCONNECT, pid X hangs
and doesn't exit?
- nbd is connected from pid X, server disconnects while pid Y is trying
to access the device, pid X tries to reconnect but it takes a while?
> Does it solve our other issue with setting up a new socket for an
> existing nbd block device?
It could, depending.
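
FWIW, on the userspace side I'd imagine it roughly like this (a sketch
only, not actual nbd-client code, error checking omitted): keep the fd on
/dev/nbdX open, reconnect to the server, then hand the new socket to the
kernel:

#include <sys/ioctl.h>
#include <linux/nbd.h>

int nbd_swap_socket(int nbd_fd, int new_sock)
{
	/* Drop the old (dead) socket but keep the device configured. */
	ioctl(nbd_fd, NBD_CLEAR_SOCK);

	/* Attach the freshly connected socket... */
	ioctl(nbd_fd, NBD_SET_SOCK, new_sock);

	/* ...and resume serving requests; NBD_DO_IT blocks until
	 * disconnect, so real code would run this in its own thread. */
	return ioctl(nbd_fd, NBD_DO_IT);
}
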
--
< ron> I mean, the main *practical* problem with C++, is there's like a dozen
people in the world who think they really understand all of its rules,
and pretty much all of them are just lying to themselves too.
-- #debian-devel, OFTC, 2016-02-12