Message-ID: <20190902193341.GA28723@kroah.com>
Date:   Mon, 2 Sep 2019 21:33:41 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     Sagi Grimberg <sagi@...mberg.me>
Cc:     Christoph Hellwig <hch@....de>, linux-nvme@...ts.infradead.org,
        Keith Busch <keith.busch@...el.com>,
        James Smart <james.smart@...adcom.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] nvme: fire discovery log page change events to
 userspace

On Fri, Aug 30, 2019 at 11:14:39AM -0700, Sagi Grimberg wrote:
> 
> > > > > > > You are correct that this information can be derived from sysfs, but the
> > > > > > > main reason we add it here is that in a udev rule we can't just go
> > > > > > > ahead and start looking these attributes up and parsing them.
> > > > > > > 
> > > > > > > We could send the discovery AEN with NVME_CTRL_NAME and then
> > > > > > > have systemd run something like:
> > > > > > > 
> > > > > > > nvme connect-all -d nvme0 --sysfs
> > > > > > > 
> > > > > > > and have nvme-cli retrieve all this stuff from sysfs?
> > > > > > 
> > > > > > Actually that may be a problem.
> > > > > > 
> > > > > > There is a hypothetical case where, after the event was fired and
> > > > > > before it was handled, the discovery controller went away and came
> > > > > > back as a different controller instance, so the old instance number
> > > > > > now refers to a different discovery controller.
> > > > > > 
> > > > > > This is why we need this information in the event. And we verify this
> > > > > > information in sysfs in nvme-cli.
> > > > > 
> > > > > Well, that must be a common issue with uevents, right?  Don't we
> > > > > usually have an increasing serial number for that or something?
> > > > 
> > > > Yes we do, userspace should use it to order events.  Does udev not
> > > > handle that properly today?
> > > 
> > > The problem is not the ordering of events; it's really that the chardev
> > > can be removed and reallocated to a different controller (possibly a
> > > completely different discovery controller) by the time userspace handles
> > > the event.
> > 
> > So?  You will have gotten the remove uevent and then the new addition
> > uevent, in order, showing you this.  So your userspace code knows that
> > something went away and then came back properly, and you should be kept
> > in sync.
> 
> Still don't understand how this is ok...
> 
> I have /dev/nvme0 representing a network endpoint that I would discover
> from, and it raises an event telling me to do a discovery operation
> (namely, to issue an ioctl to it), so my udev rule calls a systemd script.
> 
> By the time I actually get to do that, /dev/nvme0 now represents a
> different network endpoint (to which the event is no longer relevant). I
> would rather the discovery explicitly fail than give me something
> different, so we pass some arguments that we verify in the operation.
> 
> It's an edge case, but it was raised by people as a potential issue.

Ok, and how do you handle this same thing for something like /dev/sda ?
(hint, it isn't new, and is something we solved over a decade ago)

If you worry about stuff like this, use a persistent device naming
scheme and have your device node be pointed to by a symlink.  Create
that symlink by using the information in the initial 'ADD' uevent.

That way, when userspace opens the device node, it "knows" exactly which
one it opens.  It sounds like you have a bunch of metadata to describe
these uniquely, so pass that in the ADD event, not in some other crazy
non-standard manner.
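
For illustration, such a rule might look roughly like this (a minimal
sketch only; the ENV{NVME_*} variable names are placeholders for whatever
identifying metadata the ADD uevent actually carries):

  # Hypothetical rule: turn the identifying metadata from the ADD uevent
  # into a stable symlink, so userspace opens the controller by identity
  # rather than by whatever instance number /dev/nvmeX happens to be.
  SUBSYSTEM=="nvme", ACTION=="add", ENV{NVME_TRADDR}=="?*", \
    SYMLINK+="nvme-fabrics/by-traddr/$env{NVME_TRTYPE}-$env{NVME_TRADDR}-$env{NVME_TRSVCID}"

If the controller instance later goes away, the symlink goes with it;
when the same endpoint comes back, possibly as a different /dev/nvmeX,
the symlink is recreated pointing at the new node.  A handler that opens
the symlink therefore either reaches the endpoint it was told about or
fails cleanly, instead of silently talking to the wrong one.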

thanks,

greg k-h
