Date:	Tue, 30 Aug 2011 11:19:45 -0700
From:	Greg KH <greg@...ah.com>
To:	Olaf Hering <olaf@...fle.de>
Cc:	KY Srinivasan <kys@...rosoft.com>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"gregkh@...e.de" <gregkh@...e.de>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"virtualization@...ts.osdl.org" <virtualization@...ts.osdl.org>
Subject: Re: [PATCH 0000/0046] Staging: hv: Driver cleanup

On Tue, Aug 30, 2011 at 08:04:34PM +0200, Olaf Hering wrote:
> On Tue, Aug 30, Greg KH wrote:
> 
> > > > In my test system, the IDE drives are now discovered twice, once by
> > > > hv_storvsc and once by libata:
> > > 
> > > This is a known (old) problem. The way this was handled earlier was to have
> > > modprobe rules in place to set up a dependency that would force the load of the
> > > Hyper-V driver (blk / stor) ahead of the native driver; if the load of the PV
> > > driver succeeded, we would not load the native driver. In SLES 11 SP1, we had a
> > > rule for loading blkvsc. With the merge of blkvsc and storvsc, the only change
> > > we need to make is to have storvsc in the rule (instead of blkvsc).
> > 
> > Why do we need a rule at all?  Shouldn't the module dependency stuff
> > handle the autoloading of the drivers properly from the initrd now that
> > the hotplug logic is hooked up properly?
> 
> There is no plan to load hv_vmbus (or xen-platform-pci) earlier than
> native drivers.
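
The kind of rule KY describes above might look like the following in a modprobe config file. This is only a sketch: the file path, and the choice of ata_piix as the native IDE driver to intercept, are assumptions for illustration, not what SLES actually shipped.

```
# /etc/modprobe.d/hyperv.conf  (hypothetical path and module names)
# Try the paravirtual storage driver first; fall back to loading the
# native (emulated) IDE driver only if hv_storvsc fails to load.
install ata_piix /sbin/modprobe hv_storvsc 2>/dev/null || /sbin/modprobe --ignore-install ata_piix
```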

Wait, what do you mean by "native drivers"?

Aren't the hv_vmbus drivers the "native drivers" here?

Or are you referring to the "emulated-slow-as-hell drivers" that are
used to boot the machine?

> That was the purpose of the modprobe.conf files. Now that there is a
> vmbus, that fact could be checked before any other attempt to load
> drivers is made; hv_vmbus should be loaded, and all of its devices
> probed manually with modprobe `cat modulealias`.

I agree with the first part, but no manual modprobe should ever be
needed: the hotplug boot process should load those modules automatically
when the vmbus devices are seen by the vmbus core and the hotplug events
are generated, which in turn calls modprobe, right?
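
The hotplug path Greg is describing works like this: when the vmbus core registers a device, the kernel emits a uevent carrying a MODALIAS string, and udev hands that string to modprobe, which resolves it to a driver via modules.alias. A minimal sketch of what udev effectively runs per device; the GUID-style alias below is a made-up example, and on a real Hyper-V guest the value would be read from /sys/bus/vmbus/devices/*/modalias:

```shell
# Simulated: build the command udev would run for one new vmbus device,
# instead of executing modprobe (the real alias comes from sysfs).
modalias='vmbus:{ba6163d9-04a1-4d29-b605-72e2ffb1dc7f}'  # hypothetical alias
cmd="modprobe $modalias"
echo "$cmd"
```

No modprobe.conf rule is involved at all: autoloading falls out of the driver declaring which device aliases it supports.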

So there should not need to be any special module.conf file changes for
hv systems, with the exception that the "emulated" drivers should be
added to the blacklist.
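
The one config change Greg concedes would be a blacklist entry for the emulated driver, along these lines. Again a sketch: the file name is invented, and ata_piix stands in for whichever native driver grabs the emulated IDE disks.

```
# /etc/modprobe.d/hyperv-blacklist.conf  (hypothetical)
# Keep the emulated IDE driver from binding the same disks that
# hv_storvsc already handles.
blacklist ata_piix
```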

> > Or is the hotplug code not working correctly?
> 
> There is nothing to hotplug. hv_vmbus has to be loaded first so that it
> can take over the devices. But it seems that there is no shutdown of the
> emulated hardware; that's why the disk "sda" is shown twice.
> 
> I spot a flaw here.

I agree :)

> KY, can hv_vmbus shut down emulated hardware? At least the disks, because
> CD-ROMs are apparently still handled by the native drivers?

They are?  Ick, why can't the vmbus storage driver see a cdrom device?
It's just a scsi device, right?

thanks

greg k-h
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
