Date:	Wed, 19 Aug 2009 12:19:34 -0700
From:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
To:	Gregory Haskins <gregory.haskins@...il.com>
Cc:	Avi Kivity <avi@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	Anthony Liguori <anthony@...emonkey.ws>, kvm@...r.kernel.org,
	alacrityvm-devel@...ts.sourceforge.net,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	"Michael S. Tsirkin" <mst@...hat.com>,
	"Ira W. Snyder" <iws@...o.caltech.edu>,
	Joel Becker <joel.becker@...cle.com>
Subject: Re: [PATCH v3 3/6] vbus: add a "vbus-proxy" bus model for
	vbus_driver objects

On Wed, 2009-08-19 at 14:39 -0400, Gregory Haskins wrote:
> Hi Nicholas
> 
> Nicholas A. Bellinger wrote:
> > On Wed, 2009-08-19 at 10:11 +0300, Avi Kivity wrote:
> >> On 08/19/2009 09:28 AM, Gregory Haskins wrote:
> >>> Avi Kivity wrote:
> > 
> > <SNIP>
> > 
> >>> Basically, what it comes down to is both vbus and vhost need
> >>> configuration/management.  Vbus does it with sysfs/configfs, and vhost
> >>> does it with ioctls.  I ultimately decided to go with sysfs/configfs
> >>> because, at least at the time I looked, it seemed like the "blessed"
> >>> way to do user->kernel interfaces.
> >>>    
> >> I really dislike that trend but that's an unrelated discussion.
> >>
> >>>> They need to be connected to the real world somehow.  What about
> >>>> security?  can any user create a container and devices and link them to
> >>>> real interfaces?  If not, do you need to run the VM as root?
> >>>>      
> >>> Today it has to be root as a result of weak mode support in configfs, so
> >>> you have me there.  I am looking for help patching this limitation, though.
> >>>
> >>>    
> >> Well, do you plan to address this before submission for inclusion?
> >>
> > 
> > Greetings Avi and Co,
> > 
> > I have been following this thread, and although I cannot say that I am
> > intimately familiar with all of the virtualization considerations
> > involved to really add anything useful to that side of the discussion, I
> > think you guys are doing a good job of explaining the technical issues
> > for the non-virtualization wizards following this thread.  :-)
> > 
> > Anyways, I was wondering if you might be interested in sharing your
> > concerns w.r.t. configfs (configfs maintainer CC'ed) at some point..?
> 
> So for those tuning in, the reference here is the use of configfs for
> the management of this component of AlacrityVM, called "virtual-bus":
> 
> http://developer.novell.com/wiki/index.php/Virtual-bus
> 
> > As you may recall, I have been using configfs extensively for the 3.x
> > generic target core infrastructure and iSCSI fabric modules living in
> > lio-core-2.6.git/drivers/target/target_core_configfs.c and
> > lio-core-2.6.git/drivers/lio-core/iscsi_target_config.c, and have found
> > it to be extraordinarily useful for the purposes of implementing a
> > complex kernel-level target mode stack that is expected to manage
> > massive amounts of metadata, allow for real-time configuration, share
> > data structures (eg: SCSI Target Ports) between other kernel fabric
> > modules, and manage the entire set of fabrics using only interpreted
> > userspace code.
> 
> I concur.  Configfs provided me a very natural model to express
> resource-containers and their respective virtual-device objects.
> 
> > 
> > Using the 10000 1:1 mapped TCM Virtual HBA+FILEIO LUNs <-> iSCSI Target
> > Endpoints inside of a KVM guest (from the results posted in May with
> > IOMMU-aware 10 Gb on modern Nehalem hardware, see
> > http://linux-iscsi.org/index.php/KVM-LIO-Target), we have been able to
> > dump the entire running target fabric configfs hierarchy to a single
> > struct file on a KVM guest root device using Python code in ~30 seconds
> > for those 10000 active iSCSI endpoints.  In configfs terms, this means:
> > 
> > *) 7 configfs groups (directories), ~50 configfs attributes (files) per
> > Virtual HBA+FILEIO LUN
> > *) 15 configfs groups (directories), ~60 configfs attributes (files) per
> > iSCSI fabric Endpoint
> > 
> > Which comes out to a total of ~220,000 groups and ~1,100,000 attributes
> > as active configfs objects (10000 endpoints x ~22 groups and ~110
> > attributes each) living in the configfs_dir_cache that are being dumped
> > inside of a single KVM guest instance, including symlinks between the
> > fabric modules to establish the SCSI ports containing the complete set
> > of SPC-4 and RFC-3720 features, et al.
> > 
> > Also on the kernel <-> user API compatibility side, I have found the
> > 3.x configfs-enabled code advantageous over the LIO 2.9 code (which
> > used an ioctl for everything) because it allows us to handle backwards
> > compat for future versions without using any userspace C code, which
> > IMHO makes maintaining userspace packages for complex kernel stacks
> > with massive amounts of metadata + real-time configuration
> > considerations much simpler.  No longer having ioctl compatibility
> > issues between LIO versions as the structures passed via ioctl change,
> > and being able to handle backwards compat against configfs layout
> > changes with small amounts of interpreted code, has made maintaining
> > the kernel <-> user API that much easier for me.
> > 
> > Anyways, I thought these numbers might be useful to the discussion as
> > it relates to potential uses of configfs on the KVM host or other
> > projects where it really makes sense, and/or to improving the upstream
> > implementation so that other users (like myself) can benefit from
> > improvements to configfs.
> > 
> > Many thanks for your most valuable of time,
> 
> Thank you for the explanation of your setup.
> 
> Configfs mostly works for the vbus project "as is".  As Avi pointed out,
> I currently have a limitation w.r.t. perms.  Forgive me if what I am
> about to say is overly simplistic.  It's been quite a few months since I
> worked on the configfs portion of the code, so my details may be fuzzy.
> 
> What it boiled down to is that I need a way to better manage perms

I have not looked at implementing this personally, so I am not sure how
this would look in fs/configfs/ off the top of my head..  Joel, have you
had any thoughts on this..?
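
For those who have not poked around in fs/configfs/, a minimal sketch of
what a module declares today may help frame the question.  The "vbus_dev"
names below are hypothetical, but the structures and fields are the
existing configfs API: an attribute carries only a mode in ca_mode, and
there is no field anywhere for a default owner or group, which is exactly
the gap being discussed.

	/*
	 * Hypothetical "vbus_dev" example of a configfs attribute
	 * declaration.  Only a mode can be expressed; ownership of the
	 * resulting file is fixed at root:root by the filesystem.
	 */
	#include <linux/configfs.h>
	#include <linux/stat.h>

	static struct configfs_attribute vbus_dev_attr_enabled = {
		.ca_owner = THIS_MODULE,
		.ca_name  = "enabled",
		.ca_mode  = S_IRUGO | S_IWUSR,	/* mode only, no uid/gid */
	};

	static struct configfs_attribute *vbus_dev_attrs[] = {
		&vbus_dev_attr_enabled,
		NULL,
	};

	static struct config_item_type vbus_dev_type = {
		.ct_owner = THIS_MODULE,
		.ct_attrs = vbus_dev_attrs,
		/* .ct_item_ops supplies show_attribute()/store_attribute() */
	};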

>  (and to
> be able to do it across both sysfs and configfs would be ideal).
> 

I had coded up a patch last year to allow configfs to access sysfs
symlinks in the context of target_core_mod storage object (Linux/SCSI,
Linux/Block, Linux/FILEIO) registration, which did work but ended up not
really making sense and was (thankfully) rejected by GregKH; more of that
discussion here:

http://linux.derkeiler.com/Mailing-Lists/Kernel/2008-10/msg06559.html

I am not sure if the sharing of permissions between sysfs and configfs
would run into the same types of limitations as the above..
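
To make the mkdir(2) angle concrete: directory creation in configfs is
driven entirely by userspace, with fs/configfs/dir.c translating a
mkdir(2) under the subsystem into the module's ->make_group() callback.
The VFS permission check against the parent directory runs before the
callback ever does, so with parent directories owned root:root it is
always root doing the creating.  A minimal sketch, again with
hypothetical "vbus" names (and reusing vbus_dev_type from the sketch
above):

	#include <linux/configfs.h>
	#include <linux/slab.h>

	struct vbus_device {
		struct config_group group;
		/* device state would live here */
	};

	/* Called by configfs when userspace does mkdir(2) in our group. */
	static struct config_group *vbus_make_group(struct config_group *parent,
						    const char *name)
	{
		struct vbus_device *dev;

		dev = kzalloc(sizeof(*dev), GFP_KERNEL);
		if (!dev)
			return NULL;

		config_group_init_type_name(&dev->group, name, &vbus_dev_type);
		return &dev->group;
	}

	static struct configfs_group_operations vbus_group_ops = {
		.make_group = vbus_make_group,
	};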

> For instance, I would like to be able to assign groups to configfs
> directories, like /config/vbus/devices, such that
> 
> mkdir /config/vbus/devices/foo
> 
> would not require root if that GID was permitted.
> 
> Are there ways to do this (now, or in upcoming releases)?  If not, I may
> be interested in helping to add this feature, so please advise how best
> to achieve this.
> 

Not that I am aware of.  However, I think this would be useful for
generic configfs: user/group permissions on configfs groups/dirs and
attributes/items would be quite useful for the LIO 3.x configfs-enabled
generic target engine as well.
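
To sketch what such an extension might look like (purely hypothetical;
nothing like this exists in mainline configfs today), a subsystem could
declare default ownership for the directories configfs creates, so that
mkdir(2) by members of the named group passes the VFS permission check
without root:

	/* Hypothetical extension; not part of the configfs API. */
	struct configfs_default_perms {
		uid_t	uid;		/* default directory owner */
		gid_t	gid;		/* group allowed to mkdir/rmdir */
		umode_t	mode;		/* e.g. 0775 */
	};

It may also be worth testing how far plain chown/chmod on the parent
directory already goes, since configfs does implement setattr and
remembers the result; whether a group-writable parent then lets group
members' mkdir(2) reach ->make_group() is the open question.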

Many thanks for your most valuable of time,

--nab

> Kind Regards,
> -Greg
> 


