Message-ID: <20090812173104.GB29966@redhat.com>
Date: Wed, 12 Aug 2009 20:31:04 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: "Ira W. Snyder" <iws@...o.caltech.edu>
Cc: Arnd Bergmann <arnd@...db.de>,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] vhost_net: a kernel-level virtio server
On Wed, Aug 12, 2009 at 10:19:22AM -0700, Ira W. Snyder wrote:
> On Wed, Aug 12, 2009 at 07:03:22PM +0200, Arnd Bergmann wrote:
> > On Monday 10 August 2009, Michael S. Tsirkin wrote:
> >
> > > +struct workqueue_struct *vhost_workqueue;
> >
> > [nitpicking] This could be static.
> >
> > > +/* The virtqueue structure describes a queue attached to a device. */
> > > +struct vhost_virtqueue {
> > > + struct vhost_dev *dev;
> > > +
> > > + /* The actual ring of buffers. */
> > > + struct mutex mutex;
> > > + unsigned int num;
> > > + struct vring_desc __user *desc;
> > > + struct vring_avail __user *avail;
> > > + struct vring_used __user *used;
> > > + struct file *kick;
> > > + struct file *call;
> > > + struct file *error;
> > > + struct eventfd_ctx *call_ctx;
> > > + struct eventfd_ctx *error_ctx;
> > > +
> > > + struct vhost_poll poll;
> > > +
> > > + /* The routine to call when the Guest pings us, or timeout. */
> > > + work_func_t handle_kick;
> > > +
> > > + /* Last available index we saw. */
> > > + u16 last_avail_idx;
> > > +
> > > + /* Last index we used. */
> > > + u16 last_used_idx;
> > > +
> > > + /* Outstanding buffers */
> > > + unsigned int inflight;
> > > +
> > > + /* Is this blocked? */
> > > + bool blocked;
> > > +
> > > + struct iovec iov[VHOST_NET_MAX_SG];
> > > +
> > > +} ____cacheline_aligned;
> >
> > We discussed this before, and I still think this could be directly derived
> > from struct virtqueue, in the same way that vring_virtqueue is derived from
> > struct virtqueue. That would make it possible for simple device drivers
> > to use the same driver in both host and guest, similar to how Ira Snyder
> > used virtqueues to make virtio_net run between two hosts running the
> > same code [1].
> >
> > Ideally, I guess you should be able to even make virtio_net work in the
> > host if you do that, but that could bring other complexities.
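For reference, the derivation pattern in question mirrors how
vring_virtqueue wraps struct virtqueue in drivers/virtio/virtio_ring.c:
embed the generic structure and recover the container with
container_of(). A minimal sketch with hypothetical names, not code
from either patch:

	#include <linux/kernel.h>
	#include <linux/virtio.h>

	struct vhost_virtqueue {
		struct virtqueue vq;	/* generic part, usable by common drivers */
		/* ... host-private fields as in the patch above ... */
	};

	static inline struct vhost_virtqueue *to_vhost_vq(struct virtqueue *vq)
	{
		return container_of(vq, struct vhost_virtqueue, vq);
	}

Code written against struct virtqueue could then run on either side,
with only the backing implementation differing.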
>
> I have no comments about the vhost code itself, I haven't reviewed it.
>
> It might be interesting to try using a virtio-net in the host kernel to
> communicate with the virtio-net running in the guest kernel. The lack of
> a management interface is the biggest problem you will face (setting MAC
> addresses, negotiating features, etc. doesn't work intuitively).
That was one of the reasons I decided to move most of the code out to
userspace. My kernel driver only handles the datapath,
so it's much smaller than virtio_net.
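To sketch that split (a hypothetical structure mirroring the fields of
the vhost_virtqueue quoted above, not this patch's actual interface):

	/* Userspace keeps the control plane: feature negotiation, MAC
	 * addresses, ring placement. The kernel gets only what the
	 * datapath needs. */
	struct vhost_datapath_config {		/* hypothetical */
		int kick_fd;		/* eventfd: guest notifies host */
		int call_fd;		/* eventfd: host interrupts guest */
		void __user *desc;	/* descriptor ring in guest memory */
		void __user *avail;	/* available ring */
		void __user *used;	/* used ring */
	};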
> Getting
> the network interfaces talking is relatively easy.
>
> Ira
Tried this, but:
- guest memory isn't pinned, so the host needs copy_to_user/copy_from_user
  to access it, and the resulting errors need to be handled in a sane way
- used/available ring roles are reversed between host and guest
  (see the sketch below)
- kick/interrupt roles are reversed as well

So most of the code then looks like

	if (host) {
		...
	} else {
		...
	}
	return;

The only common part is walking the descriptor list,
but that's about 10 lines of code.
At which point it's better to keep the host and guest code separate, IMO.
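To make the first two points concrete, here is an illustrative sketch
(mine, not code from the patch): the guest writes the avail index with
a plain store, while the host must read the same field from unpinned
userspace memory and handle faults:

	#include <linux/uaccess.h>
	#include <linux/virtio_ring.h>

	/* Guest side: publishes buffers; the ring is ordinary kernel memory. */
	static void guest_publish(struct vring *vring, u16 new_idx)
	{
		vring->avail->idx = new_idx;	/* direct store, cannot fault */
	}

	/* Host side: consumes buffers; the ring lives in guest/user memory. */
	static int host_fetch_avail_idx(struct vring_avail __user *avail, u16 *idx)
	{
		if (get_user(*idx, &avail->idx))
			return -EFAULT;		/* access can fault; handle it */
		return 0;
	}

Every ring access needs this kind of split, which is where the
if (host) / else structure above comes from.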
--
MST
--