Date:	Tue, 8 Mar 2011 21:02:38 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	anthony@...emonkey.ws, arnd@...db.de, avi@...hat.com,
	davem@...emloft.net, eric.dumazet@...il.com, horms@...ge.net.au,
	kvm@...r.kernel.org, netdev@...r.kernel.org, rusty@...tcorp.com.au
Subject: Re: [PATCH 2/3] [RFC] Changes for MQ virtio-net

"Michael S. Tsirkin" <mst@...hat.com> wrote on 03/02/2011 03:36:00 PM:

Sorry for the delayed response; I have been sick for the last few days.
I am responding to both of your posts here.

> > Both virtio-net and vhost need some check to make sure very
> > high values are not passed by userspace. Is this not required?
>
> Whatever we stick in the header is effectively part of the
> host/guest interface. Are you sure we'll never want
> more than 16 VQs? This value does not seem that high.

OK, so even constants cannot change?  Given that, should I remove all
checks and use kcalloc?
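
To be concrete, what I currently have is roughly the following (a sketch
only; the macro, struct and helper names are made up for illustration and
are not the actual patch code):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/virtio.h>

/* Assumed cap on the value read from the config header. */
#define VIRTNET_MAX_QUEUE_PAIRS	16

struct virtnet_queues {
	struct virtqueue **txqs;
	struct virtqueue **rxqs;
};

static int alloc_queue_arrays(struct virtnet_queues *q, u16 num_queue_pairs)
{
	/* Reject anything outside what the host/guest interface allows. */
	if (!num_queue_pairs || num_queue_pairs > VIRTNET_MAX_QUEUE_PAIRS)
		return -EINVAL;

	/* kcalloc() zeroes the arrays and checks n * size for overflow. */
	q->txqs = kcalloc(num_queue_pairs, sizeof(*q->txqs), GFP_KERNEL);
	q->rxqs = kcalloc(num_queue_pairs, sizeof(*q->rxqs), GFP_KERNEL);
	if (!q->txqs || !q->rxqs) {
		kfree(q->txqs);		/* kfree(NULL) is a no-op */
		kfree(q->rxqs);
		return -ENOMEM;
	}
	return 0;
}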

> > OK, so virtio_net_config has num_queue_pairs, and this gets converted to
> > numtxqs in virtnet_info?
>
> Or put num_queue_pairs in virtnet_info too.

For virtnet_info, having numtxqs is easier, since all the code that loops
over queues needs only 'numtxqs'.
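
In other words, schematically (not the real struct layouts; both real
structs have many more fields):

#include <linux/types.h>

/* Schematic only. */
struct config_view {		/* what the device advertises */
	__u16 num_queue_pairs;
};

struct driver_view {		/* what virtnet_info keeps */
	u16 numtxqs;		/* same value, used directly by every loop */
};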

> > Also, vhost has some
> > code that processes tx first before rx (e.g. vhost_net_stop/flush),
>
> No idea why I did it this way. I don't think it matters.
>
> > so this approach seemed helpful.
> > I am OK either way, what do you
> > suggest?
>
> We get less code generated, but also less flexibility.
> I am not sure; I'll play around with the code. For now
> let's keep it as is.

OK.

> > Yes, it is a waste to have these vectors for tx ints. I initially
> > thought of adding a flag to virtio_device to pass to vp_find_vqs,
> > but it won't work, so a new API is needed. I can work with you on
> > this in the background if you like.
>
> OK. For starters, how about we change find_vqs to get a structure?  Then
> we can easily add flags that tell us that some interrupts are rare.

Yes. OK to work on this outside this patch series, I guess?
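
To make sure we mean the same thing, something like this? (A pure sketch:
the struct and field names below are invented; only vq_callback_t is the
existing type from linux/virtio_config.h.)

#include <linux/types.h>
#include <linux/virtio_config.h>	/* vq_callback_t */

struct virtio_vq_request {
	vq_callback_t *callback;	/* NULL if no callback is wanted */
	const char *name;
	bool rare_interrupt;		/* hint: OK to share an MSI-X vector */
};

/*
 * find_vqs() would then take one array of these instead of the parallel
 * callbacks[]/names[] arrays, roughly:
 *
 *	int (*find_vqs)(struct virtio_device *vdev, unsigned nvqs,
 *			struct virtqueue *vqs[],
 *			struct virtio_vq_request req[]);
 */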

> > vq's are matched between qemu, virtio-net and vhost. Isn't some check
> > required that userspace has not passed a bad value?
>
>
> For virtio, I'm not too concerned: qemu can already easily
> crash the guest :)
> For vhost yes, but I'm concerned that even with 16 VQs we are
> already consuming a lot of resources. I would be happier
> if we had a file descriptor per VQ pair in some way.
> Then the amount of memory userspace can use up is
> limited by the # of file descriptors.

I will start working on this approach this week and see how it goes.
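
Roughly, I take this to mean one /dev/vhost-net fd per TX/RX pair opened
from userspace, e.g. (illustrative sketch only; qemu's actual plumbing
will look different):

#include <fcntl.h>
#include <unistd.h>

/* One vhost-net fd per queue pair, so the process fd limit also bounds
 * how much vhost state userspace can set up. */
static int open_vhost_fds(int *fds, int nr_pairs)
{
	int i;

	for (i = 0; i < nr_pairs; i++) {
		fds[i] = open("/dev/vhost-net", O_RDWR);
		if (fds[i] < 0) {
			while (--i >= 0)
				close(fds[i]);
			return -1;
		}
	}
	return 0;
}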

> > OK, so define free_unused_bufs() as:
> >
> > static void free_unused_bufs(struct virtnet_info *vi,
> >                              struct virtqueue *svq,
> >                              struct virtqueue *rvq)
> > {
> > 	/* Use svq and rvq with the remaining code unchanged */
> > }
>
> Not sure I understand. I am just suggesting
> adding symmetrical functions like init/cleanup,
> alloc/free, etc., instead of adding stuff to random
> functions that just happen to be called at the right time.

OK, I will clean up this part in the next revision.
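
That is, I will aim for something shaped like this (names invented, bodies
reduced to comments):

struct virtnet_info;	/* defined in drivers/net/virtio_net.c */

/* Every setup step gets a matching cleanup called from the same level. */
static int virtnet_setup_vqs(struct virtnet_info *vi)
{
	/* allocate the per-queue arrays, then call find_vqs() */
	return 0;
}

static void virtnet_cleanup_vqs(struct virtnet_info *vi)
{
	/* free unused buffers on each vq, then free the per-queue arrays */
}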

> > I was not sure what the best way is - a sysctl parameter? Or should the
> > maximum depend on the number of host CPUs? But that results in too many
> > threads, e.g. if I have 16 CPUs and 16 txqs.
>
> I guess the question is, wouldn't # of threads == # of vqs work best?
> If we process stuff on a single CPU, let's make it pass through
> a single VQ.
> And to do this, we could simply open multiple vhost fds without
> changing vhost at all.
>
> Would this work well?
>
> > > > -	enum vhost_net_poll_state tx_poll_state;
> > > > +	enum vhost_net_poll_state *tx_poll_state;
> > >
> > > another array?
> >
> > Yes... I am also allocating twice the space that is required,
> > to make its usage simple.
>
> Where's the allocation? Couldn't find it.

vhost_setup_vqs() (in net.c) allocates it based on nvqs, though numtxqs
would be enough.
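
I will size it by numtxqs in the next revision, roughly (a sketch against
the RFC's fields, not upstream code; the helper name is invented):

/* tx_poll_state here is the per-TX-vq array added by this patch. */
static int alloc_tx_poll_state(struct vhost_net *n, int numtxqs)
{
	n->tx_poll_state = kcalloc(numtxqs, sizeof(*n->tx_poll_state),
				   GFP_KERNEL);
	return n->tx_poll_state ? 0 : -ENOMEM;
}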

Thanks,

- KK
